I0109 12:56:13.894371 8 e2e.go:243] Starting e2e run "cd08ceec-a962-4738-b750-0c49299814ab" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1578574572 - Will randomize all specs
Will run 215 of 4412 specs

Jan 9 12:56:14.137: INFO: >>> kubeConfig: /root/.kube/config
Jan 9 12:56:14.140: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 9 12:56:14.175: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 9 12:56:14.213: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 9 12:56:14.213: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 9 12:56:14.213: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 9 12:56:14.225: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 9 12:56:14.225: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 9 12:56:14.225: INFO: e2e test version: v1.15.7
Jan 9 12:56:14.227: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 12:56:14.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Jan 9 12:56:14.323: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 9 12:56:14.403: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4e13bfb9-a841-4ac0-ba50-07c008e7616d" in namespace "projected-5391" to be "success or failure"
Jan 9 12:56:14.423: INFO: Pod "downwardapi-volume-4e13bfb9-a841-4ac0-ba50-07c008e7616d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.794878ms
Jan 9 12:56:16.433: INFO: Pod "downwardapi-volume-4e13bfb9-a841-4ac0-ba50-07c008e7616d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029664485s
Jan 9 12:56:18.466: INFO: Pod "downwardapi-volume-4e13bfb9-a841-4ac0-ba50-07c008e7616d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062639642s
Jan 9 12:56:20.475: INFO: Pod "downwardapi-volume-4e13bfb9-a841-4ac0-ba50-07c008e7616d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071873485s
Jan 9 12:56:22.488: INFO: Pod "downwardapi-volume-4e13bfb9-a841-4ac0-ba50-07c008e7616d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084703183s
Jan 9 12:56:24.509: INFO: Pod "downwardapi-volume-4e13bfb9-a841-4ac0-ba50-07c008e7616d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.105126894s
STEP: Saw pod success
Jan 9 12:56:24.509: INFO: Pod "downwardapi-volume-4e13bfb9-a841-4ac0-ba50-07c008e7616d" satisfied condition "success or failure"
Jan 9 12:56:24.517: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4e13bfb9-a841-4ac0-ba50-07c008e7616d container client-container:
STEP: delete the pod
Jan 9 12:56:24.593: INFO: Waiting for pod downwardapi-volume-4e13bfb9-a841-4ac0-ba50-07c008e7616d to disappear
Jan 9 12:56:24.607: INFO: Pod downwardapi-volume-4e13bfb9-a841-4ac0-ba50-07c008e7616d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 12:56:24.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5391" for this suite.
Jan 9 12:56:30.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 12:56:30.793: INFO: namespace projected-5391 deletion completed in 6.174191992s

• [SLOW TEST:16.565 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment
  should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 12:56:30.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 9 12:56:30.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-1874'
Jan 9 12:56:33.687: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 9 12:56:33.687: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Jan 9 12:56:35.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1874'
Jan 9 12:56:35.999: INFO: stderr: ""
Jan 9 12:56:35.999: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 12:56:35.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1874" for this suite.
Jan 9 12:57:00.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 12:57:00.193: INFO: namespace kubectl-1874 deletion completed in 24.154641162s

• [SLOW TEST:29.400 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default
  should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 12:57:00.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 9 12:57:00.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8856'
Jan 9 12:57:00.774: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 9 12:57:00.774: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Jan 9 12:57:00.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8856'
Jan 9 12:57:01.111: INFO: stderr: ""
Jan 9 12:57:01.111: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 12:57:01.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8856" for this suite.
Jan 9 12:57:23.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 12:57:23.335: INFO: namespace kubectl-8856 deletion completed in 22.190925372s

• [SLOW TEST:23.141 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 12:57:23.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 9 12:57:23.420: INFO: Waiting up to 5m0s for pod "pod-25947503-92dd-4e22-a965-851ef7ccf3ee" in namespace "emptydir-2558" to be "success or failure"
Jan 9 12:57:23.457: INFO: Pod "pod-25947503-92dd-4e22-a965-851ef7ccf3ee": Phase="Pending", Reason="", readiness=false. Elapsed: 37.092889ms
Jan 9 12:57:25.469: INFO: Pod "pod-25947503-92dd-4e22-a965-851ef7ccf3ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049500206s
Jan 9 12:57:27.481: INFO: Pod "pod-25947503-92dd-4e22-a965-851ef7ccf3ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061180456s
Jan 9 12:57:29.492: INFO: Pod "pod-25947503-92dd-4e22-a965-851ef7ccf3ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072430739s
Jan 9 12:57:31.503: INFO: Pod "pod-25947503-92dd-4e22-a965-851ef7ccf3ee": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08327104s
Jan 9 12:57:33.512: INFO: Pod "pod-25947503-92dd-4e22-a965-851ef7ccf3ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.091835866s
STEP: Saw pod success
Jan 9 12:57:33.512: INFO: Pod "pod-25947503-92dd-4e22-a965-851ef7ccf3ee" satisfied condition "success or failure"
Jan 9 12:57:33.516: INFO: Trying to get logs from node iruya-node pod pod-25947503-92dd-4e22-a965-851ef7ccf3ee container test-container:
STEP: delete the pod
Jan 9 12:57:33.593: INFO: Waiting for pod pod-25947503-92dd-4e22-a965-851ef7ccf3ee to disappear
Jan 9 12:57:33.602: INFO: Pod pod-25947503-92dd-4e22-a965-851ef7ccf3ee no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 12:57:33.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2558" for this suite.
Jan 9 12:57:41.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 12:57:41.859: INFO: namespace emptydir-2558 deletion completed in 8.241467408s

• [SLOW TEST:18.524 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 12:57:41.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-d83ff467-91b7-4944-85c7-076552b1eac5 in namespace container-probe-1745
Jan 9 12:57:52.145: INFO: Started pod liveness-d83ff467-91b7-4944-85c7-076552b1eac5 in namespace container-probe-1745
STEP: checking the pod's current state and verifying that restartCount is present
Jan 9 12:57:52.151: INFO: Initial restart count of pod liveness-d83ff467-91b7-4944-85c7-076552b1eac5 is 0
Jan 9 12:58:14.333: INFO: Restart count of pod container-probe-1745/liveness-d83ff467-91b7-4944-85c7-076552b1eac5 is now 1 (22.181597409s elapsed)
Jan 9 12:58:34.771: INFO: Restart count of pod container-probe-1745/liveness-d83ff467-91b7-4944-85c7-076552b1eac5 is now 2 (42.619910931s elapsed)
Jan 9 12:58:54.990: INFO: Restart count of pod container-probe-1745/liveness-d83ff467-91b7-4944-85c7-076552b1eac5 is now 3 (1m2.838938816s elapsed)
Jan 9 12:59:13.085: INFO: Restart count of pod container-probe-1745/liveness-d83ff467-91b7-4944-85c7-076552b1eac5 is now 4 (1m20.933726994s elapsed)
Jan 9 13:00:23.438: INFO: Restart count of pod container-probe-1745/liveness-d83ff467-91b7-4944-85c7-076552b1eac5 is now 5 (2m31.286477022s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:00:23.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1745" for this suite.
Jan 9 13:00:29.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:00:29.621: INFO: namespace container-probe-1745 deletion completed in 6.151802414s

• [SLOW TEST:167.762 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:00:29.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 9 13:00:47.947: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 9 13:00:47.958: INFO: Pod pod-with-poststart-http-hook still exists
Jan 9 13:00:49.958: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 9 13:00:49.976: INFO: Pod pod-with-poststart-http-hook still exists
Jan 9 13:00:51.958: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 9 13:00:51.969: INFO: Pod pod-with-poststart-http-hook still exists
Jan 9 13:00:53.959: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 9 13:00:53.984: INFO: Pod pod-with-poststart-http-hook still exists
Jan 9 13:00:55.958: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 9 13:00:55.971: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:00:55.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-796" for this suite.
Jan 9 13:01:18.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:01:18.153: INFO: namespace container-lifecycle-hook-796 deletion completed in 22.174411597s

• [SLOW TEST:48.532 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:01:18.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 9 13:01:18.229: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 9 13:01:18.281: INFO: Waiting for terminating namespaces to be deleted...
Jan 9 13:01:18.285: INFO: Logging pods the kubelet thinks is on node iruya-node before test
Jan 9 13:01:18.296: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 9 13:01:18.296: INFO: Container weave ready: true, restart count 0
Jan 9 13:01:18.296: INFO: Container weave-npc ready: true, restart count 0
Jan 9 13:01:18.296: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan 9 13:01:18.296: INFO: Container kube-proxy ready: true, restart count 0
Jan 9 13:01:18.296: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Jan 9 13:01:18.314: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan 9 13:01:18.314: INFO: Container kube-apiserver ready: true, restart count 0
Jan 9 13:01:18.314: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan 9 13:01:18.314: INFO: Container kube-scheduler ready: true, restart count 12
Jan 9 13:01:18.314: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 9 13:01:18.314: INFO: Container coredns ready: true, restart count 0
Jan 9 13:01:18.314: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan 9 13:01:18.314: INFO: Container etcd ready: true, restart count 0
Jan 9 13:01:18.314: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 9 13:01:18.314: INFO: Container weave ready: true, restart count 0
Jan 9 13:01:18.314: INFO: Container weave-npc ready: true, restart count 0
Jan 9 13:01:18.314: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 9 13:01:18.314: INFO: Container coredns ready: true, restart count 0
Jan 9 13:01:18.314: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan 9 13:01:18.314: INFO: Container kube-controller-manager ready: true, restart count 18
Jan 9 13:01:18.314: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan 9 13:01:18.314: INFO: Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Jan 9 13:01:18.449: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 9 13:01:18.449: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 9 13:01:18.449: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan 9 13:01:18.449: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Jan 9 13:01:18.449: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Jan 9 13:01:18.449: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan 9 13:01:18.449: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Jan 9 13:01:18.449: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 9 13:01:18.449: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Jan 9 13:01:18.449: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-6804b55e-44a5-42ab-9c72-7d667646912c.15e8397d052e5e2d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-993/filler-pod-6804b55e-44a5-42ab-9c72-7d667646912c to iruya-node]
STEP: Considering event: Type = [Normal], Name = [filler-pod-6804b55e-44a5-42ab-9c72-7d667646912c.15e8397e7e119a63], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-6804b55e-44a5-42ab-9c72-7d667646912c.15e8397f780cb898], Reason = [Created], Message = [Created container filler-pod-6804b55e-44a5-42ab-9c72-7d667646912c]
STEP: Considering event: Type = [Normal], Name = [filler-pod-6804b55e-44a5-42ab-9c72-7d667646912c.15e8397f9d9a79dc], Reason = [Started], Message = [Started container filler-pod-6804b55e-44a5-42ab-9c72-7d667646912c]
STEP: Considering event: Type = [Normal], Name = [filler-pod-97bf7276-a95a-4f69-8a83-ccbb72acafa8.15e8397d052cb5df], Reason = [Scheduled], Message = [Successfully assigned sched-pred-993/filler-pod-97bf7276-a95a-4f69-8a83-ccbb72acafa8 to iruya-server-sfge57q7djm7]
STEP: Considering event: Type = [Normal], Name = [filler-pod-97bf7276-a95a-4f69-8a83-ccbb72acafa8.15e8397e5ce207d6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-97bf7276-a95a-4f69-8a83-ccbb72acafa8.15e8397f5ef122ce], Reason = [Created], Message = [Created container filler-pod-97bf7276-a95a-4f69-8a83-ccbb72acafa8]
STEP: Considering event: Type = [Normal], Name = [filler-pod-97bf7276-a95a-4f69-8a83-ccbb72acafa8.15e8397f85d47a29], Reason = [Started], Message = [Started container filler-pod-97bf7276-a95a-4f69-8a83-ccbb72acafa8]
STEP: Considering event: Type = [Warning], Name = [additional-pod.15e8397fd2f744f0], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:01:31.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-993" for this suite.
Jan 9 13:01:37.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:01:37.962: INFO: namespace sched-pred-993 deletion completed in 6.214483051s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:19.810 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch
  should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:01:37.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan 9 13:01:40.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2871'
Jan 9 13:01:40.602: INFO: stderr: ""
Jan 9 13:01:40.602: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 9 13:01:42.983: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 13:01:42.983: INFO: Found 0 / 1
Jan 9 13:01:43.636: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 13:01:43.636: INFO: Found 0 / 1
Jan 9 13:01:44.614: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 13:01:44.615: INFO: Found 0 / 1
Jan 9 13:01:46.039: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 13:01:46.039: INFO: Found 0 / 1
Jan 9 13:01:46.660: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 13:01:46.660: INFO: Found 0 / 1
Jan 9 13:01:47.610: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 13:01:47.610: INFO: Found 0 / 1
Jan 9 13:01:48.624: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 13:01:48.624: INFO: Found 0 / 1
Jan 9 13:01:49.612: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 13:01:49.612: INFO: Found 0 / 1
Jan 9 13:01:50.619: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 13:01:50.619: INFO: Found 0 / 1
Jan 9 13:01:51.663: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 13:01:51.663: INFO: Found 0 / 1
Jan 9 13:01:52.620: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 13:01:52.620: INFO: Found 0 / 1
Jan 9 13:01:53.639: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 13:01:53.639: INFO: Found 0 / 1
Jan 9 13:01:54.616: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 13:01:54.616: INFO: Found 0 / 1
Jan 9 13:01:55.634: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 13:01:55.634: INFO: Found 1 / 1
Jan 9 13:01:55.634: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Jan 9 13:01:55.641: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 13:01:55.641: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jan 9 13:01:55.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-2x7sz --namespace=kubectl-2871 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 9 13:01:55.839: INFO: stderr: ""
Jan 9 13:01:55.839: INFO: stdout: "pod/redis-master-2x7sz patched\n"
STEP: checking annotations
Jan 9 13:01:55.877: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 13:01:55.877: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:01:55.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2871" for this suite.
Jan 9 13:02:19.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:02:20.092: INFO: namespace kubectl-2871 deletion completed in 24.207533269s
• [SLOW TEST:42.128 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl patch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:02:20.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-f08fbe7f-9adb-4291-a2fb-791c5c51f59e
STEP: Creating a pod to test consume configMaps
Jan 9 13:02:20.334: INFO: Waiting up to 5m0s for pod "pod-configmaps-dc795ae8-221e-4aea-941e-8cca3671bc91" in namespace "configmap-6364" to be "success or failure"
Jan 9 13:02:20.351: INFO: Pod "pod-configmaps-dc795ae8-221e-4aea-941e-8cca3671bc91": Phase="Pending", Reason="", readiness=false. Elapsed: 16.917174ms
Jan 9 13:02:22.362: INFO: Pod "pod-configmaps-dc795ae8-221e-4aea-941e-8cca3671bc91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028545597s
Jan 9 13:02:24.383: INFO: Pod "pod-configmaps-dc795ae8-221e-4aea-941e-8cca3671bc91": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049413328s
Jan 9 13:02:26.392: INFO: Pod "pod-configmaps-dc795ae8-221e-4aea-941e-8cca3671bc91": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058548517s
Jan 9 13:02:28.404: INFO: Pod "pod-configmaps-dc795ae8-221e-4aea-941e-8cca3671bc91": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069797992s
Jan 9 13:02:30.416: INFO: Pod "pod-configmaps-dc795ae8-221e-4aea-941e-8cca3671bc91": Phase="Pending", Reason="", readiness=false. Elapsed: 10.082523184s
Jan 9 13:02:32.431: INFO: Pod "pod-configmaps-dc795ae8-221e-4aea-941e-8cca3671bc91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.096992015s
STEP: Saw pod success
Jan 9 13:02:32.431: INFO: Pod "pod-configmaps-dc795ae8-221e-4aea-941e-8cca3671bc91" satisfied condition "success or failure"
Jan 9 13:02:32.435: INFO: Trying to get logs from node iruya-node pod pod-configmaps-dc795ae8-221e-4aea-941e-8cca3671bc91 container configmap-volume-test:
STEP: delete the pod
Jan 9 13:02:32.879: INFO: Waiting for pod pod-configmaps-dc795ae8-221e-4aea-941e-8cca3671bc91 to disappear
Jan 9 13:02:32.887: INFO: Pod pod-configmaps-dc795ae8-221e-4aea-941e-8cca3671bc91 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:02:32.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6364" for this suite.
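The "Waiting up to 5m0s for pod ... Elapsed: ..." lines that recur throughout this run come from a poll-until-condition loop with a fixed interval and an overall timeout. A minimal Python sketch of that pattern (illustrative only; the framework itself implements this in Go):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll condition() every `interval` seconds until it returns
    truthy; return the elapsed time, or raise TimeoutError once
    `timeout` seconds have passed."""
    start = clock()
    while True:
        if condition():
            return clock() - start
        if clock() - start >= timeout:
            raise TimeoutError("condition not met within %.0fs" % timeout)
        sleep(interval)

# A pod that reports Pending twice before Succeeded, like the log above:
phases = iter(["Pending", "Pending", "Succeeded"])
elapsed = wait_for(lambda: next(phases) == "Succeeded",
                   timeout=1.0, interval=0.01)
```

The per-iteration "Elapsed" values in the log are just `clock() - start` printed on each poll.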
Jan 9 13:02:38.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:02:39.084: INFO: namespace configmap-6364 deletion completed in 6.184956778s
• [SLOW TEST:18.990 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:02:39.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan 9 13:02:39.196: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:02:55.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-167" for this suite.
Jan 9 13:03:01.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:03:01.845: INFO: namespace pods-167 deletion completed in 6.176971976s
• [SLOW TEST:22.761 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:03:01.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:03:02.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9373" for this suite.
Jan 9 13:03:08.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:03:08.533: INFO: namespace kubelet-test-9373 deletion completed in 6.310185172s
• [SLOW TEST:6.680 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:03:08.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3515.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3515.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3515.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3515.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 9 13:03:22.777: INFO: File wheezy_udp@dns-test-service-3.dns-3515.svc.cluster.local from pod dns-3515/dns-test-4b8aae9f-5944-41d9-94d8-938016a9f854 contains '' instead of 'foo.example.com.'
Jan 9 13:03:22.782: INFO: File jessie_udp@dns-test-service-3.dns-3515.svc.cluster.local from pod dns-3515/dns-test-4b8aae9f-5944-41d9-94d8-938016a9f854 contains '' instead of 'foo.example.com.'
Jan 9 13:03:22.782: INFO: Lookups using dns-3515/dns-test-4b8aae9f-5944-41d9-94d8-938016a9f854 failed for: [wheezy_udp@dns-test-service-3.dns-3515.svc.cluster.local jessie_udp@dns-test-service-3.dns-3515.svc.cluster.local]
Jan 9 13:03:27.852: INFO: DNS probes using dns-test-4b8aae9f-5944-41d9-94d8-938016a9f854 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3515.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3515.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3515.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3515.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 9 13:03:46.085: INFO: File wheezy_udp@dns-test-service-3.dns-3515.svc.cluster.local from pod dns-3515/dns-test-1ea7e7b5-f327-45e7-b94b-2cd978fe9de0 contains '' instead of 'bar.example.com.'
Jan 9 13:03:46.088: INFO: File jessie_udp@dns-test-service-3.dns-3515.svc.cluster.local from pod dns-3515/dns-test-1ea7e7b5-f327-45e7-b94b-2cd978fe9de0 contains '' instead of 'bar.example.com.'
Jan 9 13:03:46.088: INFO: Lookups using dns-3515/dns-test-1ea7e7b5-f327-45e7-b94b-2cd978fe9de0 failed for: [wheezy_udp@dns-test-service-3.dns-3515.svc.cluster.local jessie_udp@dns-test-service-3.dns-3515.svc.cluster.local]
Jan 9 13:03:51.101: INFO: File wheezy_udp@dns-test-service-3.dns-3515.svc.cluster.local from pod dns-3515/dns-test-1ea7e7b5-f327-45e7-b94b-2cd978fe9de0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 9 13:03:51.109: INFO: File jessie_udp@dns-test-service-3.dns-3515.svc.cluster.local from pod dns-3515/dns-test-1ea7e7b5-f327-45e7-b94b-2cd978fe9de0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 9 13:03:51.109: INFO: Lookups using dns-3515/dns-test-1ea7e7b5-f327-45e7-b94b-2cd978fe9de0 failed for: [wheezy_udp@dns-test-service-3.dns-3515.svc.cluster.local jessie_udp@dns-test-service-3.dns-3515.svc.cluster.local]
Jan 9 13:03:56.118: INFO: DNS probes using dns-test-1ea7e7b5-f327-45e7-b94b-2cd978fe9de0 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3515.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3515.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3515.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3515.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 9 13:04:13.002: INFO: File wheezy_udp@dns-test-service-3.dns-3515.svc.cluster.local from pod dns-3515/dns-test-44756168-5139-4335-8a5a-0321facc4571 contains '' instead of '10.105.40.204'
Jan 9 13:04:13.007: INFO: File jessie_udp@dns-test-service-3.dns-3515.svc.cluster.local from pod dns-3515/dns-test-44756168-5139-4335-8a5a-0321facc4571 contains '' instead of '10.105.40.204'
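Each "Lookups ... failed for: [...]" line above is produced by comparing the file each prober pod wrote (via the `dig` loops in the STEP lines) against the currently expected record value, and listing the probes that have not yet converged. A small Python sketch of that comparison (probe names and contents below are illustrative):

```python
def failed_lookups(expected: str, results: dict) -> list:
    """Given the file contents each DNS prober wrote, keyed by probe
    name, return the sorted list of probes whose content does not yet
    match the expected record value."""
    return sorted(name for name, content in results.items()
                  if content.strip() != expected)

# Mid-transition snapshot like the 13:03:51 entries: both probers
# still resolve the old CNAME target after the externalName changed.
stale = failed_lookups("bar.example.com.", {
    "wheezy_udp@dns-test-service-3": "foo.example.com. \n",
    "jessie_udp@dns-test-service-3": "foo.example.com. \n",
})
print(stale)  # ['jessie_udp@dns-test-service-3', 'wheezy_udp@dns-test-service-3']
```

The test simply retries this check every few seconds until the list comes back empty, at which point it logs "DNS probes ... succeeded".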
Jan 9 13:04:13.007: INFO: Lookups using dns-3515/dns-test-44756168-5139-4335-8a5a-0321facc4571 failed for: [wheezy_udp@dns-test-service-3.dns-3515.svc.cluster.local jessie_udp@dns-test-service-3.dns-3515.svc.cluster.local]
Jan 9 13:04:18.050: INFO: DNS probes using dns-test-44756168-5139-4335-8a5a-0321facc4571 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:04:18.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3515" for this suite.
Jan 9 13:04:26.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:04:26.515: INFO: namespace dns-3515 deletion completed in 8.251977767s
• [SLOW TEST:77.982 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:04:26.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 9 13:04:26.664: INFO: Waiting up to 5m0s for pod "pod-d5706c17-a6f4-44cd-bec2-d71df62a4efd" in namespace "emptydir-4676" to be "success or failure"
Jan 9 13:04:26.675: INFO: Pod "pod-d5706c17-a6f4-44cd-bec2-d71df62a4efd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.476409ms
Jan 9 13:04:28.690: INFO: Pod "pod-d5706c17-a6f4-44cd-bec2-d71df62a4efd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025477131s
Jan 9 13:04:30.712: INFO: Pod "pod-d5706c17-a6f4-44cd-bec2-d71df62a4efd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048110803s
Jan 9 13:04:32.761: INFO: Pod "pod-d5706c17-a6f4-44cd-bec2-d71df62a4efd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096516659s
Jan 9 13:04:34.774: INFO: Pod "pod-d5706c17-a6f4-44cd-bec2-d71df62a4efd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109678042s
Jan 9 13:04:36.785: INFO: Pod "pod-d5706c17-a6f4-44cd-bec2-d71df62a4efd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.121019245s
Jan 9 13:04:38.902: INFO: Pod "pod-d5706c17-a6f4-44cd-bec2-d71df62a4efd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.238367836s
STEP: Saw pod success
Jan 9 13:04:38.903: INFO: Pod "pod-d5706c17-a6f4-44cd-bec2-d71df62a4efd" satisfied condition "success or failure"
Jan 9 13:04:38.913: INFO: Trying to get logs from node iruya-node pod pod-d5706c17-a6f4-44cd-bec2-d71df62a4efd container test-container:
STEP: delete the pod
Jan 9 13:04:39.041: INFO: Waiting for pod pod-d5706c17-a6f4-44cd-bec2-d71df62a4efd to disappear
Jan 9 13:04:39.049: INFO: Pod pod-d5706c17-a6f4-44cd-bec2-d71df62a4efd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:04:39.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4676" for this suite.
Jan 9 13:04:45.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:04:45.209: INFO: namespace emptydir-4676 deletion completed in 6.151968374s
• [SLOW TEST:18.693 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:04:45.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 9 13:04:45.297: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:05:02.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6047" for this suite.
Jan 9 13:05:24.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:05:24.427: INFO: namespace init-container-6047 deletion completed in 22.264977431s
• [SLOW TEST:39.218 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:05:24.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-1b1866e2-7423-440f-9480-4422d2eb6b97
STEP: Creating a pod to test consume secrets
Jan 9 13:05:24.576: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b326d890-f635-45c3-8ad8-e560920af6ae" in namespace "projected-3476" to be "success or failure"
Jan 9 13:05:24.581: INFO: Pod "pod-projected-secrets-b326d890-f635-45c3-8ad8-e560920af6ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.338025ms
Jan 9 13:05:26.597: INFO: Pod "pod-projected-secrets-b326d890-f635-45c3-8ad8-e560920af6ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020748794s
Jan 9 13:05:28.621: INFO: Pod "pod-projected-secrets-b326d890-f635-45c3-8ad8-e560920af6ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044355694s
Jan 9 13:05:30.632: INFO: Pod "pod-projected-secrets-b326d890-f635-45c3-8ad8-e560920af6ae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055418492s
Jan 9 13:05:32.647: INFO: Pod "pod-projected-secrets-b326d890-f635-45c3-8ad8-e560920af6ae": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070911265s
Jan 9 13:05:34.662: INFO: Pod "pod-projected-secrets-b326d890-f635-45c3-8ad8-e560920af6ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.085372755s
STEP: Saw pod success
Jan 9 13:05:34.662: INFO: Pod "pod-projected-secrets-b326d890-f635-45c3-8ad8-e560920af6ae" satisfied condition "success or failure"
Jan 9 13:05:34.667: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-b326d890-f635-45c3-8ad8-e560920af6ae container projected-secret-volume-test:
STEP: delete the pod
Jan 9 13:05:34.868: INFO: Waiting for pod pod-projected-secrets-b326d890-f635-45c3-8ad8-e560920af6ae to disappear
Jan 9 13:05:34.879: INFO: Pod pod-projected-secrets-b326d890-f635-45c3-8ad8-e560920af6ae no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:05:34.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3476" for this suite.
Jan 9 13:05:40.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:05:41.024: INFO: namespace projected-3476 deletion completed in 6.135697381s
• [SLOW TEST:16.596 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:05:41.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 9 13:05:41.104: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan 9 13:05:44.153: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:05:45.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9086" for this suite.
Jan 9 13:05:57.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:05:57.970: INFO: namespace replication-controller-9086 deletion completed in 12.261397489s
• [SLOW TEST:16.946 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:05:57.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 9 13:05:58.042: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 9 13:05:58.105: INFO: Waiting for terminating namespaces to be deleted...
Jan 9 13:05:58.112: INFO: Logging pods the kubelet thinks is on node iruya-node before test
Jan 9 13:05:58.124: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 9 13:05:58.124: INFO: Container weave ready: true, restart count 0
Jan 9 13:05:58.124: INFO: Container weave-npc ready: true, restart count 0
Jan 9 13:05:58.124: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan 9 13:05:58.124: INFO: Container kube-proxy ready: true, restart count 0
Jan 9 13:05:58.124: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Jan 9 13:05:58.133: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 9 13:05:58.133: INFO: Container coredns ready: true, restart count 0
Jan 9 13:05:58.133: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan 9 13:05:58.133: INFO: Container etcd ready: true, restart count 0
Jan 9 13:05:58.133: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 9 13:05:58.133: INFO: Container weave ready: true, restart count 0
Jan 9 13:05:58.133: INFO: Container weave-npc ready: true, restart count 0
Jan 9 13:05:58.133: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan 9 13:05:58.133: INFO: Container kube-controller-manager ready: true, restart count 18
Jan 9 13:05:58.133: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan 9 13:05:58.133: INFO: Container kube-proxy ready: true, restart count 0
Jan 9 13:05:58.133: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan 9 13:05:58.133: INFO: Container kube-apiserver ready: true, restart count 0
Jan 9 13:05:58.133: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan 9 13:05:58.133: INFO: Container kube-scheduler ready: true, restart count 12
Jan 9 13:05:58.133: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 9 13:05:58.133: INFO: Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-ef6a65ae-fd48-41cd-a4d6-dfa7fb97ad07 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-ef6a65ae-fd48-41cd-a4d6-dfa7fb97ad07 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-ef6a65ae-fd48-41cd-a4d6-dfa7fb97ad07
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:06:20.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4123" for this suite.
Jan 9 13:06:34.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:06:34.710: INFO: namespace sched-pred-4123 deletion completed in 14.144128786s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:36.739 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:06:34.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan 9 13:06:47.481: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-769 pod-service-account-5ddd4cad-8eaa-4151-8770-837de69bfaa4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan 9 13:06:51.007: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-769 pod-service-account-5ddd4cad-8eaa-4151-8770-837de69bfaa4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan 9 13:06:51.393: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-769 pod-service-account-5ddd4cad-8eaa-4151-8770-837de69bfaa4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:06:51.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-769" for this suite.
Jan 9 13:06:57.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:06:57.997: INFO: namespace svcaccounts-769 deletion completed in 6.12748062s
• [SLOW TEST:23.286 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:06:57.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-f807f108-39d5-411d-85cc-fb464194793f
STEP: Creating a pod to test consume secrets
Jan 9 13:06:58.078: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-012f39ff-c4e3-4169-8b9f-01c5ed01b446" in namespace "projected-3268" to be "success or failure"
Jan 9 13:06:58.082: INFO: Pod "pod-projected-secrets-012f39ff-c4e3-4169-8b9f-01c5ed01b446": Phase="Pending", Reason="", readiness=false. Elapsed: 4.597587ms
Jan 9 13:07:00.098: INFO: Pod "pod-projected-secrets-012f39ff-c4e3-4169-8b9f-01c5ed01b446": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020008154s
Jan 9 13:07:02.163: INFO: Pod "pod-projected-secrets-012f39ff-c4e3-4169-8b9f-01c5ed01b446": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085295647s
Jan 9 13:07:04.174: INFO: Pod "pod-projected-secrets-012f39ff-c4e3-4169-8b9f-01c5ed01b446": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096097587s
Jan 9 13:07:06.250: INFO: Pod "pod-projected-secrets-012f39ff-c4e3-4169-8b9f-01c5ed01b446": Phase="Pending", Reason="", readiness=false. Elapsed: 8.172449646s
Jan 9 13:07:08.261: INFO: Pod "pod-projected-secrets-012f39ff-c4e3-4169-8b9f-01c5ed01b446": Phase="Pending", Reason="", readiness=false. Elapsed: 10.183281339s
Jan 9 13:07:10.269: INFO: Pod "pod-projected-secrets-012f39ff-c4e3-4169-8b9f-01c5ed01b446": Phase="Pending", Reason="", readiness=false. Elapsed: 12.190928808s
Jan 9 13:07:12.302: INFO: Pod "pod-projected-secrets-012f39ff-c4e3-4169-8b9f-01c5ed01b446": Phase="Pending", Reason="", readiness=false. Elapsed: 14.224102063s
Jan 9 13:07:14.313: INFO: Pod "pod-projected-secrets-012f39ff-c4e3-4169-8b9f-01c5ed01b446": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.234868867s
STEP: Saw pod success
Jan 9 13:07:14.313: INFO: Pod "pod-projected-secrets-012f39ff-c4e3-4169-8b9f-01c5ed01b446" satisfied condition "success or failure"
Jan 9 13:07:14.317: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-012f39ff-c4e3-4169-8b9f-01c5ed01b446 container projected-secret-volume-test:
STEP: delete the pod
Jan 9 13:07:14.396: INFO: Waiting for pod pod-projected-secrets-012f39ff-c4e3-4169-8b9f-01c5ed01b446 to disappear
Jan 9 13:07:14.407: INFO: Pod pod-projected-secrets-012f39ff-c4e3-4169-8b9f-01c5ed01b446 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:07:14.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3268" for this suite.
Jan 9 13:07:20.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:07:20.568: INFO: namespace projected-3268 deletion completed in 6.15242929s
• [SLOW TEST:22.571 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:07:20.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 9 13:07:20.693: INFO: Waiting up to 5m0s for pod "pod-0513ea92-6d4b-48cf-ac47-98ce9d0448f4" in namespace "emptydir-7461" to be "success or failure"
Jan 9 13:07:20.702: INFO: Pod "pod-0513ea92-6d4b-48cf-ac47-98ce9d0448f4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.449446ms
Jan 9 13:07:22.716: INFO: Pod "pod-0513ea92-6d4b-48cf-ac47-98ce9d0448f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02224198s
Jan 9 13:07:24.735: INFO: Pod "pod-0513ea92-6d4b-48cf-ac47-98ce9d0448f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04203273s
Jan 9 13:07:26.741: INFO: Pod "pod-0513ea92-6d4b-48cf-ac47-98ce9d0448f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04799226s
Jan 9 13:07:28.781: INFO: Pod "pod-0513ea92-6d4b-48cf-ac47-98ce9d0448f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08779968s
STEP: Saw pod success
Jan 9 13:07:28.781: INFO: Pod "pod-0513ea92-6d4b-48cf-ac47-98ce9d0448f4" satisfied condition "success or failure"
Jan 9 13:07:28.791: INFO: Trying to get logs from node iruya-node pod pod-0513ea92-6d4b-48cf-ac47-98ce9d0448f4 container test-container:
STEP: delete the pod
Jan 9 13:07:29.038: INFO: Waiting for pod pod-0513ea92-6d4b-48cf-ac47-98ce9d0448f4 to disappear
Jan 9 13:07:29.045: INFO: Pod pod-0513ea92-6d4b-48cf-ac47-98ce9d0448f4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:07:29.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7461" for this suite.
Jan 9 13:07:35.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:07:35.236: INFO: namespace emptydir-7461 deletion completed in 6.180039967s
• [SLOW TEST:14.668 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:07:35.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 9 13:07:35.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-245'
Jan 9 13:07:35.592: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 9 13:07:35.592: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan 9 13:07:35.614: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-m2wkl]
Jan 9 13:07:35.614: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-m2wkl" in namespace "kubectl-245" to be "running and ready"
Jan 9 13:07:35.640: INFO: Pod "e2e-test-nginx-rc-m2wkl": Phase="Pending", Reason="", readiness=false. Elapsed: 25.914984ms
Jan 9 13:07:37.652: INFO: Pod "e2e-test-nginx-rc-m2wkl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037725723s
Jan 9 13:07:39.667: INFO: Pod "e2e-test-nginx-rc-m2wkl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052853687s
Jan 9 13:07:41.679: INFO: Pod "e2e-test-nginx-rc-m2wkl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064851389s
Jan 9 13:07:43.693: INFO: Pod "e2e-test-nginx-rc-m2wkl": Phase="Running", Reason="", readiness=true. Elapsed: 8.078515111s
Jan 9 13:07:43.693: INFO: Pod "e2e-test-nginx-rc-m2wkl" satisfied condition "running and ready"
Jan 9 13:07:43.693: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-m2wkl]
Jan 9 13:07:43.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-245'
Jan 9 13:07:43.902: INFO: stderr: ""
Jan 9 13:07:43.903: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Jan 9 13:07:43.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-245'
Jan 9 13:07:44.017: INFO: stderr: ""
Jan 9 13:07:44.017: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:07:44.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-245" for this suite.
Jan 9 13:08:06.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:08:06.176: INFO: namespace kubectl-245 deletion completed in 22.15495795s
• [SLOW TEST:30.939 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:08:06.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1511
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan 9 13:08:06.264: INFO: Found 0 stateful pods, waiting for 3
Jan 9 13:08:16.296: INFO: Found 2 stateful pods, waiting for 3
Jan 9 13:08:26.284: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 9 13:08:26.284: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 9 13:08:26.284: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 9 13:08:36.274: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 9 13:08:36.274: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 9 13:08:36.274: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 9 13:08:36.328: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 9 13:08:46.465: INFO: Updating stateful set ss2
Jan 9 13:08:46.658: INFO: Waiting for Pod statefulset-1511/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan 9 13:08:57.033: INFO: Found 2 stateful pods, waiting for 3
Jan 9 13:09:07.042: INFO: Found 2 stateful pods, waiting for 3
Jan 9 13:09:17.241: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 9 13:09:17.242: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 9 13:09:17.242: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 9 13:09:27.050: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 9 13:09:27.050: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 9 13:09:27.051: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 9 13:09:27.102: INFO: Updating stateful set ss2
Jan 9 13:09:27.205: INFO: Waiting for Pod statefulset-1511/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 9 13:09:37.220: INFO: Waiting for Pod statefulset-1511/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 9 13:09:47.259: INFO: Updating stateful set ss2
Jan 9 13:09:47.358: INFO: Waiting for StatefulSet statefulset-1511/ss2 to complete update
Jan 9 13:09:47.359: INFO: Waiting for Pod statefulset-1511/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 9 13:09:57.377: INFO: Waiting for StatefulSet statefulset-1511/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 9 13:10:07.398: INFO: Deleting all statefulset in ns statefulset-1511
Jan 9 13:10:07.400: INFO: Scaling statefulset ss2 to 0
Jan 9 13:10:37.449: INFO: Waiting for statefulset status.replicas updated to 0
Jan 9 13:10:37.454: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:10:37.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1511" for this suite.
Jan 9 13:10:45.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:10:45.658: INFO: namespace statefulset-1511 deletion completed in 8.16991332s
• [SLOW TEST:159.482 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:10:45.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 9 13:10:45.795: INFO: Waiting up to 5m0s for pod "downward-api-043bf5aa-6d4f-4541-a544-974150c60731" in namespace "downward-api-7118" to be "success or failure"
Jan 9 13:10:45.827: INFO: Pod "downward-api-043bf5aa-6d4f-4541-a544-974150c60731": Phase="Pending", Reason="", readiness=false. Elapsed: 31.177845ms
Jan 9 13:10:47.834: INFO: Pod "downward-api-043bf5aa-6d4f-4541-a544-974150c60731": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038703618s
Jan 9 13:10:49.853: INFO: Pod "downward-api-043bf5aa-6d4f-4541-a544-974150c60731": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057211293s
Jan 9 13:10:51.864: INFO: Pod "downward-api-043bf5aa-6d4f-4541-a544-974150c60731": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068386499s
Jan 9 13:10:53.880: INFO: Pod "downward-api-043bf5aa-6d4f-4541-a544-974150c60731": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084301774s
Jan 9 13:10:55.900: INFO: Pod "downward-api-043bf5aa-6d4f-4541-a544-974150c60731": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.104562173s
STEP: Saw pod success
Jan 9 13:10:55.900: INFO: Pod "downward-api-043bf5aa-6d4f-4541-a544-974150c60731" satisfied condition "success or failure"
Jan 9 13:10:55.905: INFO: Trying to get logs from node iruya-node pod downward-api-043bf5aa-6d4f-4541-a544-974150c60731 container dapi-container:
STEP: delete the pod
Jan 9 13:10:56.093: INFO: Waiting for pod downward-api-043bf5aa-6d4f-4541-a544-974150c60731 to disappear
Jan 9 13:10:56.123: INFO: Pod downward-api-043bf5aa-6d4f-4541-a544-974150c60731 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:10:56.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7118" for this suite.
Jan 9 13:11:02.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:11:02.326: INFO: namespace downward-api-7118 deletion completed in 6.195682751s
• [SLOW TEST:16.667 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:11:02.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-9b9m
STEP: Creating a pod to test atomic-volume-subpath
Jan 9 13:11:02.454: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9b9m" in namespace "subpath-6038" to be "success or failure"
Jan 9 13:11:02.476: INFO: Pod "pod-subpath-test-configmap-9b9m": Phase="Pending", Reason="", readiness=false. Elapsed: 21.841297ms
Jan 9 13:11:04.489: INFO: Pod "pod-subpath-test-configmap-9b9m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034651154s
Jan 9 13:11:06.496: INFO: Pod "pod-subpath-test-configmap-9b9m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041779692s
Jan 9 13:11:08.506: INFO: Pod "pod-subpath-test-configmap-9b9m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051667168s
Jan 9 13:11:10.519: INFO: Pod "pod-subpath-test-configmap-9b9m": Phase="Running", Reason="", readiness=true. Elapsed: 8.064790633s
Jan 9 13:11:12.536: INFO: Pod "pod-subpath-test-configmap-9b9m": Phase="Running", Reason="", readiness=true. Elapsed: 10.081699304s
Jan 9 13:11:14.548: INFO: Pod "pod-subpath-test-configmap-9b9m": Phase="Running", Reason="", readiness=true. Elapsed: 12.093958129s
Jan 9 13:11:16.561: INFO: Pod "pod-subpath-test-configmap-9b9m": Phase="Running", Reason="", readiness=true. Elapsed: 14.106492032s
Jan 9 13:11:18.574: INFO: Pod "pod-subpath-test-configmap-9b9m": Phase="Running", Reason="", readiness=true. Elapsed: 16.119474807s
Jan 9 13:11:20.591: INFO: Pod "pod-subpath-test-configmap-9b9m": Phase="Running", Reason="", readiness=true. Elapsed: 18.136909104s
Jan 9 13:11:22.606: INFO: Pod "pod-subpath-test-configmap-9b9m": Phase="Running", Reason="", readiness=true. Elapsed: 20.151569994s
Jan 9 13:11:24.619: INFO: Pod "pod-subpath-test-configmap-9b9m": Phase="Running", Reason="", readiness=true. Elapsed: 22.164453508s
Jan 9 13:11:26.636: INFO: Pod "pod-subpath-test-configmap-9b9m": Phase="Running", Reason="", readiness=true. Elapsed: 24.181425469s
Jan 9 13:11:28.662: INFO: Pod "pod-subpath-test-configmap-9b9m": Phase="Running", Reason="", readiness=true. Elapsed: 26.207478494s
Jan 9 13:11:30.682: INFO: Pod "pod-subpath-test-configmap-9b9m": Phase="Running", Reason="", readiness=true. Elapsed: 28.228040025s
Jan 9 13:11:33.194: INFO: Pod "pod-subpath-test-configmap-9b9m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.739975155s
STEP: Saw pod success
Jan 9 13:11:33.194: INFO: Pod "pod-subpath-test-configmap-9b9m" satisfied condition "success or failure"
Jan 9 13:11:33.200: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-9b9m container test-container-subpath-configmap-9b9m:
STEP: delete the pod
Jan 9 13:11:33.380: INFO: Waiting for pod pod-subpath-test-configmap-9b9m to disappear
Jan 9 13:11:33.389: INFO: Pod pod-subpath-test-configmap-9b9m no longer exists
STEP: Deleting pod pod-subpath-test-configmap-9b9m
Jan 9 13:11:33.389: INFO: Deleting pod "pod-subpath-test-configmap-9b9m" in namespace "subpath-6038"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:11:33.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6038" for this suite.
Jan 9 13:11:39.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:11:39.540: INFO: namespace subpath-6038 deletion completed in 6.140650595s
• [SLOW TEST:37.214 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:11:39.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Jan 9 13:11:39.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2061 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 9 13:11:55.899: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0109 13:11:52.947587 291 log.go:172] (0xc00013f340) (0xc0006b6a00) Create stream\nI0109 13:11:52.947890 291 log.go:172] (0xc00013f340) (0xc0006b6a00) Stream added, broadcasting: 1\nI0109 13:11:52.983442 291 log.go:172] (0xc00013f340) Reply frame received for 1\nI0109 13:11:52.983582 291 log.go:172] (0xc00013f340) (0xc000611f40) Create stream\nI0109 13:11:52.983600 291 log.go:172] (0xc00013f340) (0xc000611f40) Stream added, broadcasting: 3\nI0109 13:11:52.985646 291 log.go:172] (0xc00013f340) Reply frame received for 3\nI0109 13:11:52.985699 291 log.go:172] (0xc00013f340) (0xc0006b6000) Create stream\nI0109 13:11:52.985735 291 log.go:172] (0xc00013f340) (0xc0006b6000) Stream added, broadcasting: 5\nI0109 13:11:52.990171 291 log.go:172] (0xc00013f340) Reply frame received for 5\nI0109 13:11:52.990349 291 log.go:172] (0xc00013f340) (0xc000028000) Create stream\nI0109 13:11:52.990385 291 log.go:172] (0xc00013f340) (0xc000028000) Stream added, broadcasting: 7\nI0109 13:11:52.992927 291 log.go:172] (0xc00013f340) Reply frame received for 7\nI0109 13:11:52.993329 291 log.go:172] (0xc000611f40) (3) Writing data frame\nI0109 13:11:52.993523 291 log.go:172] (0xc000611f40) (3) Writing data frame\nI0109 13:11:53.012720 291 log.go:172] (0xc00013f340) Data frame received for 5\nI0109 13:11:53.012740 291 log.go:172] (0xc0006b6000) (5) Data frame handling\nI0109 13:11:53.012756 291 log.go:172] (0xc0006b6000) (5) Data frame sent\nI0109 13:11:53.019917 291 log.go:172] (0xc00013f340) Data frame received for 5\nI0109 13:11:53.019934 291 log.go:172] (0xc0006b6000) (5) Data frame handling\nI0109 13:11:53.019942 291 log.go:172] (0xc0006b6000) (5) Data frame sent\nI0109 13:11:55.841496 291 log.go:172] (0xc00013f340) Data frame received for 1\nI0109 13:11:55.841571 291 log.go:172] (0xc0006b6a00) (1) Data frame handling\nI0109 13:11:55.841607 291 log.go:172] (0xc0006b6a00) (1) Data frame sent\nI0109 13:11:55.841650 291 log.go:172] (0xc00013f340) (0xc0006b6a00) Stream removed, broadcasting: 1\nI0109 13:11:55.842859 291 log.go:172] (0xc00013f340) (0xc000028000) Stream removed, broadcasting: 7\nI0109 13:11:55.842974 291 log.go:172] (0xc00013f340) (0xc0006b6000) Stream removed, broadcasting: 5\nI0109 13:11:55.843075 291 log.go:172] (0xc00013f340) (0xc0006b6a00) Stream removed, broadcasting: 1\nI0109 13:11:55.843122 291 log.go:172] (0xc00013f340) (0xc000611f40) Stream removed, broadcasting: 3\nI0109 13:11:55.843180 291 log.go:172] (0xc00013f340) Go away received\nI0109 13:11:55.843731 291 log.go:172] (0xc00013f340) (0xc000611f40) Stream removed, broadcasting: 3\nI0109 13:11:55.843769 291 log.go:172] (0xc00013f340) (0xc0006b6000) Stream removed, broadcasting: 5\nI0109 13:11:55.843784 291 log.go:172] (0xc00013f340) (0xc000028000) Stream removed, broadcasting: 7\n"
Jan 9 13:11:55.899: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:11:57.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2061" for this suite.
Jan 9 13:12:04.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:12:04.128: INFO: namespace kubectl-2061 deletion completed in 6.21405769s

• [SLOW TEST:24.587 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run --rm job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:12:04.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 9 13:12:04.207: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:12:19.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5232" for this suite.
Jan 9 13:12:25.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:12:26.019: INFO: namespace init-container-5232 deletion completed in 6.217228974s

• [SLOW TEST:21.891 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:12:26.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 9 13:12:26.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:12:38.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9298" for this suite.
Jan 9 13:13:30.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:13:30.579: INFO: namespace pods-9298 deletion completed in 52.231591521s

• [SLOW TEST:64.559 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:13:30.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Jan 9 13:13:31.335: INFO: created pod pod-service-account-defaultsa
Jan 9 13:13:31.335: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 9 13:13:31.354: INFO: created pod pod-service-account-mountsa
Jan 9 13:13:31.355: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 9 13:13:31.409: INFO: created pod pod-service-account-nomountsa
Jan 9 13:13:31.409: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 9 13:13:31.550: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 9 13:13:31.550: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 9 13:13:31.596: INFO: created pod pod-service-account-mountsa-mountspec
Jan 9 13:13:31.596: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 9 13:13:31.615: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 9 13:13:31.615: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 9 13:13:31.878: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 9 13:13:31.878: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 9 13:13:31.971: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 9 13:13:31.972: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 9 13:13:32.093: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 9 13:13:32.093: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:13:32.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5395" for this suite.
Jan 9 13:14:04.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:14:04.432: INFO: namespace svcaccounts-5395 deletion completed in 32.296547976s

• [SLOW TEST:33.853 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:14:04.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Jan 9 13:14:04.599: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan 9 13:14:04.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4510'
Jan 9 13:14:05.191: INFO: stderr: ""
Jan 9 13:14:05.192: INFO: stdout: "service/redis-slave created\n"
Jan 9 13:14:05.192: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan 9 13:14:05.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4510'
Jan 9 13:14:05.639: INFO: stderr: ""
Jan 9 13:14:05.639: INFO: stdout: "service/redis-master created\n"
Jan 9 13:14:05.640: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 9 13:14:05.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4510'
Jan 9 13:14:06.386: INFO: stderr: ""
Jan 9 13:14:06.387: INFO: stdout: "service/frontend created\n"
Jan 9 13:14:06.387: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan 9 13:14:06.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4510'
Jan 9 13:14:06.956: INFO: stderr: ""
Jan 9 13:14:06.956: INFO: stdout: "deployment.apps/frontend created\n"
Jan 9 13:14:06.957: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 9 13:14:06.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4510'
Jan 9 13:14:10.448: INFO: stderr: ""
Jan 9 13:14:10.448: INFO: stdout: "deployment.apps/redis-master created\n"
Jan 9 13:14:10.450: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan 9 13:14:10.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4510'
Jan 9 13:14:11.195: INFO: stderr: ""
Jan 9 13:14:11.195: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Jan 9 13:14:11.195: INFO: Waiting for all frontend pods to be Running.
Jan 9 13:14:41.248: INFO: Waiting for frontend to serve content.
Jan 9 13:14:41.315: INFO: Trying to add a new entry to the guestbook.
Jan 9 13:14:41.370: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 9 13:14:41.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4510'
Jan 9 13:14:41.771: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 9 13:14:41.771: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 9 13:14:41.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4510'
Jan 9 13:14:42.195: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 9 13:14:42.195: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 9 13:14:42.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4510'
Jan 9 13:14:42.487: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 9 13:14:42.487: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 9 13:14:42.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4510'
Jan 9 13:14:42.720: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 9 13:14:42.720: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 9 13:14:42.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4510'
Jan 9 13:14:42.889: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 9 13:14:42.889: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 9 13:14:42.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4510'
Jan 9 13:14:43.169: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 9 13:14:43.169: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:14:43.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4510" for this suite.
Jan 9 13:15:27.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:15:27.477: INFO: namespace kubectl-4510 deletion completed in 44.169138304s

• [SLOW TEST:83.044 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Guestbook application
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:15:27.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 9 13:15:27.729: INFO: Waiting up to 5m0s for pod "downwardapi-volume-921d8cf3-d723-4656-96dd-1f4f89066b7b" in namespace "downward-api-2461" to be "success or failure"
Jan 9 13:15:27.740: INFO: Pod "downwardapi-volume-921d8cf3-d723-4656-96dd-1f4f89066b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.553265ms
Jan 9 13:15:29.749: INFO: Pod "downwardapi-volume-921d8cf3-d723-4656-96dd-1f4f89066b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01937597s
Jan 9 13:15:31.775: INFO: Pod "downwardapi-volume-921d8cf3-d723-4656-96dd-1f4f89066b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045570215s
Jan 9 13:15:33.788: INFO: Pod "downwardapi-volume-921d8cf3-d723-4656-96dd-1f4f89066b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059000823s
Jan 9 13:15:35.856: INFO: Pod "downwardapi-volume-921d8cf3-d723-4656-96dd-1f4f89066b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.126761562s
Jan 9 13:15:37.871: INFO: Pod "downwardapi-volume-921d8cf3-d723-4656-96dd-1f4f89066b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.141268887s
Jan 9 13:15:39.878: INFO: Pod "downwardapi-volume-921d8cf3-d723-4656-96dd-1f4f89066b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.148346761s
Jan 9 13:15:41.898: INFO: Pod "downwardapi-volume-921d8cf3-d723-4656-96dd-1f4f89066b7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.168293319s
STEP: Saw pod success
Jan 9 13:15:41.898: INFO: Pod "downwardapi-volume-921d8cf3-d723-4656-96dd-1f4f89066b7b" satisfied condition "success or failure"
Jan 9 13:15:41.912: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-921d8cf3-d723-4656-96dd-1f4f89066b7b container client-container:
STEP: delete the pod
Jan 9 13:15:42.119: INFO: Waiting for pod downwardapi-volume-921d8cf3-d723-4656-96dd-1f4f89066b7b to disappear
Jan 9 13:15:42.126: INFO: Pod downwardapi-volume-921d8cf3-d723-4656-96dd-1f4f89066b7b no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:15:42.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2461" for this suite.
Jan 9 13:15:48.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:15:48.281: INFO: namespace downward-api-2461 deletion completed in 6.147948482s

• [SLOW TEST:20.804 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:15:48.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 9 13:16:00.698: INFO: Waiting up to 5m0s for pod "client-envvars-17727437-ffe2-4629-b888-4f8850729822" in namespace "pods-3705" to be "success or failure"
Jan 9 13:16:00.757: INFO: Pod "client-envvars-17727437-ffe2-4629-b888-4f8850729822": Phase="Pending", Reason="", readiness=false. Elapsed: 58.663958ms
Jan 9 13:16:02.762: INFO: Pod "client-envvars-17727437-ffe2-4629-b888-4f8850729822": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063427232s
Jan 9 13:16:04.771: INFO: Pod "client-envvars-17727437-ffe2-4629-b888-4f8850729822": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072565817s
Jan 9 13:16:06.778: INFO: Pod "client-envvars-17727437-ffe2-4629-b888-4f8850729822": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079655861s
Jan 9 13:16:08.787: INFO: Pod "client-envvars-17727437-ffe2-4629-b888-4f8850729822": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088844749s
Jan 9 13:16:10.798: INFO: Pod "client-envvars-17727437-ffe2-4629-b888-4f8850729822": Phase="Pending", Reason="", readiness=false. Elapsed: 10.099794958s
Jan 9 13:16:12.805: INFO: Pod "client-envvars-17727437-ffe2-4629-b888-4f8850729822": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.106952048s
STEP: Saw pod success
Jan 9 13:16:12.805: INFO: Pod "client-envvars-17727437-ffe2-4629-b888-4f8850729822" satisfied condition "success or failure"
Jan 9 13:16:12.807: INFO: Trying to get logs from node iruya-node pod client-envvars-17727437-ffe2-4629-b888-4f8850729822 container env3cont:
STEP: delete the pod
Jan 9 13:16:12.900: INFO: Waiting for pod client-envvars-17727437-ffe2-4629-b888-4f8850729822 to disappear
Jan 9 13:16:12.911: INFO: Pod client-envvars-17727437-ffe2-4629-b888-4f8850729822 no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:16:12.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3705" for this suite.
Jan 9 13:16:59.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:16:59.877: INFO: namespace pods-3705 deletion completed in 46.960592875s

• [SLOW TEST:71.596 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:16:59.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 9 13:17:00.013: INFO: Creating ReplicaSet my-hostname-basic-2b83756c-4495-4cf0-9fc1-f9adf346603d
Jan 9 13:17:00.031: INFO: Pod name my-hostname-basic-2b83756c-4495-4cf0-9fc1-f9adf346603d: Found 0 pods out of 1
Jan 9 13:17:05.996: INFO: Pod name my-hostname-basic-2b83756c-4495-4cf0-9fc1-f9adf346603d: Found 1 pods out of 1
Jan 9 13:17:05.996: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-2b83756c-4495-4cf0-9fc1-f9adf346603d" is running
Jan 9 13:17:10.009: INFO: Pod "my-hostname-basic-2b83756c-4495-4cf0-9fc1-f9adf346603d-hr9nc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-09 13:17:00 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-09 13:17:00 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-2b83756c-4495-4cf0-9fc1-f9adf346603d]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-09 13:17:00 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-2b83756c-4495-4cf0-9fc1-f9adf346603d]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-09 13:17:00 +0000 UTC Reason: Message:}])
Jan 9 13:17:10.009: INFO: Trying to dial the pod
Jan 9 13:17:15.038: INFO: Controller my-hostname-basic-2b83756c-4495-4cf0-9fc1-f9adf346603d: Got expected result from replica 1 [my-hostname-basic-2b83756c-4495-4cf0-9fc1-f9adf346603d-hr9nc]: "my-hostname-basic-2b83756c-4495-4cf0-9fc1-f9adf346603d-hr9nc", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:17:15.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-95" for this suite.
Jan 9 13:17:21.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:17:21.242: INFO: namespace replicaset-95 deletion completed in 6.199082603s

• [SLOW TEST:21.364 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:17:21.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-2a3f345d-4d5e-4d33-b614-dd4c4109fe04
STEP: Creating a pod to test consume secrets
Jan 9 13:17:21.364: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-94f8ce30-d7fb-4b21-99c4-1b74d9520367" in namespace "projected-2268" to be "success or failure"
Jan 9 13:17:21.380: INFO: Pod "pod-projected-secrets-94f8ce30-d7fb-4b21-99c4-1b74d9520367": Phase="Pending", Reason="", readiness=false. Elapsed: 15.542026ms
Jan 9 13:17:23.386: INFO: Pod "pod-projected-secrets-94f8ce30-d7fb-4b21-99c4-1b74d9520367": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022144119s
Jan 9 13:17:25.392: INFO: Pod "pod-projected-secrets-94f8ce30-d7fb-4b21-99c4-1b74d9520367": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028212418s
Jan 9 13:17:27.400: INFO: Pod "pod-projected-secrets-94f8ce30-d7fb-4b21-99c4-1b74d9520367": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035669498s
Jan 9 13:17:29.406: INFO: Pod "pod-projected-secrets-94f8ce30-d7fb-4b21-99c4-1b74d9520367": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042324647s
Jan 9 13:17:31.419: INFO: Pod "pod-projected-secrets-94f8ce30-d7fb-4b21-99c4-1b74d9520367": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054646749s
STEP: Saw pod success
Jan 9 13:17:31.419: INFO: Pod "pod-projected-secrets-94f8ce30-d7fb-4b21-99c4-1b74d9520367" satisfied condition "success or failure"
Jan 9 13:17:31.426: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-94f8ce30-d7fb-4b21-99c4-1b74d9520367 container projected-secret-volume-test:
STEP: delete the pod
Jan 9 13:17:31.484: INFO: Waiting for pod pod-projected-secrets-94f8ce30-d7fb-4b21-99c4-1b74d9520367 to disappear
Jan 9 13:17:31.605: INFO: Pod pod-projected-secrets-94f8ce30-d7fb-4b21-99c4-1b74d9520367 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:17:31.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2268" for this suite.
Jan 9 13:17:37.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:17:37.765: INFO: namespace projected-2268 deletion completed in 6.150422347s

• [SLOW TEST:16.523 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:17:37.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 9 13:17:37.874: INFO: Waiting up to 5m0s for pod "downward-api-4ac7997e-eecf-4b0d-8a11-9d69ebe42c74" in namespace "downward-api-5869" to be "success or failure"
Jan 9 13:17:37.926: INFO: Pod "downward-api-4ac7997e-eecf-4b0d-8a11-9d69ebe42c74": Phase="Pending", Reason="", readiness=false. Elapsed: 51.593225ms
Jan 9 13:17:39.940: INFO: Pod "downward-api-4ac7997e-eecf-4b0d-8a11-9d69ebe42c74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065614274s
Jan 9 13:17:41.959: INFO: Pod "downward-api-4ac7997e-eecf-4b0d-8a11-9d69ebe42c74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084799947s
Jan 9 13:17:43.972: INFO: Pod "downward-api-4ac7997e-eecf-4b0d-8a11-9d69ebe42c74": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097285783s
Jan 9 13:17:45.979: INFO: Pod "downward-api-4ac7997e-eecf-4b0d-8a11-9d69ebe42c74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.105019497s
STEP: Saw pod success
Jan 9 13:17:45.980: INFO: Pod "downward-api-4ac7997e-eecf-4b0d-8a11-9d69ebe42c74" satisfied condition "success or failure"
Jan 9 13:17:45.984: INFO: Trying to get logs from node iruya-node pod downward-api-4ac7997e-eecf-4b0d-8a11-9d69ebe42c74 container dapi-container:
STEP: delete the pod
Jan 9 13:17:46.023: INFO: Waiting for pod downward-api-4ac7997e-eecf-4b0d-8a11-9d69ebe42c74 to disappear
Jan 9 13:17:46.063: INFO: Pod downward-api-4ac7997e-eecf-4b0d-8a11-9d69ebe42c74 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:17:46.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5869" for this suite.
Jan 9 13:17:52.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:17:52.267: INFO: namespace downward-api-5869 deletion completed in 6.19660001s • [SLOW TEST:14.502 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:17:52.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-746 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-746 to expose endpoints map[] Jan 9 13:17:52.487: INFO: Get endpoints failed (13.200317ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jan 9 13:17:53.497: INFO: successfully validated that service multi-endpoint-test in namespace services-746 exposes endpoints map[] (1.023292195s elapsed) STEP: Creating pod pod1 in namespace services-746 STEP: waiting up to 3m0s for service 
multi-endpoint-test in namespace services-746 to expose endpoints map[pod1:[100]] Jan 9 13:17:57.739: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.222879239s elapsed, will retry) Jan 9 13:18:00.794: INFO: successfully validated that service multi-endpoint-test in namespace services-746 exposes endpoints map[pod1:[100]] (7.277881836s elapsed) STEP: Creating pod pod2 in namespace services-746 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-746 to expose endpoints map[pod1:[100] pod2:[101]] Jan 9 13:18:06.049: INFO: Unexpected endpoints: found map[f3deb85f-fa5a-4e6f-8e82-954d989ba4b8:[100]], expected map[pod1:[100] pod2:[101]] (5.24052222s elapsed, will retry) Jan 9 13:18:08.767: INFO: successfully validated that service multi-endpoint-test in namespace services-746 exposes endpoints map[pod1:[100] pod2:[101]] (7.959171606s elapsed) STEP: Deleting pod pod1 in namespace services-746 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-746 to expose endpoints map[pod2:[101]] Jan 9 13:18:08.817: INFO: successfully validated that service multi-endpoint-test in namespace services-746 exposes endpoints map[pod2:[101]] (45.766085ms elapsed) STEP: Deleting pod pod2 in namespace services-746 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-746 to expose endpoints map[] Jan 9 13:18:08.846: INFO: successfully validated that service multi-endpoint-test in namespace services-746 exposes endpoints map[] (10.950835ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:18:08.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-746" for this suite. 
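The endpoint-map validation above can be approximated from the command line by reading the service's Endpoints object back. This is a sketch only: `kubectl` is stubbed here so it runs without a cluster, and the stub's output mirrors the ports the test expected; against a live cluster you would use the real CLI call shown in the comment.

```shell
# Stub standing in for the real kubectl (assumption: no cluster available).
kubectl() {
  # A real invocation would be:
  #   kubectl get endpoints multi-endpoint-test -n services-746 \
  #     -o jsonpath='{.subsets[*].ports[*].port}'
  echo "100 101"
}

expected="100 101"
found=$(kubectl get endpoints multi-endpoint-test -n services-746 \
  -o jsonpath='{.subsets[*].ports[*].port}')
if [ "$found" = "$expected" ]; then
  echo "endpoints validated"
else
  echo "mismatch: found '$found', expected '$expected'"
fi
```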
Jan 9 13:18:30.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:18:31.104: INFO: namespace services-746 deletion completed in 22.156237816s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:38.836 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:18:31.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Jan 9 13:18:31.185: INFO: namespace kubectl-7505 Jan 9 13:18:31.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7505' Jan 9 13:18:33.655: INFO: stderr: "" Jan 9 13:18:33.655: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Jan 9 13:18:34.673: INFO: Selector matched 1 pods for map[app:redis] Jan 9 13:18:34.673: INFO: Found 0 / 1 Jan 9 13:18:35.671: INFO: Selector matched 1 pods for map[app:redis] Jan 9 13:18:35.671: INFO: Found 0 / 1 Jan 9 13:18:36.666: INFO: Selector matched 1 pods for map[app:redis] Jan 9 13:18:36.666: INFO: Found 0 / 1 Jan 9 13:18:37.667: INFO: Selector matched 1 pods for map[app:redis] Jan 9 13:18:37.667: INFO: Found 0 / 1 Jan 9 13:18:39.111: INFO: Selector matched 1 pods for map[app:redis] Jan 9 13:18:39.111: INFO: Found 0 / 1 Jan 9 13:18:39.666: INFO: Selector matched 1 pods for map[app:redis] Jan 9 13:18:39.666: INFO: Found 0 / 1 Jan 9 13:18:40.663: INFO: Selector matched 1 pods for map[app:redis] Jan 9 13:18:40.663: INFO: Found 0 / 1 Jan 9 13:18:41.664: INFO: Selector matched 1 pods for map[app:redis] Jan 9 13:18:41.664: INFO: Found 0 / 1 Jan 9 13:18:42.676: INFO: Selector matched 1 pods for map[app:redis] Jan 9 13:18:42.676: INFO: Found 0 / 1 Jan 9 13:18:43.666: INFO: Selector matched 1 pods for map[app:redis] Jan 9 13:18:43.666: INFO: Found 1 / 1 Jan 9 13:18:43.666: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 9 13:18:43.671: INFO: Selector matched 1 pods for map[app:redis] Jan 9 13:18:43.671: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 9 13:18:43.671: INFO: wait on redis-master startup in kubectl-7505 Jan 9 13:18:43.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9pnv9 redis-master --namespace=kubectl-7505' Jan 9 13:18:43.872: INFO: stderr: "" Jan 9 13:18:43.872: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 09 Jan 13:18:41.943 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 09 Jan 13:18:41.943 # Server started, Redis version 3.2.12\n1:M 09 Jan 13:18:41.944 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 09 Jan 13:18:41.944 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Jan 9 13:18:43.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7505' Jan 9 13:18:46.046: INFO: stderr: "" Jan 9 13:18:46.046: INFO: stdout: "service/rm2 exposed\n" Jan 9 13:18:46.056: INFO: Service rm2 in namespace kubectl-7505 found. STEP: exposing service Jan 9 13:18:48.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7505' Jan 9 13:18:48.331: INFO: stderr: "" Jan 9 13:18:48.332: INFO: stdout: "service/rm3 exposed\n" Jan 9 13:18:48.340: INFO: Service rm3 in namespace kubectl-7505 found. 
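Both `expose` steps above map a new service port onto the same target port 6379, so a quick sanity check is to read `spec.ports` back for rm2 and rm3. Sketch with `kubectl` stubbed (no cluster assumed; the stub returns the port pairs implied by the commands in the log).

```shell
# Stubbed kubectl; the real call would be:
#   kubectl get svc rm2 -n kubectl-7505 \
#     -o jsonpath='{.spec.ports[0].port} {.spec.ports[0].targetPort}'
kubectl() {
  case "$3" in
    rm2) echo "1234 6379" ;;
    rm3) echo "2345 6379" ;;
  esac
}

for svc in rm2 rm3; do
  echo "$svc -> $(kubectl get svc "$svc")"
done
```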
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:18:50.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7505" for this suite. Jan 9 13:19:16.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:19:16.616: INFO: namespace kubectl-7505 deletion completed in 26.148680245s • [SLOW TEST:45.512 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:19:16.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Jan 9 13:19:16.770: INFO: Waiting up to 5m0s for pod "var-expansion-427346ec-1165-4de4-884d-02c79c49a315" in namespace "var-expansion-2610" to be "success or failure" Jan 9 13:19:16.775: INFO: Pod 
"var-expansion-427346ec-1165-4de4-884d-02c79c49a315": Phase="Pending", Reason="", readiness=false. Elapsed: 5.269013ms Jan 9 13:19:18.783: INFO: Pod "var-expansion-427346ec-1165-4de4-884d-02c79c49a315": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012484166s Jan 9 13:19:20.793: INFO: Pod "var-expansion-427346ec-1165-4de4-884d-02c79c49a315": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023030102s Jan 9 13:19:22.801: INFO: Pod "var-expansion-427346ec-1165-4de4-884d-02c79c49a315": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030503451s Jan 9 13:19:24.818: INFO: Pod "var-expansion-427346ec-1165-4de4-884d-02c79c49a315": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04830892s Jan 9 13:19:26.849: INFO: Pod "var-expansion-427346ec-1165-4de4-884d-02c79c49a315": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078612906s STEP: Saw pod success Jan 9 13:19:26.849: INFO: Pod "var-expansion-427346ec-1165-4de4-884d-02c79c49a315" satisfied condition "success or failure" Jan 9 13:19:26.865: INFO: Trying to get logs from node iruya-node pod var-expansion-427346ec-1165-4de4-884d-02c79c49a315 container dapi-container: STEP: delete the pod Jan 9 13:19:26.988: INFO: Waiting for pod var-expansion-427346ec-1165-4de4-884d-02c79c49a315 to disappear Jan 9 13:19:27.007: INFO: Pod var-expansion-427346ec-1165-4de4-884d-02c79c49a315 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:19:27.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2610" for this suite. 
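The test above exercises Kubernetes' `$(VAR_NAME)` substitution in container args, which is performed by Kubernetes from the container's declared env vars, not by a shell. The sketch below imitates that substitution with `sed` purely for illustration; `POD_NAME` stands in for an env var the pod spec would declare.

```shell
# Illustrative only: Kubernetes, not the shell, performs this expansion.
POD_NAME=var-expansion-test                 # assumed env var from the pod spec
args='test-value=$(POD_NAME)'               # literal $(POD_NAME), as in a spec
expanded=$(printf '%s\n' "$args" | sed "s/\$(POD_NAME)/$POD_NAME/")
echo "$expanded"   # test-value=var-expansion-test
```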
Jan 9 13:19:33.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:19:33.264: INFO: namespace var-expansion-2610 deletion completed in 6.229160973s • [SLOW TEST:16.648 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:19:33.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Jan 9 13:19:33.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4197' Jan 9 13:19:33.889: INFO: stderr: "" Jan 9 13:19:33.889: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jan 9 13:19:33.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4197' Jan 9 13:19:34.087: INFO: stderr: "" Jan 9 13:19:34.087: INFO: stdout: "update-demo-nautilus-6ldxz update-demo-nautilus-bf4zq " Jan 9 13:19:34.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6ldxz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4197' Jan 9 13:19:34.200: INFO: stderr: "" Jan 9 13:19:34.200: INFO: stdout: "" Jan 9 13:19:34.200: INFO: update-demo-nautilus-6ldxz is created but not running Jan 9 13:19:39.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4197' Jan 9 13:19:39.524: INFO: stderr: "" Jan 9 13:19:39.524: INFO: stdout: "update-demo-nautilus-6ldxz update-demo-nautilus-bf4zq " Jan 9 13:19:39.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6ldxz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4197' Jan 9 13:19:39.716: INFO: stderr: "" Jan 9 13:19:39.716: INFO: stdout: "" Jan 9 13:19:39.716: INFO: update-demo-nautilus-6ldxz is created but not running Jan 9 13:19:44.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4197' Jan 9 13:19:44.851: INFO: stderr: "" Jan 9 13:19:44.851: INFO: stdout: "update-demo-nautilus-6ldxz update-demo-nautilus-bf4zq " Jan 9 13:19:44.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6ldxz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4197' Jan 9 13:19:44.969: INFO: stderr: "" Jan 9 13:19:44.969: INFO: stdout: "" Jan 9 13:19:44.969: INFO: update-demo-nautilus-6ldxz is created but not running Jan 9 13:19:49.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4197' Jan 9 13:19:50.109: INFO: stderr: "" Jan 9 13:19:50.109: INFO: stdout: "update-demo-nautilus-6ldxz update-demo-nautilus-bf4zq " Jan 9 13:19:50.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6ldxz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4197' Jan 9 13:19:50.205: INFO: stderr: "" Jan 9 13:19:50.205: INFO: stdout: "true" Jan 9 13:19:50.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6ldxz -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4197' Jan 9 13:19:50.277: INFO: stderr: "" Jan 9 13:19:50.277: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 9 13:19:50.277: INFO: validating pod update-demo-nautilus-6ldxz Jan 9 13:19:50.300: INFO: got data: { "image": "nautilus.jpg" } Jan 9 13:19:50.300: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 9 13:19:50.300: INFO: update-demo-nautilus-6ldxz is verified up and running Jan 9 13:19:50.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bf4zq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4197' Jan 9 13:19:50.374: INFO: stderr: "" Jan 9 13:19:50.374: INFO: stdout: "true" Jan 9 13:19:50.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bf4zq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4197' Jan 9 13:19:50.452: INFO: stderr: "" Jan 9 13:19:50.452: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 9 13:19:50.452: INFO: validating pod update-demo-nautilus-bf4zq Jan 9 13:19:50.460: INFO: got data: { "image": "nautilus.jpg" } Jan 9 13:19:50.460: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
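The running-state probe the test repeats above can be wrapped as a small helper; the template string is taken verbatim from the log, while the helper name is illustrative and `kubectl` is stubbed so the sketch runs without a cluster (the real command prints "true" once the named container is running).

```shell
kubectl() { echo "true"; }   # stub; real kubectl queries the API server
is_running() {
  kubectl get pods "$1" -o template --namespace=kubectl-4197 \
    --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
}

[ "$(is_running update-demo-nautilus-6ldxz)" = "true" ] && echo "running"
```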
Jan 9 13:19:50.460: INFO: update-demo-nautilus-bf4zq is verified up and running STEP: scaling down the replication controller Jan 9 13:19:50.464: INFO: scanned /root for discovery docs: Jan 9 13:19:50.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4197' Jan 9 13:19:51.614: INFO: stderr: "" Jan 9 13:19:51.614: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 9 13:19:51.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4197' Jan 9 13:19:51.758: INFO: stderr: "" Jan 9 13:19:51.758: INFO: stdout: "update-demo-nautilus-6ldxz update-demo-nautilus-bf4zq " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 9 13:19:56.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4197' Jan 9 13:19:56.955: INFO: stderr: "" Jan 9 13:19:56.955: INFO: stdout: "update-demo-nautilus-bf4zq " Jan 9 13:19:56.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bf4zq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4197' Jan 9 13:19:57.078: INFO: stderr: "" Jan 9 13:19:57.078: INFO: stdout: "true" Jan 9 13:19:57.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bf4zq -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4197' Jan 9 13:19:57.238: INFO: stderr: "" Jan 9 13:19:57.238: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 9 13:19:57.238: INFO: validating pod update-demo-nautilus-bf4zq Jan 9 13:19:57.247: INFO: got data: { "image": "nautilus.jpg" } Jan 9 13:19:57.247: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 9 13:19:57.247: INFO: update-demo-nautilus-bf4zq is verified up and running STEP: scaling up the replication controller Jan 9 13:19:57.250: INFO: scanned /root for discovery docs: Jan 9 13:19:57.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4197' Jan 9 13:19:58.397: INFO: stderr: "" Jan 9 13:19:58.397: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 9 13:19:58.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4197' Jan 9 13:19:58.529: INFO: stderr: "" Jan 9 13:19:58.529: INFO: stdout: "update-demo-nautilus-8zn4w update-demo-nautilus-bf4zq " Jan 9 13:19:58.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8zn4w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4197' Jan 9 13:19:58.631: INFO: stderr: "" Jan 9 13:19:58.631: INFO: stdout: "" Jan 9 13:19:58.631: INFO: update-demo-nautilus-8zn4w is created but not running Jan 9 13:20:03.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4197' Jan 9 13:20:03.760: INFO: stderr: "" Jan 9 13:20:03.760: INFO: stdout: "update-demo-nautilus-8zn4w update-demo-nautilus-bf4zq " Jan 9 13:20:03.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8zn4w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4197' Jan 9 13:20:03.889: INFO: stderr: "" Jan 9 13:20:03.889: INFO: stdout: "" Jan 9 13:20:03.889: INFO: update-demo-nautilus-8zn4w is created but not running Jan 9 13:20:08.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4197' Jan 9 13:20:09.012: INFO: stderr: "" Jan 9 13:20:09.013: INFO: stdout: "update-demo-nautilus-8zn4w update-demo-nautilus-bf4zq " Jan 9 13:20:09.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8zn4w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4197' Jan 9 13:20:09.154: INFO: stderr: "" Jan 9 13:20:09.154: INFO: stdout: "true" Jan 9 13:20:09.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8zn4w -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4197' Jan 9 13:20:09.316: INFO: stderr: "" Jan 9 13:20:09.316: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 9 13:20:09.316: INFO: validating pod update-demo-nautilus-8zn4w Jan 9 13:20:09.329: INFO: got data: { "image": "nautilus.jpg" } Jan 9 13:20:09.329: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 9 13:20:09.329: INFO: update-demo-nautilus-8zn4w is verified up and running Jan 9 13:20:09.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bf4zq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4197' Jan 9 13:20:09.445: INFO: stderr: "" Jan 9 13:20:09.445: INFO: stdout: "true" Jan 9 13:20:09.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bf4zq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4197' Jan 9 13:20:09.532: INFO: stderr: "" Jan 9 13:20:09.532: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 9 13:20:09.533: INFO: validating pod update-demo-nautilus-bf4zq Jan 9 13:20:09.539: INFO: got data: { "image": "nautilus.jpg" } Jan 9 13:20:09.539: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 9 13:20:09.539: INFO: update-demo-nautilus-bf4zq is verified up and running STEP: using delete to clean up resources Jan 9 13:20:09.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4197' Jan 9 13:20:09.638: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 9 13:20:09.638: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 9 13:20:09.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4197' Jan 9 13:20:09.854: INFO: stderr: "No resources found.\n" Jan 9 13:20:09.854: INFO: stdout: "" Jan 9 13:20:09.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4197 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 9 13:20:10.089: INFO: stderr: "" Jan 9 13:20:10.089: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:20:10.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4197" for this suite. 
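The teardown above pairs a force delete with a go-template that filters out pods already carrying a `deletionTimestamp`, so only genuinely live leftovers are reported. Sketch with `kubectl` stubbed (the stub returns nothing, matching the empty stdout the test saw after cleanup; the real calls are shown in the comment).

```shell
kubectl() {
  # Real calls, as in the log:
  #   kubectl delete --grace-period=0 --force -f - -n kubectl-4197
  #   kubectl get pods -l name=update-demo -n kubectl-4197 \
  #     -o go-template='{{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
  echo ""
}

leftovers=$(kubectl get pods -l name=update-demo)
if [ -z "$leftovers" ]; then
  echo "cleanup complete"
fi
```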
Jan 9 13:20:32.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:20:32.254: INFO: namespace kubectl-4197 deletion completed in 22.14890307s • [SLOW TEST:58.990 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:20:32.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 9 13:20:32.347: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jan 9 13:20:32.359: INFO: Number of nodes with available pods: 0 Jan 9 13:20:32.359: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jan 9 13:20:32.485: INFO: Number of nodes with available pods: 0 Jan 9 13:20:32.485: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:20:33.493: INFO: Number of nodes with available pods: 0 Jan 9 13:20:33.493: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:20:34.506: INFO: Number of nodes with available pods: 0 Jan 9 13:20:34.506: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:20:35.494: INFO: Number of nodes with available pods: 0 Jan 9 13:20:35.494: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:20:36.494: INFO: Number of nodes with available pods: 0 Jan 9 13:20:36.494: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:20:37.497: INFO: Number of nodes with available pods: 0 Jan 9 13:20:37.497: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:20:38.502: INFO: Number of nodes with available pods: 0 Jan 9 13:20:38.502: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:20:39.492: INFO: Number of nodes with available pods: 0 Jan 9 13:20:39.493: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:20:40.494: INFO: Number of nodes with available pods: 0 Jan 9 13:20:40.495: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:20:41.491: INFO: Number of nodes with available pods: 1 Jan 9 13:20:41.491: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jan 9 13:20:41.528: INFO: Number of nodes with available pods: 1 Jan 9 13:20:41.528: INFO: Number of running nodes: 0, number of available pods: 1 Jan 9 13:20:42.539: INFO: Number of nodes with available pods: 0 Jan 9 13:20:42.539: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jan 9 13:20:42.600: INFO: Number of nodes with available pods: 0 Jan 9 13:20:42.600: INFO: Node 
iruya-node is running more than one daemon pod Jan 9 13:20:43.615: INFO: Number of nodes with available pods: 0 Jan 9 13:20:43.615: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:20:44.614: INFO: Number of nodes with available pods: 0 Jan 9 13:20:44.614: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:20:45.614: INFO: Number of nodes with available pods: 0 Jan 9 13:20:45.614: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:20:46.616: INFO: Number of nodes with available pods: 0 Jan 9 13:20:46.616: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:20:47.608: INFO: Number of nodes with available pods: 0 Jan 9 13:20:47.608: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:20:48.608: INFO: Number of nodes with available pods: 0 Jan 9 13:20:48.608: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:20:49.614: INFO: Number of nodes with available pods: 0 Jan 9 13:20:49.614: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:20:50.607: INFO: Number of nodes with available pods: 0 Jan 9 13:20:50.607: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:20:52.442: INFO: Number of nodes with available pods: 0 Jan 9 13:20:52.442: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:20:52.632: INFO: Number of nodes with available pods: 0 Jan 9 13:20:52.632: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:20:53.610: INFO: Number of nodes with available pods: 0 Jan 9 13:20:53.610: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:20:54.670: INFO: Number of nodes with available pods: 0 Jan 9 13:20:54.670: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:20:55.610: INFO: Number of nodes with available pods: 0 Jan 9 13:20:55.610: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:20:56.606: INFO: Number of nodes with available pods: 0 Jan 9 13:20:56.606: INFO: 
Node iruya-node is running more than one daemon pod Jan 9 13:20:57.612: INFO: Number of nodes with available pods: 1 Jan 9 13:20:57.613: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1606, will wait for the garbage collector to delete the pods Jan 9 13:20:57.711: INFO: Deleting DaemonSet.extensions daemon-set took: 32.523704ms Jan 9 13:20:58.111: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.878899ms Jan 9 13:21:06.623: INFO: Number of nodes with available pods: 0 Jan 9 13:21:06.623: INFO: Number of running nodes: 0, number of available pods: 0 Jan 9 13:21:06.630: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1606/daemonsets","resourceVersion":"19900500"},"items":null} Jan 9 13:21:06.633: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1606/pods","resourceVersion":"19900500"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:21:06.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1606" for this suite. 
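The DaemonSet run above labels a node, waits for the daemon pod to land on it, relabels it to green, and switches the update strategy to RollingUpdate. As a rough sketch of the kind of object this exercises — expressed as a plain Python dict, since the framework builds its API objects programmatically — the shape is roughly the following (the names, image, and the `color: green` label are illustrative assumptions, not the generated values from this run):

```python
# Sketch of a DaemonSet like the one driven by the test above.
# All names/labels/images here are assumptions for illustration.
daemon_set = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "daemon-set", "namespace": "daemonsets-1606"},
    "spec": {
        "selector": {"matchLabels": {"app": "daemon-set"}},
        # The test switches the strategy to RollingUpdate mid-run.
        "updateStrategy": {"type": "RollingUpdate"},
        "template": {
            "metadata": {"labels": {"app": "daemon-set"}},
            "spec": {
                # Daemon pods schedule only on nodes carrying this label,
                # which is why relabeling the node unschedules them.
                "nodeSelector": {"color": "green"},
                "containers": [
                    {"name": "app", "image": "docker.io/library/nginx:1.14-alpine"}
                ],
            },
        },
    },
}
```

The polling loop in the log is simply the test waiting for `numberAvailable` in the DaemonSet status to match the count of nodes selected by `nodeSelector`.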
Jan 9 13:21:12.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:21:12.869: INFO: namespace daemonsets-1606 deletion completed in 6.171842633s • [SLOW TEST:40.614 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:21:12.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Jan 9 13:21:12.986: INFO: Waiting up to 5m0s for pod "client-containers-c5dc60ce-e035-4ca0-a41e-acae59a331cb" in namespace "containers-8529" to be "success or failure" Jan 9 13:21:12.999: INFO: Pod "client-containers-c5dc60ce-e035-4ca0-a41e-acae59a331cb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.577598ms Jan 9 13:21:15.009: INFO: Pod "client-containers-c5dc60ce-e035-4ca0-a41e-acae59a331cb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.022621868s Jan 9 13:21:17.040: INFO: Pod "client-containers-c5dc60ce-e035-4ca0-a41e-acae59a331cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053554117s Jan 9 13:21:19.080: INFO: Pod "client-containers-c5dc60ce-e035-4ca0-a41e-acae59a331cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094103804s Jan 9 13:21:21.090: INFO: Pod "client-containers-c5dc60ce-e035-4ca0-a41e-acae59a331cb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103875549s Jan 9 13:21:23.108: INFO: Pod "client-containers-c5dc60ce-e035-4ca0-a41e-acae59a331cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.12233487s STEP: Saw pod success Jan 9 13:21:23.108: INFO: Pod "client-containers-c5dc60ce-e035-4ca0-a41e-acae59a331cb" satisfied condition "success or failure" Jan 9 13:21:23.124: INFO: Trying to get logs from node iruya-node pod client-containers-c5dc60ce-e035-4ca0-a41e-acae59a331cb container test-container: STEP: delete the pod Jan 9 13:21:23.240: INFO: Waiting for pod client-containers-c5dc60ce-e035-4ca0-a41e-acae59a331cb to disappear Jan 9 13:21:23.245: INFO: Pod client-containers-c5dc60ce-e035-4ca0-a41e-acae59a331cb no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:21:23.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8529" for this suite. 
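The Docker Containers test above verifies that when a pod spec leaves `command` and `args` unset, the container runs with the image's own ENTRYPOINT and CMD. A minimal sketch of such a pod spec, as a plain dict (the pod name and image are illustrative assumptions; the real test generates its own):

```python
# Sketch of a pod that deliberately omits "command" and "args",
# so the image defaults (ENTRYPOINT/CMD) apply. Name and image
# are assumptions for illustration.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "client-containers-example"},
    "spec": {
        "containers": [
            {
                "name": "test-container",
                "image": "docker.io/library/busybox:1.29",
                # No "command", no "args": kubelet uses the image defaults.
            }
        ],
        "restartPolicy": "Never",
    },
}
```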
Jan 9 13:21:29.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:21:29.515: INFO: namespace containers-8529 deletion completed in 6.263361033s • [SLOW TEST:16.646 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:21:29.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-b878e50a-66f5-45b5-88c1-f822eeec8efc STEP: Creating configMap with name cm-test-opt-upd-e429dcca-e398-4ed5-8aae-161fa97c7ab2 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-b878e50a-66f5-45b5-88c1-f822eeec8efc STEP: Updating configmap cm-test-opt-upd-e429dcca-e398-4ed5-8aae-161fa97c7ab2 STEP: Creating configMap with name cm-test-opt-create-18037a17-5fb3-4a63-91d7-95260aca28dc STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 
13:22:47.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8207" for this suite. Jan 9 13:23:09.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:23:09.953: INFO: namespace configmap-8207 deletion completed in 22.243052212s • [SLOW TEST:100.438 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:23:09.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-9db56bd1-eb8a-40d7-8e71-e6d684fb1f0a STEP: Creating a pod to test consume configMaps Jan 9 13:23:10.199: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bbb8e29f-e457-4a20-8f6b-b7ca4c587044" in namespace "projected-1396" to be "success or failure" Jan 9 13:23:10.227: INFO: Pod "pod-projected-configmaps-bbb8e29f-e457-4a20-8f6b-b7ca4c587044": Phase="Pending", Reason="", readiness=false. 
Elapsed: 27.785098ms Jan 9 13:23:12.235: INFO: Pod "pod-projected-configmaps-bbb8e29f-e457-4a20-8f6b-b7ca4c587044": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035432189s Jan 9 13:23:14.247: INFO: Pod "pod-projected-configmaps-bbb8e29f-e457-4a20-8f6b-b7ca4c587044": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047897215s Jan 9 13:23:16.288: INFO: Pod "pod-projected-configmaps-bbb8e29f-e457-4a20-8f6b-b7ca4c587044": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08904745s Jan 9 13:23:18.297: INFO: Pod "pod-projected-configmaps-bbb8e29f-e457-4a20-8f6b-b7ca4c587044": Phase="Pending", Reason="", readiness=false. Elapsed: 8.097717923s Jan 9 13:23:20.303: INFO: Pod "pod-projected-configmaps-bbb8e29f-e457-4a20-8f6b-b7ca4c587044": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.104096927s STEP: Saw pod success Jan 9 13:23:20.303: INFO: Pod "pod-projected-configmaps-bbb8e29f-e457-4a20-8f6b-b7ca4c587044" satisfied condition "success or failure" Jan 9 13:23:20.307: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-bbb8e29f-e457-4a20-8f6b-b7ca4c587044 container projected-configmap-volume-test: STEP: delete the pod Jan 9 13:23:21.205: INFO: Waiting for pod pod-projected-configmaps-bbb8e29f-e457-4a20-8f6b-b7ca4c587044 to disappear Jan 9 13:23:21.218: INFO: Pod pod-projected-configmaps-bbb8e29f-e457-4a20-8f6b-b7ca4c587044 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:23:21.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1396" for this suite. 
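The projected-ConfigMap test above mounts a ConfigMap through a `projected` volume and reads a key back from the container. A sketch of the pod shape involved, as a plain dict (mount path, key name, and image are assumptions; the test uses generated names like the `projected-configmap-test-volume-…` one in the log):

```python
# Sketch of a pod consuming a ConfigMap via a projected volume.
# ConfigMap/key names, image, and mount path are illustrative assumptions.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-projected-configmaps-example"},
    "spec": {
        "containers": [
            {
                "name": "projected-configmap-volume-test",
                "image": "docker.io/library/busybox:1.29",
                # Print the projected file so the test can check its content in logs.
                "command": ["cat", "/etc/projected-configmap-volume/data-1"],
                "volumeMounts": [
                    {
                        "name": "projected-configmap-volume",
                        "mountPath": "/etc/projected-configmap-volume",
                    }
                ],
            }
        ],
        "volumes": [
            {
                "name": "projected-configmap-volume",
                "projected": {
                    "sources": [
                        {"configMap": {"name": "projected-configmap-test-volume"}}
                    ]
                },
            }
        ],
        "restartPolicy": "Never",
    },
}
```

The "Trying to get logs" step in the log is where the test reads the container output and compares it with the ConfigMap data.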
Jan 9 13:23:27.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:23:27.379: INFO: namespace projected-1396 deletion completed in 6.152567029s • [SLOW TEST:17.425 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:23:27.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 9 13:23:27.525: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dfc2a65c-3aaa-41e9-b709-f04059e907b4" in namespace "projected-7977" to be "success or failure" Jan 9 13:23:27.554: INFO: Pod "downwardapi-volume-dfc2a65c-3aaa-41e9-b709-f04059e907b4": Phase="Pending", Reason="", readiness=false. Elapsed: 29.048197ms Jan 9 13:23:29.561: INFO: Pod "downwardapi-volume-dfc2a65c-3aaa-41e9-b709-f04059e907b4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.036089087s Jan 9 13:23:31.573: INFO: Pod "downwardapi-volume-dfc2a65c-3aaa-41e9-b709-f04059e907b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048021176s Jan 9 13:23:33.581: INFO: Pod "downwardapi-volume-dfc2a65c-3aaa-41e9-b709-f04059e907b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056480639s Jan 9 13:23:35.592: INFO: Pod "downwardapi-volume-dfc2a65c-3aaa-41e9-b709-f04059e907b4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067148269s Jan 9 13:23:37.603: INFO: Pod "downwardapi-volume-dfc2a65c-3aaa-41e9-b709-f04059e907b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.077880741s STEP: Saw pod success Jan 9 13:23:37.603: INFO: Pod "downwardapi-volume-dfc2a65c-3aaa-41e9-b709-f04059e907b4" satisfied condition "success or failure" Jan 9 13:23:37.622: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-dfc2a65c-3aaa-41e9-b709-f04059e907b4 container client-container: STEP: delete the pod Jan 9 13:23:37.735: INFO: Waiting for pod downwardapi-volume-dfc2a65c-3aaa-41e9-b709-f04059e907b4 to disappear Jan 9 13:23:37.747: INFO: Pod downwardapi-volume-dfc2a65c-3aaa-41e9-b709-f04059e907b4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:23:37.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7977" for this suite. 
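The downward-API test above exposes the container's memory request as a file in a projected volume and checks the file's content. The relevant piece is a `downwardAPI` source with a `resourceFieldRef`; a sketch as a plain dict (container name, request size, and file path are illustrative assumptions):

```python
# Sketch of a pod exposing its own memory request through a
# projected downwardAPI volume. Sizes, names, and paths are
# assumptions for illustration.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-example"},
    "spec": {
        "containers": [
            {
                "name": "client-container",
                "image": "docker.io/library/busybox:1.29",
                "command": ["cat", "/etc/podinfo/memory_request"],
                "resources": {"requests": {"memory": "32Mi"}},
                "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
            }
        ],
        "volumes": [
            {
                "name": "podinfo",
                "projected": {
                    "sources": [
                        {
                            "downwardAPI": {
                                "items": [
                                    {
                                        "path": "memory_request",
                                        # The file holds the value of requests.memory
                                        # for the named container.
                                        "resourceFieldRef": {
                                            "containerName": "client-container",
                                            "resource": "requests.memory",
                                        },
                                    }
                                ]
                            }
                        }
                    ]
                },
            }
        ],
        "restartPolicy": "Never",
    },
}
```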
Jan 9 13:23:43.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:23:44.017: INFO: namespace projected-7977 deletion completed in 6.261972299s • [SLOW TEST:16.638 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:23:44.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-5771 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 9 13:23:44.072: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 9 13:24:26.939: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-5771 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Jan 9 13:24:26.939: INFO: >>> kubeConfig: /root/.kube/config I0109 13:24:27.003836 8 log.go:172] (0xc000df48f0) (0xc001691860) Create stream I0109 13:24:27.003898 8 log.go:172] (0xc000df48f0) (0xc001691860) Stream added, broadcasting: 1 I0109 13:24:27.009030 8 log.go:172] (0xc000df48f0) Reply frame received for 1 I0109 13:24:27.009063 8 log.go:172] (0xc000df48f0) (0xc001737680) Create stream I0109 13:24:27.009071 8 log.go:172] (0xc000df48f0) (0xc001737680) Stream added, broadcasting: 3 I0109 13:24:27.010382 8 log.go:172] (0xc000df48f0) Reply frame received for 3 I0109 13:24:27.010407 8 log.go:172] (0xc000df48f0) (0xc001691900) Create stream I0109 13:24:27.010415 8 log.go:172] (0xc000df48f0) (0xc001691900) Stream added, broadcasting: 5 I0109 13:24:27.011780 8 log.go:172] (0xc000df48f0) Reply frame received for 5 I0109 13:24:27.194314 8 log.go:172] (0xc000df48f0) Data frame received for 3 I0109 13:24:27.194442 8 log.go:172] (0xc001737680) (3) Data frame handling I0109 13:24:27.194500 8 log.go:172] (0xc001737680) (3) Data frame sent I0109 13:24:27.307825 8 log.go:172] (0xc000df48f0) (0xc001737680) Stream removed, broadcasting: 3 I0109 13:24:27.308082 8 log.go:172] (0xc000df48f0) Data frame received for 1 I0109 13:24:27.308099 8 log.go:172] (0xc001691860) (1) Data frame handling I0109 13:24:27.308122 8 log.go:172] (0xc001691860) (1) Data frame sent I0109 13:24:27.308128 8 log.go:172] (0xc000df48f0) (0xc001691860) Stream removed, broadcasting: 1 I0109 13:24:27.308461 8 log.go:172] (0xc000df48f0) (0xc001691900) Stream removed, broadcasting: 5 I0109 13:24:27.308521 8 log.go:172] (0xc000df48f0) (0xc001691860) Stream removed, broadcasting: 1 I0109 13:24:27.308528 8 log.go:172] (0xc000df48f0) (0xc001737680) Stream removed, broadcasting: 3 I0109 13:24:27.308533 8 log.go:172] (0xc000df48f0) (0xc001691900) Stream removed, broadcasting: 5 Jan 9 13:24:27.309: INFO: Waiting for endpoints: map[] I0109 13:24:27.310137 8 log.go:172] (0xc000df48f0) Go away 
received Jan 9 13:24:27.354: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-5771 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 9 13:24:27.354: INFO: >>> kubeConfig: /root/.kube/config I0109 13:24:27.404127 8 log.go:172] (0xc000df5340) (0xc001691cc0) Create stream I0109 13:24:27.404257 8 log.go:172] (0xc000df5340) (0xc001691cc0) Stream added, broadcasting: 1 I0109 13:24:27.410708 8 log.go:172] (0xc000df5340) Reply frame received for 1 I0109 13:24:27.410760 8 log.go:172] (0xc000df5340) (0xc00063a960) Create stream I0109 13:24:27.410774 8 log.go:172] (0xc000df5340) (0xc00063a960) Stream added, broadcasting: 3 I0109 13:24:27.418642 8 log.go:172] (0xc000df5340) Reply frame received for 3 I0109 13:24:27.418670 8 log.go:172] (0xc000df5340) (0xc001691e00) Create stream I0109 13:24:27.418676 8 log.go:172] (0xc000df5340) (0xc001691e00) Stream added, broadcasting: 5 I0109 13:24:27.419701 8 log.go:172] (0xc000df5340) Reply frame received for 5 I0109 13:24:27.509776 8 log.go:172] (0xc000df5340) Data frame received for 3 I0109 13:24:27.509843 8 log.go:172] (0xc00063a960) (3) Data frame handling I0109 13:24:27.509856 8 log.go:172] (0xc00063a960) (3) Data frame sent I0109 13:24:27.617268 8 log.go:172] (0xc000df5340) Data frame received for 1 I0109 13:24:27.617330 8 log.go:172] (0xc000df5340) (0xc001691e00) Stream removed, broadcasting: 5 I0109 13:24:27.617382 8 log.go:172] (0xc001691cc0) (1) Data frame handling I0109 13:24:27.617392 8 log.go:172] (0xc001691cc0) (1) Data frame sent I0109 13:24:27.617403 8 log.go:172] (0xc000df5340) (0xc00063a960) Stream removed, broadcasting: 3 I0109 13:24:27.617427 8 log.go:172] (0xc000df5340) (0xc001691cc0) Stream removed, broadcasting: 1 I0109 13:24:27.617446 8 log.go:172] (0xc000df5340) Go away received I0109 13:24:27.617957 8 
log.go:172] (0xc000df5340) (0xc001691cc0) Stream removed, broadcasting: 1 I0109 13:24:27.618037 8 log.go:172] (0xc000df5340) (0xc00063a960) Stream removed, broadcasting: 3 I0109 13:24:27.618055 8 log.go:172] (0xc000df5340) (0xc001691e00) Stream removed, broadcasting: 5 Jan 9 13:24:27.618: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:24:27.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5771" for this suite. Jan 9 13:24:51.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:24:51.781: INFO: namespace pod-network-test-5771 deletion completed in 24.154801085s • [SLOW TEST:67.763 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:24:51.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 9 13:24:51.858: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 9 13:24:51.897: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 9 13:24:57.218: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 9 13:25:01.361: INFO: Creating deployment "test-rolling-update-deployment" Jan 9 13:25:01.374: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 9 13:25:01.389: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 9 13:25:03.412: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 9 13:25:03.416: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714173101, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714173101, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714173101, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714173101, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 9 
13:25:05.422: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714173101, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714173101, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714173101, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714173101, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 9 13:25:07.464: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714173101, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714173101, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714173101, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714173101, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 9 13:25:09.444: INFO: Ensuring deployment 
"test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 9 13:25:09.459: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-4893,SelfLink:/apis/apps/v1/namespaces/deployment-4893/deployments/test-rolling-update-deployment,UID:e2c39f37-df44-47cc-84e2-1e36afea0115,ResourceVersion:19901057,Generation:1,CreationTimestamp:2020-01-09 13:25:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-09 13:25:01 +0000 UTC 2020-01-09 13:25:01 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-09 13:25:08 +0000 UTC 2020-01-09 13:25:01 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 9 13:25:09.462: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-4893,SelfLink:/apis/apps/v1/namespaces/deployment-4893/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:832bc566-237d-4217-9860-28535756a595,ResourceVersion:19901048,Generation:1,CreationTimestamp:2020-01-09 13:25:01 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment e2c39f37-df44-47cc-84e2-1e36afea0115 0xc0034c8cf7 0xc0034c8cf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 9 13:25:09.462: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 9 13:25:09.462: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-4893,SelfLink:/apis/apps/v1/namespaces/deployment-4893/replicasets/test-rolling-update-controller,UID:32d36cef-f1c2-43ce-bb81-39efaa73217f,ResourceVersion:19901056,Generation:2,CreationTimestamp:2020-01-09 13:24:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment e2c39f37-df44-47cc-84e2-1e36afea0115 0xc0034c8c0f 0xc0034c8c20}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 9 13:25:09.466: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-6blbk" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-6blbk,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-4893,SelfLink:/api/v1/namespaces/deployment-4893/pods/test-rolling-update-deployment-79f6b9d75c-6blbk,UID:38a9fbac-12ef-43ab-9585-c994cc906240,ResourceVersion:19901047,Generation:0,CreationTimestamp:2020-01-09 13:25:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 832bc566-237d-4217-9860-28535756a595 0xc0034c95e7 0xc0034c95e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hb955 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hb955,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-hb955 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0034c9660} {node.kubernetes.io/unreachable Exists NoExecute 0xc0034c9680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 13:25:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 13:25:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 13:25:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 13:25:01 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-09 13:25:01 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-09 13:25:07 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://18db1597ad42761429b9cdb4e14093a9df32dc77b1c3d54c348f3e6e7a59702d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:25:09.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "deployment-4893" for this suite. Jan 9 13:25:15.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:25:15.942: INFO: namespace deployment-4893 deletion completed in 6.470155696s • [SLOW TEST:24.161 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:25:15.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-3376318f-5992-4605-9557-25aa94f3f729 STEP: Creating a pod to test consume configMaps Jan 9 13:25:16.097: INFO: Waiting up to 5m0s for pod "pod-configmaps-bd03bd68-c195-4b54-9798-697654d60299" in namespace "configmap-5952" to be "success or failure" Jan 9 13:25:16.103: INFO: Pod "pod-configmaps-bd03bd68-c195-4b54-9798-697654d60299": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.07138ms Jan 9 13:25:18.113: INFO: Pod "pod-configmaps-bd03bd68-c195-4b54-9798-697654d60299": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015735609s Jan 9 13:25:20.135: INFO: Pod "pod-configmaps-bd03bd68-c195-4b54-9798-697654d60299": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037716592s Jan 9 13:25:22.154: INFO: Pod "pod-configmaps-bd03bd68-c195-4b54-9798-697654d60299": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056573849s Jan 9 13:25:24.161: INFO: Pod "pod-configmaps-bd03bd68-c195-4b54-9798-697654d60299": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063833911s Jan 9 13:25:26.180: INFO: Pod "pod-configmaps-bd03bd68-c195-4b54-9798-697654d60299": Phase="Pending", Reason="", readiness=false. Elapsed: 10.082623888s Jan 9 13:25:28.190: INFO: Pod "pod-configmaps-bd03bd68-c195-4b54-9798-697654d60299": Phase="Pending", Reason="", readiness=false. Elapsed: 12.0923124s Jan 9 13:25:30.201: INFO: Pod "pod-configmaps-bd03bd68-c195-4b54-9798-697654d60299": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.103779377s STEP: Saw pod success Jan 9 13:25:30.201: INFO: Pod "pod-configmaps-bd03bd68-c195-4b54-9798-697654d60299" satisfied condition "success or failure" Jan 9 13:25:30.206: INFO: Trying to get logs from node iruya-node pod pod-configmaps-bd03bd68-c195-4b54-9798-697654d60299 container configmap-volume-test: STEP: delete the pod Jan 9 13:25:30.313: INFO: Waiting for pod pod-configmaps-bd03bd68-c195-4b54-9798-697654d60299 to disappear Jan 9 13:25:30.325: INFO: Pod pod-configmaps-bd03bd68-c195-4b54-9798-697654d60299 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:25:30.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5952" for this suite. 
Jan 9 13:25:36.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:25:36.558: INFO: namespace configmap-5952 deletion completed in 6.226331086s • [SLOW TEST:20.616 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:25:36.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 9 13:25:36.730: INFO: Waiting up to 5m0s for pod "downwardapi-volume-73696727-34e1-4c53-8b16-883e8684767a" in namespace "downward-api-587" to be "success or failure" Jan 9 13:25:36.761: INFO: Pod "downwardapi-volume-73696727-34e1-4c53-8b16-883e8684767a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 30.638744ms Jan 9 13:25:38.773: INFO: Pod "downwardapi-volume-73696727-34e1-4c53-8b16-883e8684767a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042329881s Jan 9 13:25:40.789: INFO: Pod "downwardapi-volume-73696727-34e1-4c53-8b16-883e8684767a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059116084s Jan 9 13:25:42.802: INFO: Pod "downwardapi-volume-73696727-34e1-4c53-8b16-883e8684767a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071471174s Jan 9 13:25:44.810: INFO: Pod "downwardapi-volume-73696727-34e1-4c53-8b16-883e8684767a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0801349s Jan 9 13:25:46.819: INFO: Pod "downwardapi-volume-73696727-34e1-4c53-8b16-883e8684767a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.089146686s STEP: Saw pod success Jan 9 13:25:46.819: INFO: Pod "downwardapi-volume-73696727-34e1-4c53-8b16-883e8684767a" satisfied condition "success or failure" Jan 9 13:25:46.823: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-73696727-34e1-4c53-8b16-883e8684767a container client-container: STEP: delete the pod Jan 9 13:25:46.903: INFO: Waiting for pod downwardapi-volume-73696727-34e1-4c53-8b16-883e8684767a to disappear Jan 9 13:25:46.912: INFO: Pod downwardapi-volume-73696727-34e1-4c53-8b16-883e8684767a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:25:46.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-587" for this suite. 
Jan 9 13:25:54.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:25:55.074: INFO: namespace downward-api-587 deletion completed in 8.148608112s • [SLOW TEST:18.515 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:25:55.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 9 13:25:55.383: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"19a605b4-69da-4d41-86ba-caea1d986bc8", Controller:(*bool)(0xc001a98282), BlockOwnerDeletion:(*bool)(0xc001a98283)}} Jan 9 13:25:55.407: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"7e82e575-c22b-4596-b61f-915ce67e4450", Controller:(*bool)(0xc0011c990a), BlockOwnerDeletion:(*bool)(0xc0011c990b)}} Jan 9 13:25:55.424: INFO: 
pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"073abd95-fd33-462f-ad47-d364cde77dce", Controller:(*bool)(0xc001e425ea), BlockOwnerDeletion:(*bool)(0xc001e425eb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:26:00.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5446" for this suite. Jan 9 13:26:08.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:26:08.702: INFO: namespace gc-5446 deletion completed in 8.209592304s • [SLOW TEST:13.627 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:26:08.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name 
projected-configmap-test-volume-a51cbfb1-d100-48a1-8bb6-83c716cb8f62 STEP: Creating a pod to test consume configMaps Jan 9 13:26:09.672: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f803833b-0b50-46b7-902d-2c261a9d79e1" in namespace "projected-1004" to be "success or failure" Jan 9 13:26:09.677: INFO: Pod "pod-projected-configmaps-f803833b-0b50-46b7-902d-2c261a9d79e1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.543851ms Jan 9 13:26:12.411: INFO: Pod "pod-projected-configmaps-f803833b-0b50-46b7-902d-2c261a9d79e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.73952186s Jan 9 13:26:14.417: INFO: Pod "pod-projected-configmaps-f803833b-0b50-46b7-902d-2c261a9d79e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.745192032s Jan 9 13:26:16.424: INFO: Pod "pod-projected-configmaps-f803833b-0b50-46b7-902d-2c261a9d79e1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.752679051s Jan 9 13:26:18.433: INFO: Pod "pod-projected-configmaps-f803833b-0b50-46b7-902d-2c261a9d79e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.761723953s STEP: Saw pod success Jan 9 13:26:18.434: INFO: Pod "pod-projected-configmaps-f803833b-0b50-46b7-902d-2c261a9d79e1" satisfied condition "success or failure" Jan 9 13:26:18.444: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-f803833b-0b50-46b7-902d-2c261a9d79e1 container projected-configmap-volume-test: STEP: delete the pod Jan 9 13:26:18.513: INFO: Waiting for pod pod-projected-configmaps-f803833b-0b50-46b7-902d-2c261a9d79e1 to disappear Jan 9 13:26:18.524: INFO: Pod pod-projected-configmaps-f803833b-0b50-46b7-902d-2c261a9d79e1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:26:18.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1004" for this suite. 
Jan 9 13:26:24.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:26:24.658: INFO: namespace projected-1004 deletion completed in 6.124823506s • [SLOW TEST:15.956 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:26:24.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-4f0bba79-3617-4249-bd04-763393937442 STEP: Creating a pod to test consume configMaps Jan 9 13:26:24.794: INFO: Waiting up to 5m0s for pod "pod-configmaps-49383187-54bc-4faa-aecd-14875ac2bab8" in namespace "configmap-5286" to be "success or failure" Jan 9 13:26:24.852: INFO: Pod "pod-configmaps-49383187-54bc-4faa-aecd-14875ac2bab8": Phase="Pending", Reason="", readiness=false. Elapsed: 58.010055ms Jan 9 13:26:26.878: INFO: Pod "pod-configmaps-49383187-54bc-4faa-aecd-14875ac2bab8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.084493645s Jan 9 13:26:28.919: INFO: Pod "pod-configmaps-49383187-54bc-4faa-aecd-14875ac2bab8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12507191s Jan 9 13:26:30.935: INFO: Pod "pod-configmaps-49383187-54bc-4faa-aecd-14875ac2bab8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141485231s Jan 9 13:26:32.949: INFO: Pod "pod-configmaps-49383187-54bc-4faa-aecd-14875ac2bab8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.154695096s Jan 9 13:26:34.955: INFO: Pod "pod-configmaps-49383187-54bc-4faa-aecd-14875ac2bab8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.160654047s STEP: Saw pod success Jan 9 13:26:34.955: INFO: Pod "pod-configmaps-49383187-54bc-4faa-aecd-14875ac2bab8" satisfied condition "success or failure" Jan 9 13:26:34.957: INFO: Trying to get logs from node iruya-node pod pod-configmaps-49383187-54bc-4faa-aecd-14875ac2bab8 container configmap-volume-test: STEP: delete the pod Jan 9 13:26:35.175: INFO: Waiting for pod pod-configmaps-49383187-54bc-4faa-aecd-14875ac2bab8 to disappear Jan 9 13:26:35.187: INFO: Pod pod-configmaps-49383187-54bc-4faa-aecd-14875ac2bab8 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:26:35.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5286" for this suite. 
Jan 9 13:26:41.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:26:41.434: INFO: namespace configmap-5286 deletion completed in 6.222751842s • [SLOW TEST:16.776 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:26:41.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Jan 9 13:26:41.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-200' Jan 9 13:26:41.899: INFO: stderr: "" Jan 9 13:26:41.899: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. 
Jan 9 13:26:42.910: INFO: Selector matched 1 pods for map[app:redis] Jan 9 13:26:42.910: INFO: Found 0 / 1 Jan 9 13:26:43.920: INFO: Selector matched 1 pods for map[app:redis] Jan 9 13:26:43.920: INFO: Found 0 / 1 Jan 9 13:26:44.922: INFO: Selector matched 1 pods for map[app:redis] Jan 9 13:26:44.922: INFO: Found 0 / 1 Jan 9 13:26:45.915: INFO: Selector matched 1 pods for map[app:redis] Jan 9 13:26:45.915: INFO: Found 0 / 1 Jan 9 13:26:46.907: INFO: Selector matched 1 pods for map[app:redis] Jan 9 13:26:46.907: INFO: Found 0 / 1 Jan 9 13:26:47.913: INFO: Selector matched 1 pods for map[app:redis] Jan 9 13:26:47.913: INFO: Found 0 / 1 Jan 9 13:26:48.915: INFO: Selector matched 1 pods for map[app:redis] Jan 9 13:26:48.915: INFO: Found 0 / 1 Jan 9 13:26:49.910: INFO: Selector matched 1 pods for map[app:redis] Jan 9 13:26:49.910: INFO: Found 0 / 1 Jan 9 13:26:50.908: INFO: Selector matched 1 pods for map[app:redis] Jan 9 13:26:50.908: INFO: Found 1 / 1 Jan 9 13:26:50.908: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 9 13:26:50.913: INFO: Selector matched 1 pods for map[app:redis] Jan 9 13:26:50.913: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Jan 9 13:26:50.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-sdwcg redis-master --namespace=kubectl-200' Jan 9 13:26:51.043: INFO: stderr: "" Jan 9 13:26:51.043: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 09 Jan 13:26:49.283 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 09 Jan 13:26:49.284 # Server started, Redis version 3.2.12\n1:M 09 Jan 13:26:49.284 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 09 Jan 13:26:49.284 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jan 9 13:26:51.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-sdwcg redis-master --namespace=kubectl-200 --tail=1' Jan 9 13:26:51.180: INFO: stderr: "" Jan 9 13:26:51.180: INFO: stdout: "1:M 09 Jan 13:26:49.284 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jan 9 13:26:51.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-sdwcg redis-master --namespace=kubectl-200 --limit-bytes=1' Jan 9 13:26:51.367: INFO: stderr: "" Jan 9 13:26:51.367: INFO: stdout: " " STEP: exposing timestamps Jan 9 13:26:51.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-sdwcg redis-master --namespace=kubectl-200 --tail=1 --timestamps' Jan 9 13:26:51.525: INFO: stderr: "" Jan 9 13:26:51.525: INFO: stdout: 
"2020-01-09T13:26:49.285058824Z 1:M 09 Jan 13:26:49.284 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jan 9 13:26:54.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-sdwcg redis-master --namespace=kubectl-200 --since=1s' Jan 9 13:26:54.158: INFO: stderr: "" Jan 9 13:26:54.158: INFO: stdout: "" Jan 9 13:26:54.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-sdwcg redis-master --namespace=kubectl-200 --since=24h' Jan 9 13:26:54.276: INFO: stderr: "" Jan 9 13:26:54.276: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 09 Jan 13:26:49.283 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 09 Jan 13:26:49.284 # Server started, Redis version 3.2.12\n1:M 09 Jan 13:26:49.284 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 09 Jan 13:26:49.284 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Jan 9 13:26:54.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-200' Jan 9 13:26:54.360: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 9 13:26:54.361: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jan 9 13:26:54.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-200' Jan 9 13:26:54.465: INFO: stderr: "No resources found.\n" Jan 9 13:26:54.466: INFO: stdout: "" Jan 9 13:26:54.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-200 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 9 13:26:54.592: INFO: stderr: "" Jan 9 13:26:54.592: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:26:54.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-200" for this suite. 
Jan 9 13:27:16.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:27:16.718: INFO: namespace kubectl-200 deletion completed in 22.115357619s • [SLOW TEST:35.284 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:27:16.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jan 9 13:27:27.444: INFO: Successfully updated pod "labelsupdate090b8634-2bbf-48ec-8487-d6b48bd540c0" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:27:29.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"downward-api-2990" for this suite. Jan 9 13:27:53.613: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:27:53.694: INFO: namespace downward-api-2990 deletion completed in 24.190040266s • [SLOW TEST:36.975 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:27:53.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 9 13:27:53.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-7822' Jan 9 13:27:53.917: INFO: stderr: "kubectl run 
--generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 9 13:27:53.917: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Jan 9 13:27:53.996: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jan 9 13:27:54.011: INFO: scanned /root for discovery docs: Jan 9 13:27:54.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-7822' Jan 9 13:28:20.319: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 9 13:28:20.319: INFO: stdout: "Created e2e-test-nginx-rc-fd3fb862eb75cdd1d43ed84bb4526b2c\nScaling up e2e-test-nginx-rc-fd3fb862eb75cdd1d43ed84bb4526b2c from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-fd3fb862eb75cdd1d43ed84bb4526b2c up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-fd3fb862eb75cdd1d43ed84bb4526b2c to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Jan 9 13:28:20.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-7822' Jan 9 13:28:20.456: INFO: stderr: "" Jan 9 13:28:20.457: INFO: stdout: "e2e-test-nginx-rc-fd3fb862eb75cdd1d43ed84bb4526b2c-f5kvk " Jan 9 13:28:20.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-fd3fb862eb75cdd1d43ed84bb4526b2c-f5kvk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7822' Jan 9 13:28:20.562: INFO: stderr: "" Jan 9 13:28:20.562: INFO: stdout: "true" Jan 9 13:28:20.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-fd3fb862eb75cdd1d43ed84bb4526b2c-f5kvk -o template --template={{if (exists .
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7822' Jan 9 13:28:20.670: INFO: stderr: "" Jan 9 13:28:20.670: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Jan 9 13:28:20.670: INFO: e2e-test-nginx-rc-fd3fb862eb75cdd1d43ed84bb4526b2c-f5kvk is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Jan 9 13:28:20.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-7822' Jan 9 13:28:20.834: INFO: stderr: "" Jan 9 13:28:20.834: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:28:20.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7822" for this suite. 
Jan 9 13:28:43.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:28:44.079: INFO: namespace kubectl-7822 deletion completed in 23.224871012s • [SLOW TEST:50.385 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:28:44.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 9 13:28:44.376: INFO: Waiting up to 5m0s for pod "pod-c6facf62-48f2-47e9-982b-008022152ccd" in namespace "emptydir-1894" to be "success or failure" Jan 9 13:28:44.406: INFO: Pod "pod-c6facf62-48f2-47e9-982b-008022152ccd": Phase="Pending", Reason="", readiness=false. Elapsed: 29.94162ms Jan 9 13:28:46.414: INFO: Pod "pod-c6facf62-48f2-47e9-982b-008022152ccd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.038012477s Jan 9 13:28:48.425: INFO: Pod "pod-c6facf62-48f2-47e9-982b-008022152ccd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048371297s Jan 9 13:28:50.434: INFO: Pod "pod-c6facf62-48f2-47e9-982b-008022152ccd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058017298s Jan 9 13:28:52.442: INFO: Pod "pod-c6facf62-48f2-47e9-982b-008022152ccd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065500243s STEP: Saw pod success Jan 9 13:28:52.442: INFO: Pod "pod-c6facf62-48f2-47e9-982b-008022152ccd" satisfied condition "success or failure" Jan 9 13:28:52.444: INFO: Trying to get logs from node iruya-node pod pod-c6facf62-48f2-47e9-982b-008022152ccd container test-container: STEP: delete the pod Jan 9 13:28:52.506: INFO: Waiting for pod pod-c6facf62-48f2-47e9-982b-008022152ccd to disappear Jan 9 13:28:52.514: INFO: Pod pod-c6facf62-48f2-47e9-982b-008022152ccd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:28:52.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1894" for this suite. 
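The emptydir `(non-root,0644,default)` test above creates a pod that writes a file with 0644 permissions into a default-medium emptyDir volume as a non-root user. A minimal sketch of that kind of pod (assumed spec, not the exact manifest the e2e framework generates):

```yaml
# Sketch: non-root container exercising a default-medium (node-disk) emptyDir with 0644 file mode.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo
spec:
  securityContext:
    runAsUser: 1000          # non-root, per the test's (non-root,...) variant
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo content > /test-volume/file && chmod 0644 /test-volume/file && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}             # no medium set => default (backed by node storage)
  restartPolicy: Never
```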
Jan 9 13:28:58.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:28:58.668: INFO: namespace emptydir-1894 deletion completed in 6.147341976s • [SLOW TEST:14.588 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:28:58.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-3938 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3938 to expose endpoints map[] Jan 9 13:28:58.791: INFO: Get endpoints failed (3.376932ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jan 9 13:28:59.799: INFO: successfully validated that service endpoint-test2 in namespace services-3938 exposes endpoints map[] (1.011980302s elapsed) STEP: Creating pod pod1 in namespace services-3938 STEP: waiting up to 3m0s for service endpoint-test2 in namespace 
services-3938 to expose endpoints map[pod1:[80]] Jan 9 13:29:03.937: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.105299723s elapsed, will retry) Jan 9 13:29:07.999: INFO: successfully validated that service endpoint-test2 in namespace services-3938 exposes endpoints map[pod1:[80]] (8.167430967s elapsed) STEP: Creating pod pod2 in namespace services-3938 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3938 to expose endpoints map[pod1:[80] pod2:[80]] Jan 9 13:29:12.596: INFO: Unexpected endpoints: found map[e06b986b-7e13-43bf-8c95-799b17974f85:[80]], expected map[pod1:[80] pod2:[80]] (4.589568871s elapsed, will retry) Jan 9 13:29:17.175: INFO: successfully validated that service endpoint-test2 in namespace services-3938 exposes endpoints map[pod1:[80] pod2:[80]] (9.168994446s elapsed) STEP: Deleting pod pod1 in namespace services-3938 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3938 to expose endpoints map[pod2:[80]] Jan 9 13:29:17.225: INFO: successfully validated that service endpoint-test2 in namespace services-3938 exposes endpoints map[pod2:[80]] (35.511175ms elapsed) STEP: Deleting pod pod2 in namespace services-3938 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3938 to expose endpoints map[] Jan 9 13:29:17.265: INFO: successfully validated that service endpoint-test2 in namespace services-3938 exposes endpoints map[] (15.244917ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:29:17.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3938" for this suite. 
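The endpoint test above repeatedly checks that the service's Endpoints object tracks pod creation and deletion. A sketch of the service/pod pairing being exercised (the service name is from the log; the pod spec and selector labels are assumptions for illustration):

```yaml
# Sketch: endpoints for endpoint-test2 appear once a Running pod matches the selector,
# and disappear when that pod is deleted — the map[] / map[pod1:[80]] transitions above.
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    app: endpoint-test2      # assumed label; the e2e framework uses its own labels
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    app: endpoint-test2
spec:
  containers:
  - name: pod1
    image: nginx
    ports:
    - containerPort: 80
```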
Jan 9 13:29:41.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:29:41.455: INFO: namespace services-3938 deletion completed in 24.156998018s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:42.786 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:29:41.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 9 13:29:41.661: INFO: Waiting up to 5m0s for pod "downwardapi-volume-226c2a18-1e33-4fc9-a67f-83cb43e54932" in namespace "downward-api-4800" to be "success or failure" Jan 9 13:29:41.684: INFO: Pod "downwardapi-volume-226c2a18-1e33-4fc9-a67f-83cb43e54932": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.488786ms Jan 9 13:29:43.693: INFO: Pod "downwardapi-volume-226c2a18-1e33-4fc9-a67f-83cb43e54932": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031449245s Jan 9 13:29:45.700: INFO: Pod "downwardapi-volume-226c2a18-1e33-4fc9-a67f-83cb43e54932": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038990794s Jan 9 13:29:47.716: INFO: Pod "downwardapi-volume-226c2a18-1e33-4fc9-a67f-83cb43e54932": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055129445s Jan 9 13:29:49.730: INFO: Pod "downwardapi-volume-226c2a18-1e33-4fc9-a67f-83cb43e54932": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068916417s Jan 9 13:29:51.738: INFO: Pod "downwardapi-volume-226c2a18-1e33-4fc9-a67f-83cb43e54932": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.076817765s STEP: Saw pod success Jan 9 13:29:51.738: INFO: Pod "downwardapi-volume-226c2a18-1e33-4fc9-a67f-83cb43e54932" satisfied condition "success or failure" Jan 9 13:29:51.746: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-226c2a18-1e33-4fc9-a67f-83cb43e54932 container client-container: STEP: delete the pod Jan 9 13:29:56.835: INFO: Waiting for pod downwardapi-volume-226c2a18-1e33-4fc9-a67f-83cb43e54932 to disappear Jan 9 13:29:56.875: INFO: Pod downwardapi-volume-226c2a18-1e33-4fc9-a67f-83cb43e54932 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:29:56.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4800" for this suite. 
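The "should provide podname only" test above mounts a downward API volume so the container can read its own pod name from a file. A minimal sketch (assumed manifest; the framework's container name `client-container` matches the log):

```yaml
# Sketch: the downward API volume projects metadata.name into /etc/podinfo/podname.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]   # prints the pod's own name
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
  restartPolicy: Never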
Jan 9 13:30:03.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:30:03.173: INFO: namespace downward-api-4800 deletion completed in 6.130076673s • [SLOW TEST:21.718 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:30:03.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-e7e81454-560f-456e-85d3-f6f7d0235317 STEP: Creating a pod to test consume secrets Jan 9 13:30:03.301: INFO: Waiting up to 5m0s for pod "pod-secrets-71e4b54f-962d-4831-8b2f-421a5e15aadb" in namespace "secrets-2490" to be "success or failure" Jan 9 13:30:03.306: INFO: Pod "pod-secrets-71e4b54f-962d-4831-8b2f-421a5e15aadb": Phase="Pending", Reason="", readiness=false. Elapsed: 5.015837ms Jan 9 13:30:08.606: INFO: Pod "pod-secrets-71e4b54f-962d-4831-8b2f-421a5e15aadb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.305059251s Jan 9 13:30:10.617: INFO: Pod "pod-secrets-71e4b54f-962d-4831-8b2f-421a5e15aadb": Phase="Pending", Reason="", readiness=false. Elapsed: 7.315417694s Jan 9 13:30:12.630: INFO: Pod "pod-secrets-71e4b54f-962d-4831-8b2f-421a5e15aadb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.328746655s Jan 9 13:30:14.655: INFO: Pod "pod-secrets-71e4b54f-962d-4831-8b2f-421a5e15aadb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.354297544s Jan 9 13:30:16.662: INFO: Pod "pod-secrets-71e4b54f-962d-4831-8b2f-421a5e15aadb": Phase="Pending", Reason="", readiness=false. Elapsed: 13.361141025s Jan 9 13:30:18.666: INFO: Pod "pod-secrets-71e4b54f-962d-4831-8b2f-421a5e15aadb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.364415516s STEP: Saw pod success Jan 9 13:30:18.666: INFO: Pod "pod-secrets-71e4b54f-962d-4831-8b2f-421a5e15aadb" satisfied condition "success or failure" Jan 9 13:30:18.667: INFO: Trying to get logs from node iruya-node pod pod-secrets-71e4b54f-962d-4831-8b2f-421a5e15aadb container secret-volume-test: STEP: delete the pod Jan 9 13:30:18.810: INFO: Waiting for pod pod-secrets-71e4b54f-962d-4831-8b2f-421a5e15aadb to disappear Jan 9 13:30:18.833: INFO: Pod pod-secrets-71e4b54f-962d-4831-8b2f-421a5e15aadb no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:30:18.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2490" for this suite. 
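The secrets test above mounts a secret as a volume and verifies the container can read its keys as files. A sketch of that pattern (names assumed for illustration; the container name `secret-volume-test` matches the log):

```yaml
# Sketch: each key in the secret becomes a file under the mount path.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
data:
  data-1: dmFsdWUtMQ==       # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
  restartPolicy: Never
```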
Jan 9 13:30:26.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:30:26.733: INFO: namespace secrets-2490 deletion completed in 7.893133324s • [SLOW TEST:23.559 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:30:26.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Jan 9 13:30:35.016: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jan 9 13:30:50.208: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:30:50.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2085" for this suite. Jan 9 13:30:56.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:30:56.405: INFO: namespace pods-2085 deletion completed in 6.182837905s • [SLOW TEST:29.672 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:30:56.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 9 13:30:57.988: INFO: Waiting up to 5m0s for pod "pod-b5942f70-826f-46d2-a172-78dd46fc01f9" in namespace "emptydir-2716" to be "success or failure" Jan 9 
13:30:58.105: INFO: Pod "pod-b5942f70-826f-46d2-a172-78dd46fc01f9": Phase="Pending", Reason="", readiness=false. Elapsed: 117.18064ms Jan 9 13:31:00.112: INFO: Pod "pod-b5942f70-826f-46d2-a172-78dd46fc01f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124047769s Jan 9 13:31:02.121: INFO: Pod "pod-b5942f70-826f-46d2-a172-78dd46fc01f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132780356s Jan 9 13:31:04.125: INFO: Pod "pod-b5942f70-826f-46d2-a172-78dd46fc01f9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137205269s Jan 9 13:31:06.139: INFO: Pod "pod-b5942f70-826f-46d2-a172-78dd46fc01f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.151324232s STEP: Saw pod success Jan 9 13:31:06.139: INFO: Pod "pod-b5942f70-826f-46d2-a172-78dd46fc01f9" satisfied condition "success or failure" Jan 9 13:31:06.143: INFO: Trying to get logs from node iruya-node pod pod-b5942f70-826f-46d2-a172-78dd46fc01f9 container test-container: STEP: delete the pod Jan 9 13:31:06.292: INFO: Waiting for pod pod-b5942f70-826f-46d2-a172-78dd46fc01f9 to disappear Jan 9 13:31:06.320: INFO: Pod pod-b5942f70-826f-46d2-a172-78dd46fc01f9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:31:06.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2716" for this suite. 
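The `(non-root,0644,tmpfs)` variant above differs from the default-medium emptyDir case only in the volume's `medium` field, which backs the volume with tmpfs instead of node disk. A sketch (assumed pod spec, not the framework's exact manifest):

```yaml
# Sketch: setting medium: Memory makes the emptyDir a RAM-backed tmpfs mount.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  securityContext:
    runAsUser: 1000          # non-root variant
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo content > /test-volume/file && chmod 0644 /test-volume/file && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory         # tmpfs-backed; counts against container memory
  restartPolicy: Never
```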
Jan 9 13:31:12.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:31:12.441: INFO: namespace emptydir-2716 deletion completed in 6.113686681s • [SLOW TEST:16.035 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:31:12.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-19694933-92c8-455a-81c1-8bc108286efc STEP: Creating configMap with name cm-test-opt-upd-4f650860-3589-4e14-8ace-31a652b2ba34 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-19694933-92c8-455a-81c1-8bc108286efc STEP: Updating configmap cm-test-opt-upd-4f650860-3589-4e14-8ace-31a652b2ba34 STEP: Creating configMap with name cm-test-opt-create-d264eb26-62c0-48ab-b51f-c5f79a475911 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 
Jan 9 13:32:48.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5792" for this suite. Jan 9 13:33:10.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:33:10.267: INFO: namespace projected-5792 deletion completed in 22.106893326s • [SLOW TEST:117.826 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:33:10.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jan 9 13:33:10.587: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4263,SelfLink:/api/v1/namespaces/watch-4263/configmaps/e2e-watch-test-label-changed,UID:42b8bcdd-26cc-4b34-acd8-289ee4ae69af,ResourceVersion:19902202,Generation:0,CreationTimestamp:2020-01-09 13:33:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 9 13:33:10.588: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4263,SelfLink:/api/v1/namespaces/watch-4263/configmaps/e2e-watch-test-label-changed,UID:42b8bcdd-26cc-4b34-acd8-289ee4ae69af,ResourceVersion:19902203,Generation:0,CreationTimestamp:2020-01-09 13:33:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 9 13:33:10.588: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4263,SelfLink:/api/v1/namespaces/watch-4263/configmaps/e2e-watch-test-label-changed,UID:42b8bcdd-26cc-4b34-acd8-289ee4ae69af,ResourceVersion:19902204,Generation:0,CreationTimestamp:2020-01-09 13:33:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jan 9 13:33:20.655: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4263,SelfLink:/api/v1/namespaces/watch-4263/configmaps/e2e-watch-test-label-changed,UID:42b8bcdd-26cc-4b34-acd8-289ee4ae69af,ResourceVersion:19902219,Generation:0,CreationTimestamp:2020-01-09 13:33:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 9 13:33:20.656: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4263,SelfLink:/api/v1/namespaces/watch-4263/configmaps/e2e-watch-test-label-changed,UID:42b8bcdd-26cc-4b34-acd8-289ee4ae69af,ResourceVersion:19902220,Generation:0,CreationTimestamp:2020-01-09 13:33:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jan 9 13:33:20.656: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4263,SelfLink:/api/v1/namespaces/watch-4263/configmaps/e2e-watch-test-label-changed,UID:42b8bcdd-26cc-4b34-acd8-289ee4ae69af,ResourceVersion:19902221,Generation:0,CreationTimestamp:2020-01-09 13:33:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:33:20.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4263" for this suite. Jan 9 13:33:26.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:33:26.801: INFO: namespace watch-4263 deletion completed in 6.112509469s • [SLOW TEST:16.533 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:33:26.801: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-93febd8f-3e99-44c0-bf33-f55061b45fae in namespace container-probe-9307 Jan 9 13:33:35.073: INFO: Started pod busybox-93febd8f-3e99-44c0-bf33-f55061b45fae in namespace container-probe-9307 STEP: checking the pod's current state and verifying that restartCount is present Jan 9 13:33:35.079: INFO: Initial restart count of pod busybox-93febd8f-3e99-44c0-bf33-f55061b45fae is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:37:36.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9307" for this suite. 
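(Annotation) The liveness-probe case above asserts that restartCount stays 0 while an exec probe (`cat /tmp/health`) keeps succeeding. A minimal sketch of that decision logic, assuming a simplified model of the kubelet's probe loop (function names and the failure-threshold handling here are illustrative, not the real kubelet implementation):

```python
import os
import subprocess
import tempfile

def probe_once(path):
    """Return True if `cat <path>` exits 0, i.e. the exec probe succeeds."""
    return subprocess.run(["cat", path], capture_output=True).returncode == 0

def run_probes(path, attempts, failure_threshold=3):
    """Simplified model: restart only after `failure_threshold` consecutive failures."""
    consecutive_failures = 0
    restart_count = 0
    for _ in range(attempts):
        if probe_once(path):
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures >= failure_threshold:
                restart_count += 1
                consecutive_failures = 0
    return restart_count

# A health file that exists -> every probe succeeds -> restartCount stays 0,
# which is what the test asserts over its observation window.
f = tempfile.NamedTemporaryFile(delete=False)
f.write(b"ok")
f.close()
print(run_probes(f.name, attempts=10))  # prints 0
os.unlink(f.name)
```

Conversely, removing `/tmp/health` inside the container makes consecutive probes fail until the threshold trips and the container is restarted, which is the companion `should be restarted` case in the same suite.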
Jan 9 13:37:42.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:37:42.687: INFO: namespace container-probe-9307 deletion completed in 6.142570607s • [SLOW TEST:255.887 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:37:42.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jan 9 13:37:42.789: INFO: Number of nodes with available pods: 0
Jan 9 13:37:42.789: INFO: Node iruya-node is running more than one daemon pod
Jan 9 13:37:43.812: INFO: Number of nodes with available pods: 0
Jan 9 13:37:43.812: INFO: Node iruya-node is running more than one daemon pod
Jan 9 13:37:44.802: INFO: Number of nodes with available pods: 0
Jan 9 13:37:44.802: INFO: Node iruya-node is running more than one daemon pod
Jan 9 13:37:45.810: INFO: Number of nodes with available pods: 0
Jan 9 13:37:45.810: INFO: Node iruya-node is running more than one daemon pod
Jan 9 13:37:46.809: INFO: Number of nodes with available pods: 0
Jan 9 13:37:46.809: INFO: Node iruya-node is running more than one daemon pod
Jan 9 13:37:48.721: INFO: Number of nodes with available pods: 0
Jan 9 13:37:48.721: INFO: Node iruya-node is running more than one daemon pod
Jan 9 13:37:49.908: INFO: Number of nodes with available pods: 0
Jan 9 13:37:49.909: INFO: Node iruya-node is running more than one daemon pod
Jan 9 13:37:51.131: INFO: Number of nodes with available pods: 0
Jan 9 13:37:51.131: INFO: Node iruya-node is running more than one daemon pod
Jan 9 13:37:51.874: INFO: Number of nodes with available pods: 0
Jan 9 13:37:51.874: INFO: Node iruya-node is running more than one daemon pod
Jan 9 13:37:52.810: INFO: Number of nodes with available pods: 2
Jan 9 13:37:52.811: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 9 13:37:52.914: INFO: Number of nodes with available pods: 1
Jan 9 13:37:52.914: INFO: Node iruya-node is running more than one daemon pod
Jan 9 13:37:53.937: INFO: Number of nodes with available pods: 1
Jan 9 13:37:53.937: INFO: Node iruya-node is running more than one daemon pod
Jan 9 13:37:54.939: INFO: Number of nodes with available pods: 1
Jan 9 13:37:54.939: INFO: Node iruya-node is running more than one daemon pod
Jan 9 13:37:55.993: INFO: Number of nodes with available pods: 1
Jan 9 13:37:55.994: INFO: Node iruya-node is running more than one daemon pod
Jan 9 13:37:56.928: INFO: Number of nodes with available pods: 1
Jan 9 13:37:56.928: INFO: Node iruya-node is running more than one daemon pod
Jan 9 13:37:57.933: INFO: Number of nodes with available pods: 1
Jan 9 13:37:57.933: INFO: Node iruya-node is running more than one daemon pod
Jan 9 13:37:58.945: INFO: Number of nodes with available pods: 1
Jan 9 13:37:58.945: INFO: Node iruya-node is running more than one daemon pod
Jan 9 13:38:00.006: INFO: Number of nodes with available pods: 1
Jan 9 13:38:00.006: INFO: Node iruya-node is running more than one daemon pod
Jan 9 13:38:00.934: INFO: Number of nodes with available pods: 1
Jan 9 13:38:00.934: INFO: Node iruya-node is running more than one daemon pod
Jan 9 13:38:01.932: INFO: Number of nodes with available pods: 1
Jan 9 13:38:01.932: INFO: Node iruya-node is running more than one daemon pod
Jan 9 13:38:02.932: INFO: Number of nodes with available pods: 2
Jan 9 13:38:02.932: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
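(Annotation) The "retry creating failed daemon pods" case above marks one daemon pod Failed and waits for the controller to bring the per-node count back to 2. A minimal sketch of that reconcile idea, assuming a toy model of the DaemonSet controller (the second node name `node-b` is hypothetical; only `iruya-node` appears in this log):

```python
def reconcile(nodes, pods):
    """Toy DaemonSet reconcile pass.

    `pods` maps node name -> pod phase. A node with no pod, or with a Failed
    pod, gets a fresh replacement (which starts out Pending, as in the poll
    loop above where available pods drop to 1 and then recover to 2).
    """
    new_pods = {}
    for node in nodes:
        phase = pods.get(node)
        if phase in (None, "Failed"):  # missing or failed -> recreate
            new_pods[node] = "Pending"
        else:
            new_pods[node] = phase
    return new_pods

pods = {"iruya-node": "Running", "node-b": "Failed"}
pods = reconcile(["iruya-node", "node-b"], pods)
print(pods)  # prints {'iruya-node': 'Running', 'node-b': 'Pending'}
```

The real controller additionally backs off on repeated creation failures; this sketch only shows the delete-and-recreate invariant the test asserts.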
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2032, will wait for the garbage collector to delete the pods
Jan 9 13:38:03.006: INFO: Deleting DaemonSet.extensions daemon-set took: 12.523995ms
Jan 9 13:38:03.407: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.245249ms
Jan 9 13:38:17.912: INFO: Number of nodes with available pods: 0
Jan 9 13:38:17.912: INFO: Number of running nodes: 0, number of available pods: 0
Jan 9 13:38:17.915: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2032/daemonsets","resourceVersion":"19902701"},"items":null}
Jan 9 13:38:17.918: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2032/pods","resourceVersion":"19902701"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:38:17.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2032" for this suite.
Jan 9 13:38:23.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:38:24.054: INFO: namespace daemonsets-2032 deletion completed in 6.123995292s • [SLOW TEST:41.366 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:38:24.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 9 13:38:24.152: INFO: Waiting up to 5m0s for pod "pod-c00a7390-a6bc-4d83-b6ba-6cb2bad41302" in namespace "emptydir-9539" to be "success or failure" Jan 9 13:38:24.155: INFO: Pod "pod-c00a7390-a6bc-4d83-b6ba-6cb2bad41302": Phase="Pending", Reason="", readiness=false. Elapsed: 3.045948ms Jan 9 13:38:26.167: INFO: Pod "pod-c00a7390-a6bc-4d83-b6ba-6cb2bad41302": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015178529s Jan 9 13:38:28.176: INFO: Pod "pod-c00a7390-a6bc-4d83-b6ba-6cb2bad41302": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.02421127s Jan 9 13:38:30.204: INFO: Pod "pod-c00a7390-a6bc-4d83-b6ba-6cb2bad41302": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052467239s Jan 9 13:38:32.212: INFO: Pod "pod-c00a7390-a6bc-4d83-b6ba-6cb2bad41302": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060035423s Jan 9 13:38:34.228: INFO: Pod "pod-c00a7390-a6bc-4d83-b6ba-6cb2bad41302": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07596117s STEP: Saw pod success Jan 9 13:38:34.228: INFO: Pod "pod-c00a7390-a6bc-4d83-b6ba-6cb2bad41302" satisfied condition "success or failure" Jan 9 13:38:34.236: INFO: Trying to get logs from node iruya-node pod pod-c00a7390-a6bc-4d83-b6ba-6cb2bad41302 container test-container: STEP: delete the pod Jan 9 13:38:34.317: INFO: Waiting for pod pod-c00a7390-a6bc-4d83-b6ba-6cb2bad41302 to disappear Jan 9 13:38:34.404: INFO: Pod pod-c00a7390-a6bc-4d83-b6ba-6cb2bad41302 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:38:34.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9539" for this suite. 
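(Annotation) The `(root,0666,tmpfs)` emptyDir case above ultimately checks file permission bits inside the volume. A minimal local sketch of that check, assuming an ordinary temp file as a stand-in for the tmpfs-backed emptyDir mount (the real test inspects the file from a mount-test container):

```python
import os
import stat
import tempfile

def mode_of(path):
    """Permission bits of `path`, e.g. 0o666 for rw-rw-rw-."""
    return stat.S_IMODE(os.stat(path).st_mode)

# Create a file and set the mode under test: world-readable and -writable.
fd, path = tempfile.mkstemp()
os.write(fd, b"mount-tester content")
os.close(fd)
os.chmod(path, 0o666)

print(oct(mode_of(path)))  # prints 0o666
os.unlink(path)
```

The companion `(non-root,0666,tmpfs)` and `0644`/`0777` cases in the suite vary only the owner and these mode bits.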
Jan 9 13:38:40.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:38:40.598: INFO: namespace emptydir-9539 deletion completed in 6.186500084s • [SLOW TEST:16.544 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:38:40.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jan 9 13:39:00.840: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8717 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 9 13:39:00.840: INFO: >>> kubeConfig: /root/.kube/config I0109 13:39:00.902073 8 log.go:172] (0xc0008028f0) (0xc00272f4a0) Create stream I0109 
13:39:00.902192 8 log.go:172] (0xc0008028f0) (0xc00272f4a0) Stream added, broadcasting: 1 I0109 13:39:00.909376 8 log.go:172] (0xc0008028f0) Reply frame received for 1 I0109 13:39:00.909430 8 log.go:172] (0xc0008028f0) (0xc00176a820) Create stream I0109 13:39:00.909438 8 log.go:172] (0xc0008028f0) (0xc00176a820) Stream added, broadcasting: 3 I0109 13:39:00.911442 8 log.go:172] (0xc0008028f0) Reply frame received for 3 I0109 13:39:00.911520 8 log.go:172] (0xc0008028f0) (0xc00272f540) Create stream I0109 13:39:00.911540 8 log.go:172] (0xc0008028f0) (0xc00272f540) Stream added, broadcasting: 5 I0109 13:39:00.912844 8 log.go:172] (0xc0008028f0) Reply frame received for 5 I0109 13:39:01.007841 8 log.go:172] (0xc0008028f0) Data frame received for 3 I0109 13:39:01.007919 8 log.go:172] (0xc00176a820) (3) Data frame handling I0109 13:39:01.007943 8 log.go:172] (0xc00176a820) (3) Data frame sent I0109 13:39:01.160509 8 log.go:172] (0xc0008028f0) (0xc00176a820) Stream removed, broadcasting: 3 I0109 13:39:01.160749 8 log.go:172] (0xc0008028f0) Data frame received for 1 I0109 13:39:01.160887 8 log.go:172] (0xc0008028f0) (0xc00272f540) Stream removed, broadcasting: 5 I0109 13:39:01.160936 8 log.go:172] (0xc00272f4a0) (1) Data frame handling I0109 13:39:01.160976 8 log.go:172] (0xc00272f4a0) (1) Data frame sent I0109 13:39:01.160991 8 log.go:172] (0xc0008028f0) (0xc00272f4a0) Stream removed, broadcasting: 1 I0109 13:39:01.161005 8 log.go:172] (0xc0008028f0) Go away received I0109 13:39:01.161470 8 log.go:172] (0xc0008028f0) (0xc00272f4a0) Stream removed, broadcasting: 1 I0109 13:39:01.161487 8 log.go:172] (0xc0008028f0) (0xc00176a820) Stream removed, broadcasting: 3 I0109 13:39:01.161495 8 log.go:172] (0xc0008028f0) (0xc00272f540) Stream removed, broadcasting: 5 Jan 9 13:39:01.161: INFO: Exec stderr: "" Jan 9 13:39:01.161: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8717 PodName:test-pod ContainerName:busybox-1 Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 9 13:39:01.161: INFO: >>> kubeConfig: /root/.kube/config I0109 13:39:01.222518 8 log.go:172] (0xc0026ab130) (0xc0024ed0e0) Create stream I0109 13:39:01.222667 8 log.go:172] (0xc0026ab130) (0xc0024ed0e0) Stream added, broadcasting: 1 I0109 13:39:01.233645 8 log.go:172] (0xc0026ab130) Reply frame received for 1 I0109 13:39:01.233831 8 log.go:172] (0xc0026ab130) (0xc0004c2aa0) Create stream I0109 13:39:01.233941 8 log.go:172] (0xc0026ab130) (0xc0004c2aa0) Stream added, broadcasting: 3 I0109 13:39:01.237043 8 log.go:172] (0xc0026ab130) Reply frame received for 3 I0109 13:39:01.237098 8 log.go:172] (0xc0026ab130) (0xc0004c2b40) Create stream I0109 13:39:01.237118 8 log.go:172] (0xc0026ab130) (0xc0004c2b40) Stream added, broadcasting: 5 I0109 13:39:01.241347 8 log.go:172] (0xc0026ab130) Reply frame received for 5 I0109 13:39:01.353481 8 log.go:172] (0xc0026ab130) Data frame received for 3 I0109 13:39:01.353568 8 log.go:172] (0xc0004c2aa0) (3) Data frame handling I0109 13:39:01.353601 8 log.go:172] (0xc0004c2aa0) (3) Data frame sent I0109 13:39:01.502655 8 log.go:172] (0xc0026ab130) (0xc0004c2aa0) Stream removed, broadcasting: 3 I0109 13:39:01.502808 8 log.go:172] (0xc0026ab130) Data frame received for 1 I0109 13:39:01.502818 8 log.go:172] (0xc0024ed0e0) (1) Data frame handling I0109 13:39:01.502836 8 log.go:172] (0xc0024ed0e0) (1) Data frame sent I0109 13:39:01.502852 8 log.go:172] (0xc0026ab130) (0xc0024ed0e0) Stream removed, broadcasting: 1 I0109 13:39:01.502926 8 log.go:172] (0xc0026ab130) (0xc0004c2b40) Stream removed, broadcasting: 5 I0109 13:39:01.503165 8 log.go:172] (0xc0026ab130) Go away received I0109 13:39:01.503404 8 log.go:172] (0xc0026ab130) (0xc0024ed0e0) Stream removed, broadcasting: 1 I0109 13:39:01.503419 8 log.go:172] (0xc0026ab130) (0xc0004c2aa0) Stream removed, broadcasting: 3 I0109 13:39:01.503431 8 log.go:172] (0xc0026ab130) (0xc0004c2b40) Stream removed, broadcasting: 5 Jan 9 
13:39:01.503: INFO: Exec stderr: "" Jan 9 13:39:01.503: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8717 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 9 13:39:01.503: INFO: >>> kubeConfig: /root/.kube/config I0109 13:39:01.576308 8 log.go:172] (0xc000803c30) (0xc00272fb80) Create stream I0109 13:39:01.576415 8 log.go:172] (0xc000803c30) (0xc00272fb80) Stream added, broadcasting: 1 I0109 13:39:01.585623 8 log.go:172] (0xc000803c30) Reply frame received for 1 I0109 13:39:01.585652 8 log.go:172] (0xc000803c30) (0xc00272fc20) Create stream I0109 13:39:01.585662 8 log.go:172] (0xc000803c30) (0xc00272fc20) Stream added, broadcasting: 3 I0109 13:39:01.587271 8 log.go:172] (0xc000803c30) Reply frame received for 3 I0109 13:39:01.587320 8 log.go:172] (0xc000803c30) (0xc0002157c0) Create stream I0109 13:39:01.587337 8 log.go:172] (0xc000803c30) (0xc0002157c0) Stream added, broadcasting: 5 I0109 13:39:01.589156 8 log.go:172] (0xc000803c30) Reply frame received for 5 I0109 13:39:01.702873 8 log.go:172] (0xc000803c30) Data frame received for 3 I0109 13:39:01.702977 8 log.go:172] (0xc00272fc20) (3) Data frame handling I0109 13:39:01.703007 8 log.go:172] (0xc00272fc20) (3) Data frame sent I0109 13:39:01.847344 8 log.go:172] (0xc000803c30) (0xc00272fc20) Stream removed, broadcasting: 3 I0109 13:39:01.847553 8 log.go:172] (0xc000803c30) Data frame received for 1 I0109 13:39:01.847587 8 log.go:172] (0xc00272fb80) (1) Data frame handling I0109 13:39:01.847611 8 log.go:172] (0xc00272fb80) (1) Data frame sent I0109 13:39:01.847638 8 log.go:172] (0xc000803c30) (0xc0002157c0) Stream removed, broadcasting: 5 I0109 13:39:01.847682 8 log.go:172] (0xc000803c30) (0xc00272fb80) Stream removed, broadcasting: 1 I0109 13:39:01.847752 8 log.go:172] (0xc000803c30) Go away received I0109 13:39:01.847978 8 log.go:172] (0xc000803c30) (0xc00272fb80) Stream removed, broadcasting: 1 I0109 
13:39:01.848003 8 log.go:172] (0xc000803c30) (0xc00272fc20) Stream removed, broadcasting: 3 I0109 13:39:01.848061 8 log.go:172] (0xc000803c30) (0xc0002157c0) Stream removed, broadcasting: 5 Jan 9 13:39:01.848: INFO: Exec stderr: "" Jan 9 13:39:01.848: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8717 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 9 13:39:01.848: INFO: >>> kubeConfig: /root/.kube/config I0109 13:39:01.914911 8 log.go:172] (0xc0009d0f20) (0xc0025940a0) Create stream I0109 13:39:01.914999 8 log.go:172] (0xc0009d0f20) (0xc0025940a0) Stream added, broadcasting: 1 I0109 13:39:01.921939 8 log.go:172] (0xc0009d0f20) Reply frame received for 1 I0109 13:39:01.922001 8 log.go:172] (0xc0009d0f20) (0xc000d7f860) Create stream I0109 13:39:01.922011 8 log.go:172] (0xc0009d0f20) (0xc000d7f860) Stream added, broadcasting: 3 I0109 13:39:01.924652 8 log.go:172] (0xc0009d0f20) Reply frame received for 3 I0109 13:39:01.924723 8 log.go:172] (0xc0009d0f20) (0xc002594140) Create stream I0109 13:39:01.924732 8 log.go:172] (0xc0009d0f20) (0xc002594140) Stream added, broadcasting: 5 I0109 13:39:01.926492 8 log.go:172] (0xc0009d0f20) Reply frame received for 5 I0109 13:39:02.023955 8 log.go:172] (0xc0009d0f20) Data frame received for 3 I0109 13:39:02.024035 8 log.go:172] (0xc000d7f860) (3) Data frame handling I0109 13:39:02.024137 8 log.go:172] (0xc000d7f860) (3) Data frame sent I0109 13:39:02.150590 8 log.go:172] (0xc0009d0f20) (0xc000d7f860) Stream removed, broadcasting: 3 I0109 13:39:02.150866 8 log.go:172] (0xc0009d0f20) Data frame received for 1 I0109 13:39:02.151065 8 log.go:172] (0xc0025940a0) (1) Data frame handling I0109 13:39:02.151299 8 log.go:172] (0xc0025940a0) (1) Data frame sent I0109 13:39:02.151340 8 log.go:172] (0xc0009d0f20) (0xc002594140) Stream removed, broadcasting: 5 I0109 13:39:02.151449 8 log.go:172] (0xc0009d0f20) (0xc0025940a0) Stream 
removed, broadcasting: 1 I0109 13:39:02.151546 8 log.go:172] (0xc0009d0f20) Go away received I0109 13:39:02.152382 8 log.go:172] (0xc0009d0f20) (0xc0025940a0) Stream removed, broadcasting: 1 I0109 13:39:02.152431 8 log.go:172] (0xc0009d0f20) (0xc000d7f860) Stream removed, broadcasting: 3 I0109 13:39:02.152483 8 log.go:172] (0xc0009d0f20) (0xc002594140) Stream removed, broadcasting: 5 Jan 9 13:39:02.152: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jan 9 13:39:02.152: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8717 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 9 13:39:02.152: INFO: >>> kubeConfig: /root/.kube/config I0109 13:39:02.225914 8 log.go:172] (0xc0009d1ad0) (0xc002594460) Create stream I0109 13:39:02.225978 8 log.go:172] (0xc0009d1ad0) (0xc002594460) Stream added, broadcasting: 1 I0109 13:39:02.232847 8 log.go:172] (0xc0009d1ad0) Reply frame received for 1 I0109 13:39:02.232885 8 log.go:172] (0xc0009d1ad0) (0xc0024ed180) Create stream I0109 13:39:02.232891 8 log.go:172] (0xc0009d1ad0) (0xc0024ed180) Stream added, broadcasting: 3 I0109 13:39:02.235324 8 log.go:172] (0xc0009d1ad0) Reply frame received for 3 I0109 13:39:02.235404 8 log.go:172] (0xc0009d1ad0) (0xc00176a8c0) Create stream I0109 13:39:02.235411 8 log.go:172] (0xc0009d1ad0) (0xc00176a8c0) Stream added, broadcasting: 5 I0109 13:39:02.236470 8 log.go:172] (0xc0009d1ad0) Reply frame received for 5 I0109 13:39:02.344039 8 log.go:172] (0xc0009d1ad0) Data frame received for 3 I0109 13:39:02.344283 8 log.go:172] (0xc0024ed180) (3) Data frame handling I0109 13:39:02.344514 8 log.go:172] (0xc0024ed180) (3) Data frame sent I0109 13:39:02.535652 8 log.go:172] (0xc0009d1ad0) Data frame received for 1 I0109 13:39:02.535818 8 log.go:172] (0xc002594460) (1) Data frame handling I0109 13:39:02.535863 8 log.go:172] 
(0xc002594460) (1) Data frame sent I0109 13:39:02.535904 8 log.go:172] (0xc0009d1ad0) (0xc002594460) Stream removed, broadcasting: 1 I0109 13:39:02.536198 8 log.go:172] (0xc0009d1ad0) (0xc0024ed180) Stream removed, broadcasting: 3 I0109 13:39:02.536363 8 log.go:172] (0xc0009d1ad0) (0xc00176a8c0) Stream removed, broadcasting: 5 I0109 13:39:02.536437 8 log.go:172] (0xc0009d1ad0) Go away received I0109 13:39:02.536599 8 log.go:172] (0xc0009d1ad0) (0xc002594460) Stream removed, broadcasting: 1 I0109 13:39:02.536630 8 log.go:172] (0xc0009d1ad0) (0xc0024ed180) Stream removed, broadcasting: 3 I0109 13:39:02.536649 8 log.go:172] (0xc0009d1ad0) (0xc00176a8c0) Stream removed, broadcasting: 5 Jan 9 13:39:02.536: INFO: Exec stderr: "" Jan 9 13:39:02.536: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8717 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 9 13:39:02.537: INFO: >>> kubeConfig: /root/.kube/config I0109 13:39:02.622435 8 log.go:172] (0xc0027e8790) (0xc00272fe00) Create stream I0109 13:39:02.622610 8 log.go:172] (0xc0027e8790) (0xc00272fe00) Stream added, broadcasting: 1 I0109 13:39:02.629601 8 log.go:172] (0xc0027e8790) Reply frame received for 1 I0109 13:39:02.629653 8 log.go:172] (0xc0027e8790) (0xc0024ed220) Create stream I0109 13:39:02.629665 8 log.go:172] (0xc0027e8790) (0xc0024ed220) Stream added, broadcasting: 3 I0109 13:39:02.631471 8 log.go:172] (0xc0027e8790) Reply frame received for 3 I0109 13:39:02.631498 8 log.go:172] (0xc0027e8790) (0xc000d7f9a0) Create stream I0109 13:39:02.631514 8 log.go:172] (0xc0027e8790) (0xc000d7f9a0) Stream added, broadcasting: 5 I0109 13:39:02.632967 8 log.go:172] (0xc0027e8790) Reply frame received for 5 I0109 13:39:02.712820 8 log.go:172] (0xc0027e8790) Data frame received for 3 I0109 13:39:02.712842 8 log.go:172] (0xc0024ed220) (3) Data frame handling I0109 13:39:02.712858 8 log.go:172] (0xc0024ed220) (3) Data frame 
sent I0109 13:39:02.939783 8 log.go:172] (0xc0027e8790) (0xc0024ed220) Stream removed, broadcasting: 3 I0109 13:39:02.940109 8 log.go:172] (0xc0027e8790) Data frame received for 1 I0109 13:39:02.940130 8 log.go:172] (0xc00272fe00) (1) Data frame handling I0109 13:39:02.940147 8 log.go:172] (0xc0027e8790) (0xc000d7f9a0) Stream removed, broadcasting: 5 I0109 13:39:02.940237 8 log.go:172] (0xc00272fe00) (1) Data frame sent I0109 13:39:02.940268 8 log.go:172] (0xc0027e8790) (0xc00272fe00) Stream removed, broadcasting: 1 I0109 13:39:02.940341 8 log.go:172] (0xc0027e8790) Go away received I0109 13:39:02.941129 8 log.go:172] (0xc0027e8790) (0xc00272fe00) Stream removed, broadcasting: 1 I0109 13:39:02.941149 8 log.go:172] (0xc0027e8790) (0xc0024ed220) Stream removed, broadcasting: 3 I0109 13:39:02.941168 8 log.go:172] (0xc0027e8790) (0xc000d7f9a0) Stream removed, broadcasting: 5 Jan 9 13:39:02.941: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jan 9 13:39:02.941: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8717 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 9 13:39:02.941: INFO: >>> kubeConfig: /root/.kube/config I0109 13:39:03.003753 8 log.go:172] (0xc0027e8dc0) (0xc00272fea0) Create stream I0109 13:39:03.003869 8 log.go:172] (0xc0027e8dc0) (0xc00272fea0) Stream added, broadcasting: 1 I0109 13:39:03.022003 8 log.go:172] (0xc0027e8dc0) Reply frame received for 1 I0109 13:39:03.022112 8 log.go:172] (0xc0027e8dc0) (0xc0024ed2c0) Create stream I0109 13:39:03.022134 8 log.go:172] (0xc0027e8dc0) (0xc0024ed2c0) Stream added, broadcasting: 3 I0109 13:39:03.043473 8 log.go:172] (0xc0027e8dc0) Reply frame received for 3 I0109 13:39:03.043603 8 log.go:172] (0xc0027e8dc0) (0xc00272ff40) Create stream I0109 13:39:03.043620 8 log.go:172] (0xc0027e8dc0) (0xc00272ff40) Stream added, broadcasting: 
5
I0109 13:39:03.046573 8 log.go:172] (0xc0027e8dc0) Reply frame received for 5
I0109 13:39:03.195854 8 log.go:172] (0xc0027e8dc0) Data frame received for 3
I0109 13:39:03.195929 8 log.go:172] (0xc0024ed2c0) (3) Data frame handling
I0109 13:39:03.195953 8 log.go:172] (0xc0024ed2c0) (3) Data frame sent
I0109 13:39:03.338756 8 log.go:172] (0xc0027e8dc0) (0xc0024ed2c0) Stream removed, broadcasting: 3
I0109 13:39:03.338927 8 log.go:172] (0xc0027e8dc0) Data frame received for 1
I0109 13:39:03.338953 8 log.go:172] (0xc0027e8dc0) (0xc00272ff40) Stream removed, broadcasting: 5
I0109 13:39:03.339015 8 log.go:172] (0xc00272fea0) (1) Data frame handling
I0109 13:39:03.339032 8 log.go:172] (0xc00272fea0) (1) Data frame sent
I0109 13:39:03.339051 8 log.go:172] (0xc0027e8dc0) (0xc00272fea0) Stream removed, broadcasting: 1
I0109 13:39:03.339081 8 log.go:172] (0xc0027e8dc0) Go away received
I0109 13:39:03.339887 8 log.go:172] (0xc0027e8dc0) (0xc00272fea0) Stream removed, broadcasting: 1
I0109 13:39:03.339987 8 log.go:172] (0xc0027e8dc0) (0xc0024ed2c0) Stream removed, broadcasting: 3
I0109 13:39:03.340005 8 log.go:172] (0xc0027e8dc0) (0xc00272ff40) Stream removed, broadcasting: 5
Jan 9 13:39:03.340: INFO: Exec stderr: ""
Jan 9 13:39:03.340: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8717 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 9 13:39:03.340: INFO: >>> kubeConfig: /root/.kube/config
I0109 13:39:03.409493 8 log.go:172] (0xc002c54840) (0xc0024ed680) Create stream
I0109 13:39:03.409594 8 log.go:172] (0xc002c54840) (0xc0024ed680) Stream added, broadcasting: 1
I0109 13:39:03.415695 8 log.go:172] (0xc002c54840) Reply frame received for 1
I0109 13:39:03.415742 8 log.go:172] (0xc002c54840) (0xc001e80000) Create stream
I0109 13:39:03.415751 8 log.go:172] (0xc002c54840) (0xc001e80000) Stream added, broadcasting: 3
I0109 13:39:03.416945 8 log.go:172] (0xc002c54840) Reply frame received for 3
I0109 13:39:03.416998 8 log.go:172] (0xc002c54840) (0xc00176aaa0) Create stream
I0109 13:39:03.417008 8 log.go:172] (0xc002c54840) (0xc00176aaa0) Stream added, broadcasting: 5
I0109 13:39:03.420056 8 log.go:172] (0xc002c54840) Reply frame received for 5
I0109 13:39:03.519163 8 log.go:172] (0xc002c54840) Data frame received for 3
I0109 13:39:03.519307 8 log.go:172] (0xc001e80000) (3) Data frame handling
I0109 13:39:03.519341 8 log.go:172] (0xc001e80000) (3) Data frame sent
I0109 13:39:03.678067 8 log.go:172] (0xc002c54840) Data frame received for 1
I0109 13:39:03.678155 8 log.go:172] (0xc0024ed680) (1) Data frame handling
I0109 13:39:03.678174 8 log.go:172] (0xc0024ed680) (1) Data frame sent
I0109 13:39:03.678278 8 log.go:172] (0xc002c54840) (0xc001e80000) Stream removed, broadcasting: 3
I0109 13:39:03.678327 8 log.go:172] (0xc002c54840) (0xc0024ed680) Stream removed, broadcasting: 1
I0109 13:39:03.678800 8 log.go:172] (0xc002c54840) (0xc00176aaa0) Stream removed, broadcasting: 5
I0109 13:39:03.678835 8 log.go:172] (0xc002c54840) (0xc0024ed680) Stream removed, broadcasting: 1
I0109 13:39:03.678852 8 log.go:172] (0xc002c54840) (0xc001e80000) Stream removed, broadcasting: 3
I0109 13:39:03.678861 8 log.go:172] (0xc002c54840) (0xc00176aaa0) Stream removed, broadcasting: 5
I0109 13:39:03.679195 8 log.go:172] (0xc002c54840) Go away received
Jan 9 13:39:03.679: INFO: Exec stderr: ""
Jan 9 13:39:03.680: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8717 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 9 13:39:03.681: INFO: >>> kubeConfig: /root/.kube/config
I0109 13:39:03.748821 8 log.go:172] (0xc002b03340) (0xc00176ae60) Create stream
I0109 13:39:03.748936 8 log.go:172] (0xc002b03340) (0xc00176ae60) Stream added, broadcasting: 1
I0109 13:39:03.756957 8 log.go:172] (0xc002b03340) Reply frame received for 1
I0109 13:39:03.757121 8 log.go:172] (0xc002b03340) (0xc000d7fea0) Create stream
I0109 13:39:03.757139 8 log.go:172] (0xc002b03340) (0xc000d7fea0) Stream added, broadcasting: 3
I0109 13:39:03.759674 8 log.go:172] (0xc002b03340) Reply frame received for 3
I0109 13:39:03.759700 8 log.go:172] (0xc002b03340) (0xc0024ed720) Create stream
I0109 13:39:03.759784 8 log.go:172] (0xc002b03340) (0xc0024ed720) Stream added, broadcasting: 5
I0109 13:39:03.762202 8 log.go:172] (0xc002b03340) Reply frame received for 5
I0109 13:39:03.900418 8 log.go:172] (0xc002b03340) Data frame received for 3
I0109 13:39:03.900586 8 log.go:172] (0xc000d7fea0) (3) Data frame handling
I0109 13:39:03.900623 8 log.go:172] (0xc000d7fea0) (3) Data frame sent
I0109 13:39:04.056187 8 log.go:172] (0xc002b03340) Data frame received for 1
I0109 13:39:04.056284 8 log.go:172] (0xc002b03340) (0xc000d7fea0) Stream removed, broadcasting: 3
I0109 13:39:04.056391 8 log.go:172] (0xc00176ae60) (1) Data frame handling
I0109 13:39:04.056434 8 log.go:172] (0xc002b03340) (0xc0024ed720) Stream removed, broadcasting: 5
I0109 13:39:04.056471 8 log.go:172] (0xc00176ae60) (1) Data frame sent
I0109 13:39:04.056492 8 log.go:172] (0xc002b03340) (0xc00176ae60) Stream removed, broadcasting: 1
I0109 13:39:04.056571 8 log.go:172] (0xc002b03340) Go away received
I0109 13:39:04.056759 8 log.go:172] (0xc002b03340) (0xc00176ae60) Stream removed, broadcasting: 1
I0109 13:39:04.056772 8 log.go:172] (0xc002b03340) (0xc000d7fea0) Stream removed, broadcasting: 3
I0109 13:39:04.056779 8 log.go:172] (0xc002b03340) (0xc0024ed720) Stream removed, broadcasting: 5
Jan 9 13:39:04.056: INFO: Exec stderr: ""
Jan 9 13:39:04.056: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8717 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 9 13:39:04.056: INFO: >>> kubeConfig: /root/.kube/config
I0109 13:39:04.133907 8 log.go:172] (0xc0027e9c30) (0xc001e800a0) Create stream
I0109 13:39:04.134059 8 log.go:172] (0xc0027e9c30) (0xc001e800a0) Stream added, broadcasting: 1
I0109 13:39:04.158505 8 log.go:172] (0xc0027e9c30) Reply frame received for 1
I0109 13:39:04.158777 8 log.go:172] (0xc0027e9c30) (0xc00049a3c0) Create stream
I0109 13:39:04.158836 8 log.go:172] (0xc0027e9c30) (0xc00049a3c0) Stream added, broadcasting: 3
I0109 13:39:04.161484 8 log.go:172] (0xc0027e9c30) Reply frame received for 3
I0109 13:39:04.161534 8 log.go:172] (0xc0027e9c30) (0xc00049a460) Create stream
I0109 13:39:04.161545 8 log.go:172] (0xc0027e9c30) (0xc00049a460) Stream added, broadcasting: 5
I0109 13:39:04.163582 8 log.go:172] (0xc0027e9c30) Reply frame received for 5
I0109 13:39:04.248525 8 log.go:172] (0xc0027e9c30) Data frame received for 3
I0109 13:39:04.248882 8 log.go:172] (0xc00049a3c0) (3) Data frame handling
I0109 13:39:04.248960 8 log.go:172] (0xc00049a3c0) (3) Data frame sent
I0109 13:39:04.333081 8 log.go:172] (0xc0027e9c30) Data frame received for 1
I0109 13:39:04.333191 8 log.go:172] (0xc0027e9c30) (0xc00049a3c0) Stream removed, broadcasting: 3
I0109 13:39:04.333283 8 log.go:172] (0xc001e800a0) (1) Data frame handling
I0109 13:39:04.333311 8 log.go:172] (0xc0027e9c30) (0xc00049a460) Stream removed, broadcasting: 5
I0109 13:39:04.333330 8 log.go:172] (0xc001e800a0) (1) Data frame sent
I0109 13:39:04.333389 8 log.go:172] (0xc0027e9c30) (0xc001e800a0) Stream removed, broadcasting: 1
I0109 13:39:04.333413 8 log.go:172] (0xc0027e9c30) Go away received
I0109 13:39:04.333635 8 log.go:172] (0xc0027e9c30) (0xc001e800a0) Stream removed, broadcasting: 1
I0109 13:39:04.333652 8 log.go:172] (0xc0027e9c30) (0xc00049a3c0) Stream removed, broadcasting: 3
I0109 13:39:04.333658 8 log.go:172] (0xc0027e9c30) (0xc00049a460) Stream removed, broadcasting: 5
Jan 9 13:39:04.333: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:39:04.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-8717" for this suite.
Jan 9 13:40:06.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:40:06.512: INFO: namespace e2e-kubelet-etc-hosts-8717 deletion completed in 1m2.170184104s
• [SLOW TEST:85.913 seconds]
[k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:40:06.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 9 13:40:06.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8156'
Jan 9 13:40:09.086: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 9 13:40:09.086: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jan 9 13:40:09.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-8156'
Jan 9 13:40:09.313: INFO: stderr: ""
Jan 9 13:40:09.313: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:40:09.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8156" for this suite.
Jan 9 13:40:15.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:40:15.545: INFO: namespace kubectl-8156 deletion completed in 6.226041659s
• [SLOW TEST:9.032 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:40:15.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0109 13:40:27.784382 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 9 13:40:27.784: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:40:27.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7373" for this suite.
Jan 9 13:40:44.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:40:45.240: INFO: namespace gc-7373 deletion completed in 17.25288005s
• [SLOW TEST:29.695 seconds]
[sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:40:45.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 9 13:40:47.707: INFO: Waiting up to 5m0s for pod "downwardapi-volume-deebeb14-e135-42f5-8c06-0904dcfe23c5" in namespace "projected-1060" to be "success or failure"
Jan 9 13:40:48.434: INFO: Pod "downwardapi-volume-deebeb14-e135-42f5-8c06-0904dcfe23c5": Phase="Pending", Reason="", readiness=false. Elapsed: 727.303985ms
Jan 9 13:40:50.738: INFO: Pod "downwardapi-volume-deebeb14-e135-42f5-8c06-0904dcfe23c5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.031156018s
Jan 9 13:40:52.746: INFO: Pod "downwardapi-volume-deebeb14-e135-42f5-8c06-0904dcfe23c5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.038610759s
Jan 9 13:40:54.759: INFO: Pod "downwardapi-volume-deebeb14-e135-42f5-8c06-0904dcfe23c5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.051858532s
Jan 9 13:40:56.871: INFO: Pod "downwardapi-volume-deebeb14-e135-42f5-8c06-0904dcfe23c5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.164036127s
Jan 9 13:40:58.937: INFO: Pod "downwardapi-volume-deebeb14-e135-42f5-8c06-0904dcfe23c5": Phase="Running", Reason="", readiness=true. Elapsed: 11.230353101s
Jan 9 13:41:00.949: INFO: Pod "downwardapi-volume-deebeb14-e135-42f5-8c06-0904dcfe23c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.241702017s
STEP: Saw pod success
Jan 9 13:41:00.949: INFO: Pod "downwardapi-volume-deebeb14-e135-42f5-8c06-0904dcfe23c5" satisfied condition "success or failure"
Jan 9 13:41:00.952: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-deebeb14-e135-42f5-8c06-0904dcfe23c5 container client-container:
STEP: delete the pod
Jan 9 13:41:01.024: INFO: Waiting for pod downwardapi-volume-deebeb14-e135-42f5-8c06-0904dcfe23c5 to disappear
Jan 9 13:41:01.035: INFO: Pod downwardapi-volume-deebeb14-e135-42f5-8c06-0904dcfe23c5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:41:01.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1060" for this suite.
Jan 9 13:41:07.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:41:07.255: INFO: namespace projected-1060 deletion completed in 6.211395103s
• [SLOW TEST:22.014 seconds]
[sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:41:07.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1689
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-1689
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1689
Jan 9 13:41:07.397: INFO: Found 0 stateful pods, waiting for 1
Jan 9 13:41:17.410: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 9 13:41:17.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1689 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 9 13:41:17.922: INFO: stderr: "I0109 13:41:17.581269 1516 log.go:172] (0xc0008c6370) (0xc0005948c0) Create stream\nI0109 13:41:17.581346 1516 log.go:172] (0xc0008c6370) (0xc0005948c0) Stream added, broadcasting: 1\nI0109 13:41:17.587610 1516 log.go:172] (0xc0008c6370) Reply frame received for 1\nI0109 13:41:17.587645 1516 log.go:172] (0xc0008c6370) (0xc0007d4000) Create stream\nI0109 13:41:17.587656 1516 log.go:172] (0xc0008c6370) (0xc0007d4000) Stream added, broadcasting: 3\nI0109 13:41:17.591191 1516 log.go:172] (0xc0008c6370) Reply frame received for 3\nI0109 13:41:17.591212 1516 log.go:172] (0xc0008c6370) (0xc000594960) Create stream\nI0109 13:41:17.591219 1516 log.go:172] (0xc0008c6370) (0xc000594960) Stream added, broadcasting: 5\nI0109 13:41:17.592774 1516 log.go:172] (0xc0008c6370) Reply frame received for 5\nI0109 13:41:17.712893 1516 log.go:172] (0xc0008c6370) Data frame received for 5\nI0109 13:41:17.712910 1516 log.go:172] (0xc000594960) (5) Data frame handling\nI0109 13:41:17.712922 1516 log.go:172] (0xc000594960) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0109 13:41:17.767973 1516 log.go:172] (0xc0008c6370) Data frame received for 3\nI0109 13:41:17.768068 1516 log.go:172] (0xc0007d4000) (3) Data frame handling\nI0109 13:41:17.768101 1516 log.go:172] (0xc0007d4000) (3) Data frame sent\nI0109 13:41:17.909536 1516 log.go:172] (0xc0008c6370) Data frame received for 1\nI0109 13:41:17.909862 1516 log.go:172] (0xc0008c6370) (0xc0007d4000) Stream removed, broadcasting: 3\nI0109 13:41:17.910051 1516 log.go:172] (0xc0005948c0) (1) Data frame handling\nI0109 13:41:17.910111 1516 log.go:172] (0xc0005948c0) (1) Data frame sent\nI0109 13:41:17.910274 1516 log.go:172] (0xc0008c6370) (0xc000594960) Stream removed, broadcasting: 5\nI0109 13:41:17.910325 1516 log.go:172] (0xc0008c6370) (0xc0005948c0) Stream removed, broadcasting: 1\nI0109 13:41:17.910350 1516 log.go:172] (0xc0008c6370) Go away received\nI0109 13:41:17.912711 1516 log.go:172] (0xc0008c6370) (0xc0005948c0) Stream removed, broadcasting: 1\nI0109 13:41:17.912802 1516 log.go:172] (0xc0008c6370) (0xc0007d4000) Stream removed, broadcasting: 3\nI0109 13:41:17.912814 1516 log.go:172] (0xc0008c6370) (0xc000594960) Stream removed, broadcasting: 5\n"
Jan 9 13:41:17.922: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 9 13:41:17.922: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 9 13:41:17.929: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 9 13:41:27.946: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 9 13:41:27.946: INFO: Waiting for statefulset status.replicas updated to 0
Jan 9 13:41:28.017: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999997813s
Jan 9 13:41:29.032: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.947375266s
Jan 9 13:41:30.071: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.931513712s
Jan 9 13:41:31.089: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.892802453s
Jan 9 13:41:32.100: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.875100888s
Jan 9 13:41:33.111: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.864599589s
Jan 9 13:41:34.135: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.853245121s
Jan 9 13:41:35.161: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.829607985s
Jan 9 13:41:36.172: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.802772753s
Jan 9 13:41:37.182: INFO: Verifying statefulset ss doesn't scale past 1 for another 792.537059ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1689
Jan 9 13:41:38.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1689 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 13:41:38.835: INFO: stderr: "I0109 13:41:38.464019 1534 log.go:172] (0xc00013adc0) (0xc0005a4820) Create stream\nI0109 13:41:38.464701 1534 log.go:172] (0xc00013adc0) (0xc0005a4820) Stream added, broadcasting: 1\nI0109 13:41:38.489972 1534 log.go:172] (0xc00013adc0) Reply frame received for 1\nI0109 13:41:38.490205 1534 log.go:172] (0xc00013adc0) (0xc0008ee000) Create stream\nI0109 13:41:38.490289 1534 log.go:172] (0xc00013adc0) (0xc0008ee000) Stream added, broadcasting: 3\nI0109 13:41:38.496286 1534 log.go:172] (0xc00013adc0) Reply frame received for 3\nI0109 13:41:38.496511 1534 log.go:172] (0xc00013adc0) (0xc0008ee0a0) Create stream\nI0109 13:41:38.496547 1534 log.go:172] (0xc00013adc0) (0xc0008ee0a0) Stream added, broadcasting: 5\nI0109 13:41:38.500035 1534 log.go:172] (0xc00013adc0) Reply frame received for 5\nI0109 13:41:38.702343 1534 log.go:172] (0xc00013adc0) Data frame received for 5\nI0109 13:41:38.702422 1534 log.go:172] (0xc0008ee0a0) (5) Data frame handling\nI0109 13:41:38.702438 1534 log.go:172] (0xc0008ee0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0109 13:41:38.702485 1534 log.go:172] (0xc00013adc0) Data frame received for 3\nI0109 13:41:38.702515 1534 log.go:172] (0xc0008ee000) (3) Data frame handling\nI0109 13:41:38.702540 1534 log.go:172] (0xc0008ee000) (3) Data frame sent\nI0109 13:41:38.822702 1534 log.go:172] (0xc00013adc0) (0xc0008ee000) Stream removed, broadcasting: 3\nI0109 13:41:38.823021 1534 log.go:172] (0xc00013adc0) Data frame received for 1\nI0109 13:41:38.823082 1534 log.go:172] (0xc0005a4820) (1) Data frame handling\nI0109 13:41:38.823143 1534 log.go:172] (0xc0005a4820) (1) Data frame sent\nI0109 13:41:38.823836 1534 log.go:172] (0xc00013adc0) (0xc0005a4820) Stream removed, broadcasting: 1\nI0109 13:41:38.826769 1534 log.go:172] (0xc00013adc0) (0xc0008ee0a0) Stream removed, broadcasting: 5\nI0109 13:41:38.826821 1534 log.go:172] (0xc00013adc0) Go away received\nI0109 13:41:38.828144 1534 log.go:172] (0xc00013adc0) (0xc0005a4820) Stream removed, broadcasting: 1\nI0109 13:41:38.828195 1534 log.go:172] (0xc00013adc0) (0xc0008ee000) Stream removed, broadcasting: 3\nI0109 13:41:38.828254 1534 log.go:172] (0xc00013adc0) (0xc0008ee0a0) Stream removed, broadcasting: 5\n"
Jan 9 13:41:38.835: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 9 13:41:38.835: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 9 13:41:38.843: INFO: Found 1 stateful pods, waiting for 3
Jan 9 13:41:48.856: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 9 13:41:48.856: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 9 13:41:48.856: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 9 13:41:58.866: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 9 13:41:58.866: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 9 13:41:58.866: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 9 13:41:58.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1689 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 9 13:41:59.319: INFO: stderr: "I0109 13:41:59.065684 1555 log.go:172] (0xc0006eaa50) (0xc00026a820) Create stream\nI0109 13:41:59.065958 1555 log.go:172] (0xc0006eaa50) (0xc00026a820) Stream added, broadcasting: 1\nI0109 13:41:59.077469 1555 log.go:172] (0xc0006eaa50) Reply frame received for 1\nI0109 13:41:59.077590 1555 log.go:172] (0xc0006eaa50) (0xc00026a8c0) Create stream\nI0109 13:41:59.077613 1555 log.go:172] (0xc0006eaa50) (0xc00026a8c0) Stream added, broadcasting: 3\nI0109 13:41:59.079385 1555 log.go:172] (0xc0006eaa50) Reply frame received for 3\nI0109 13:41:59.079457 1555 log.go:172] (0xc0006eaa50) (0xc000812000) Create stream\nI0109 13:41:59.079489 1555 log.go:172] (0xc0006eaa50) (0xc000812000) Stream added, broadcasting: 5\nI0109 13:41:59.080828 1555 log.go:172] (0xc0006eaa50) Reply frame received for 5\nI0109 13:41:59.182351 1555 log.go:172] (0xc0006eaa50) Data frame received for 5\nI0109 13:41:59.186386 1555 log.go:172] (0xc000812000) (5) Data frame handling\nI0109 13:41:59.186636 1555 log.go:172] (0xc000812000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0109 13:41:59.187013 1555 log.go:172] (0xc0006eaa50) Data frame received for 3\nI0109 13:41:59.187200 1555 log.go:172] (0xc00026a8c0) (3) Data frame handling\nI0109 13:41:59.187318 1555 log.go:172] (0xc00026a8c0) (3) Data frame sent\nI0109 13:41:59.307892 1555 log.go:172] (0xc0006eaa50) Data frame received for 1\nI0109 13:41:59.307982 1555 log.go:172] (0xc00026a820) (1) Data frame handling\nI0109 13:41:59.308038 1555 log.go:172] (0xc00026a820) (1) Data frame sent\nI0109 13:41:59.308122 1555 log.go:172] (0xc0006eaa50) (0xc00026a820) Stream removed, broadcasting: 1\nI0109 13:41:59.308230 1555 log.go:172] (0xc0006eaa50) (0xc00026a8c0) Stream removed, broadcasting: 3\nI0109 13:41:59.308354 1555 log.go:172] (0xc0006eaa50) (0xc000812000) Stream removed, broadcasting: 5\nI0109 13:41:59.308457 1555 log.go:172] (0xc0006eaa50) Go away received\nI0109 13:41:59.309535 1555 log.go:172] (0xc0006eaa50) (0xc00026a820) Stream removed, broadcasting: 1\nI0109 13:41:59.309571 1555 log.go:172] (0xc0006eaa50) (0xc00026a8c0) Stream removed, broadcasting: 3\nI0109 13:41:59.309594 1555 log.go:172] (0xc0006eaa50) (0xc000812000) Stream removed, broadcasting: 5\n"
Jan 9 13:41:59.319: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 9 13:41:59.319: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 9 13:41:59.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1689 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 9 13:41:59.970: INFO: stderr: "I0109 13:41:59.473990 1576 log.go:172] (0xc000a5e370) (0xc000326820) Create stream\nI0109 13:41:59.474103 1576 log.go:172] (0xc000a5e370) (0xc000326820) Stream added, broadcasting: 1\nI0109 13:41:59.477533 1576 log.go:172] (0xc000a5e370) Reply frame received for 1\nI0109 13:41:59.477620 1576 log.go:172] (0xc000a5e370) (0xc00063e460) Create stream\nI0109 13:41:59.477634 1576 log.go:172] (0xc000a5e370) (0xc00063e460) Stream added, broadcasting: 3\nI0109 13:41:59.482392 1576 log.go:172] (0xc000a5e370) Reply frame received for 3\nI0109 13:41:59.482842 1576 log.go:172] (0xc000a5e370) (0xc00071c000) Create stream\nI0109 13:41:59.482886 1576 log.go:172] (0xc000a5e370) (0xc00071c000) Stream added, broadcasting: 5\nI0109 13:41:59.485654 1576 log.go:172] (0xc000a5e370) Reply frame received for 5\nI0109 13:41:59.721778 1576 log.go:172] (0xc000a5e370) Data frame received for 5\nI0109 13:41:59.721819 1576 log.go:172] (0xc00071c000) (5) Data frame handling\nI0109 13:41:59.721840 1576 log.go:172] (0xc00071c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0109 13:41:59.823112 1576 log.go:172] (0xc000a5e370) Data frame received for 3\nI0109 13:41:59.823191 1576 log.go:172] (0xc00063e460) (3) Data frame handling\nI0109 13:41:59.823231 1576 log.go:172] (0xc00063e460) (3) Data frame sent\nI0109 13:41:59.952993 1576 log.go:172] (0xc000a5e370) Data frame received for 1\nI0109 13:41:59.953144 1576 log.go:172] (0xc000a5e370) (0xc00063e460) Stream removed, broadcasting: 3\nI0109 13:41:59.953236 1576 log.go:172] (0xc000326820) (1) Data frame handling\nI0109 13:41:59.953264 1576 log.go:172] (0xc000326820) (1) Data frame sent\nI0109 13:41:59.953285 1576 log.go:172] (0xc000a5e370) (0xc000326820) Stream removed, broadcasting: 1\nI0109 13:41:59.954536 1576 log.go:172] (0xc000a5e370) (0xc00071c000) Stream removed, broadcasting: 5\nI0109 13:41:59.954927 1576 log.go:172] (0xc000a5e370) (0xc000326820) Stream removed, broadcasting: 1\nI0109 13:41:59.955300 1576 log.go:172] (0xc000a5e370) (0xc00063e460) Stream removed, broadcasting: 3\nI0109 13:41:59.955496 1576 log.go:172] (0xc000a5e370) (0xc00071c000) Stream removed, broadcasting: 5\nI0109 13:41:59.955627 1576 log.go:172] (0xc000a5e370) Go away received\n"
Jan 9 13:41:59.971: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 9 13:41:59.971: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 9 13:41:59.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1689 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 9 13:42:00.714: INFO: stderr: "I0109 13:42:00.288258 1600 log.go:172] (0xc0008ca370) (0xc0009be640) Create stream\nI0109 13:42:00.288391 1600 log.go:172] (0xc0008ca370) (0xc0009be640) Stream added, broadcasting: 1\nI0109 13:42:00.294403 1600 log.go:172] (0xc0008ca370) Reply frame received for 1\nI0109 13:42:00.294445 1600 log.go:172] (0xc0008ca370) (0xc00095e000) Create stream\nI0109 13:42:00.294457 1600 log.go:172] (0xc0008ca370) (0xc00095e000) Stream added, broadcasting: 3\nI0109 13:42:00.296930 1600 log.go:172] (0xc0008ca370) Reply frame received for 3\nI0109 13:42:00.297030 1600 log.go:172] (0xc0008ca370) (0xc0005d6280) Create stream\nI0109 13:42:00.297048 1600 log.go:172] (0xc0008ca370) (0xc0005d6280) Stream added, broadcasting: 5\nI0109 13:42:00.299942 1600 log.go:172] (0xc0008ca370) Reply frame received for 5\nI0109 13:42:00.448809 1600 log.go:172] (0xc0008ca370) Data frame received for 5\nI0109 13:42:00.448878 1600 log.go:172] (0xc0005d6280) (5) Data frame handling\nI0109 13:42:00.448926 1600 log.go:172] (0xc0005d6280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0109 13:42:00.492207 1600 log.go:172] (0xc0008ca370) Data frame received for 3\nI0109 13:42:00.492485 1600 log.go:172] (0xc00095e000) (3) Data frame handling\nI0109 13:42:00.492539 1600 log.go:172] (0xc00095e000) (3) Data frame sent\nI0109 13:42:00.702583 1600 log.go:172] (0xc0008ca370) Data frame received for 1\nI0109 13:42:00.702807 1600 log.go:172] (0xc0008ca370) (0xc0005d6280) Stream removed, broadcasting: 5\nI0109 13:42:00.702931 1600 log.go:172] (0xc0009be640) (1) Data frame handling\nI0109 13:42:00.702959 1600 log.go:172] (0xc0009be640) (1) Data frame sent\nI0109 13:42:00.703075 1600 log.go:172] (0xc0008ca370) (0xc00095e000) Stream removed, broadcasting: 3\nI0109 13:42:00.703683 1600 log.go:172] (0xc0008ca370) (0xc0009be640) Stream removed, broadcasting: 1\nI0109 13:42:00.703862 1600 log.go:172] (0xc0008ca370) Go away received\nI0109 13:42:00.705835 1600 log.go:172] (0xc0008ca370) (0xc0009be640) Stream removed, broadcasting: 1\nI0109 13:42:00.705929 1600 log.go:172] (0xc0008ca370) (0xc00095e000) Stream removed, broadcasting: 3\nI0109 13:42:00.705999 1600 log.go:172] (0xc0008ca370) (0xc0005d6280) Stream removed, broadcasting: 5\n"
Jan 9 13:42:00.715: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 9 13:42:00.715: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 9 13:42:00.715: INFO: Waiting for statefulset status.replicas updated to 0
Jan 9 13:42:00.722: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 9 13:42:10.745: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 9 13:42:10.745: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 9 13:42:10.745: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 9 13:42:10.779: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999426s
Jan 9 13:42:11.801: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.981006291s
Jan 9 13:42:12.809: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.958941469s
Jan 9 13:42:13.832: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.95098678s
Jan 9 13:42:14.842: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.927882071s
Jan 9 13:42:15.854: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.918776683s
Jan 9 13:42:17.062: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.906148278s
Jan 9 13:42:18.077: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.698046042s
Jan 9 13:42:19.089: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.682974628s
Jan 9 13:42:20.103: INFO: Verifying statefulset ss doesn't scale past 3 for another 671.083773ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-1689
Jan 9 13:42:21.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1689 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 13:42:21.594: INFO: stderr: "I0109 13:42:21.295477 1620 log.go:172] (0xc00013ae70) (0xc000702640) Create stream\nI0109 13:42:21.295691 1620 log.go:172] (0xc00013ae70) (0xc000702640) Stream added, broadcasting: 1\nI0109 13:42:21.304515 1620 log.go:172] (0xc00013ae70) Reply frame received for 1\nI0109 13:42:21.304609 1620 log.go:172] (0xc00013ae70) (0xc0005f6280) Create stream\nI0109 13:42:21.304622 1620 log.go:172] (0xc00013ae70) (0xc0005f6280) Stream added, broadcasting: 3\nI0109 13:42:21.306179 1620 log.go:172] (0xc00013ae70) Reply frame received for 3\nI0109 13:42:21.306204 1620 log.go:172] (0xc00013ae70) (0xc000930000) Create stream\nI0109 13:42:21.306211 1620 log.go:172] (0xc00013ae70) (0xc000930000) Stream added, broadcasting: 5\nI0109 13:42:21.307792 1620 log.go:172] (0xc00013ae70) Reply frame received for 5\nI0109 13:42:21.441495 1620 log.go:172] (0xc00013ae70) Data frame received for 5\nI0109 13:42:21.441621 1620 log.go:172] (0xc000930000) (5) Data frame handling\nI0109 13:42:21.441680 1620 log.go:172] (0xc000930000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0109 13:42:21.442623 1620 log.go:172] (0xc00013ae70) Data frame received for 3\nI0109 13:42:21.442676 1620 log.go:172] (0xc0005f6280) (3) Data frame handling\nI0109 13:42:21.442692 1620 log.go:172] (0xc0005f6280) (3) Data frame sent\nI0109 13:42:21.585928 1620 log.go:172] (0xc00013ae70) Data frame received for 1\nI0109 13:42:21.586306 1620 log.go:172] (0xc00013ae70) (0xc000930000) Stream removed, broadcasting: 5\nI0109 13:42:21.586378 1620 log.go:172] (0xc000702640) (1) Data frame handling\nI0109 13:42:21.586407 1620 log.go:172] (0xc000702640) (1) Data frame sent\nI0109 13:42:21.586506 1620 log.go:172] (0xc00013ae70) (0xc0005f6280) Stream removed, broadcasting: 3\nI0109 13:42:21.586643 1620 log.go:172] (0xc00013ae70) (0xc000702640) Stream removed, broadcasting: 1\nI0109 13:42:21.586675 1620 log.go:172] (0xc00013ae70) Go away received\nI0109 13:42:21.587289 1620 log.go:172] (0xc00013ae70) (0xc000702640) Stream removed, broadcasting: 1\nI0109 13:42:21.587303 1620 log.go:172] (0xc00013ae70) (0xc0005f6280) Stream removed, broadcasting: 3\nI0109 13:42:21.587310 1620 log.go:172] (0xc00013ae70) (0xc000930000) Stream removed, broadcasting: 5\n"
Jan 9 13:42:21.594: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 9 13:42:21.594: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 9 13:42:21.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 13:42:21.961: INFO: stderr: "I0109 13:42:21.775439 1636 log.go:172] (0xc000130dc0) (0xc000646780) Create stream\nI0109 13:42:21.775575 1636 log.go:172] (0xc000130dc0) (0xc000646780) Stream added, broadcasting: 1\nI0109 13:42:21.781164 1636 log.go:172] (0xc000130dc0) Reply frame received for 1\nI0109 13:42:21.781334 1636 log.go:172] (0xc000130dc0) (0xc000812000) Create stream\nI0109 13:42:21.781421 1636 log.go:172] (0xc000130dc0) (0xc000812000) Stream added, broadcasting: 3\nI0109 13:42:21.783317 1636 log.go:172] (0xc000130dc0) Reply frame received for 3\nI0109 13:42:21.783378 1636 log.go:172] (0xc000130dc0) (0xc00078c000) Create stream\nI0109 13:42:21.783396 1636 log.go:172] (0xc000130dc0) (0xc00078c000) Stream added, broadcasting: 5\nI0109 13:42:21.789451 1636 log.go:172] (0xc000130dc0) Reply frame received for 5\nI0109 13:42:21.882105 1636 log.go:172] (0xc000130dc0) Data frame received for 5\nI0109 13:42:21.882175 1636 log.go:172] (0xc00078c000) (5) Data frame handling\nI0109 13:42:21.882196 1636 log.go:172] (0xc00078c000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0109 13:42:21.882223 1636 log.go:172] (0xc000130dc0) Data frame received for 3\nI0109 13:42:21.882242 1636 log.go:172] (0xc000812000) (3) Data frame handling\nI0109 13:42:21.882252
1636 log.go:172] (0xc000812000) (3) Data frame sent\nI0109 13:42:21.953568 1636 log.go:172] (0xc000130dc0) Data frame received for 1\nI0109 13:42:21.953788 1636 log.go:172] (0xc000130dc0) (0xc000812000) Stream removed, broadcasting: 3\nI0109 13:42:21.953824 1636 log.go:172] (0xc000130dc0) (0xc00078c000) Stream removed, broadcasting: 5\nI0109 13:42:21.953962 1636 log.go:172] (0xc000646780) (1) Data frame handling\nI0109 13:42:21.954020 1636 log.go:172] (0xc000646780) (1) Data frame sent\nI0109 13:42:21.954030 1636 log.go:172] (0xc000130dc0) (0xc000646780) Stream removed, broadcasting: 1\nI0109 13:42:21.954049 1636 log.go:172] (0xc000130dc0) Go away received\nI0109 13:42:21.955173 1636 log.go:172] (0xc000130dc0) (0xc000646780) Stream removed, broadcasting: 1\nI0109 13:42:21.955190 1636 log.go:172] (0xc000130dc0) (0xc000812000) Stream removed, broadcasting: 3\nI0109 13:42:21.955194 1636 log.go:172] (0xc000130dc0) (0xc00078c000) Stream removed, broadcasting: 5\n" Jan 9 13:42:21.962: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 9 13:42:21.962: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 9 13:42:21.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1689 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 13:42:22.930: INFO: stderr: "I0109 13:42:22.311894 1657 log.go:172] (0xc0009f6420) (0xc00089e960) Create stream\nI0109 13:42:22.312693 1657 log.go:172] (0xc0009f6420) (0xc00089e960) Stream added, broadcasting: 1\nI0109 13:42:22.337587 1657 log.go:172] (0xc0009f6420) Reply frame received for 1\nI0109 13:42:22.337709 1657 log.go:172] (0xc0009f6420) (0xc00089e000) Create stream\nI0109 13:42:22.337728 1657 log.go:172] (0xc0009f6420) (0xc00089e000) Stream added, broadcasting: 3\nI0109 13:42:22.339482 1657 log.go:172] (0xc0009f6420) Reply frame received for 3\nI0109 
13:42:22.339569 1657 log.go:172] (0xc0009f6420) (0xc0009e4000) Create stream\nI0109 13:42:22.339592 1657 log.go:172] (0xc0009f6420) (0xc0009e4000) Stream added, broadcasting: 5\nI0109 13:42:22.341446 1657 log.go:172] (0xc0009f6420) Reply frame received for 5\nI0109 13:42:22.596484 1657 log.go:172] (0xc0009f6420) Data frame received for 5\nI0109 13:42:22.596629 1657 log.go:172] (0xc0009e4000) (5) Data frame handling\nI0109 13:42:22.596654 1657 log.go:172] (0xc0009e4000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0109 13:42:22.596707 1657 log.go:172] (0xc0009f6420) Data frame received for 3\nI0109 13:42:22.596721 1657 log.go:172] (0xc00089e000) (3) Data frame handling\nI0109 13:42:22.596743 1657 log.go:172] (0xc00089e000) (3) Data frame sent\nI0109 13:42:22.903294 1657 log.go:172] (0xc0009f6420) (0xc0009e4000) Stream removed, broadcasting: 5\nI0109 13:42:22.903862 1657 log.go:172] (0xc0009f6420) Data frame received for 1\nI0109 13:42:22.903914 1657 log.go:172] (0xc00089e960) (1) Data frame handling\nI0109 13:42:22.904002 1657 log.go:172] (0xc00089e960) (1) Data frame sent\nI0109 13:42:22.904040 1657 log.go:172] (0xc0009f6420) (0xc00089e960) Stream removed, broadcasting: 1\nI0109 13:42:22.905480 1657 log.go:172] (0xc0009f6420) (0xc00089e000) Stream removed, broadcasting: 3\nI0109 13:42:22.905952 1657 log.go:172] (0xc0009f6420) Go away received\nI0109 13:42:22.906750 1657 log.go:172] (0xc0009f6420) (0xc00089e960) Stream removed, broadcasting: 1\nI0109 13:42:22.906797 1657 log.go:172] (0xc0009f6420) (0xc00089e000) Stream removed, broadcasting: 3\nI0109 13:42:22.906826 1657 log.go:172] (0xc0009f6420) (0xc0009e4000) Stream removed, broadcasting: 5\n" Jan 9 13:42:22.930: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 9 13:42:22.931: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 9 13:42:22.931: INFO: Scaling statefulset ss to 
0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 9 13:43:02.981: INFO: Deleting all statefulset in ns statefulset-1689 Jan 9 13:43:02.986: INFO: Scaling statefulset ss to 0 Jan 9 13:43:03.001: INFO: Waiting for statefulset status.replicas updated to 0 Jan 9 13:43:03.006: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:43:03.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1689" for this suite. Jan 9 13:43:09.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:43:09.166: INFO: namespace statefulset-1689 deletion completed in 6.11594006s • [SLOW TEST:121.910 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:43:09.166: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 9 13:43:09.250: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Jan 9 13:43:09.302: INFO: Number of nodes with available pods: 0 Jan 9 13:43:09.302: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:43:11.172: INFO: Number of nodes with available pods: 0 Jan 9 13:43:11.172: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:43:11.786: INFO: Number of nodes with available pods: 0 Jan 9 13:43:11.786: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:43:12.534: INFO: Number of nodes with available pods: 0 Jan 9 13:43:12.534: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:43:13.311: INFO: Number of nodes with available pods: 0 Jan 9 13:43:13.311: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:43:14.349: INFO: Number of nodes with available pods: 0 Jan 9 13:43:14.349: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:43:16.111: INFO: Number of nodes with available pods: 0 Jan 9 13:43:16.111: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:43:16.556: INFO: Number of nodes with available pods: 0 Jan 9 13:43:16.556: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:43:17.780: INFO: Number of nodes with available pods: 0 Jan 9 13:43:17.780: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:43:18.372: INFO: Number of nodes with available pods: 0 Jan 9 13:43:18.372: INFO: Node iruya-node is 
running more than one daemon pod Jan 9 13:43:19.368: INFO: Number of nodes with available pods: 0 Jan 9 13:43:19.368: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:43:20.327: INFO: Number of nodes with available pods: 2 Jan 9 13:43:20.327: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jan 9 13:43:20.437: INFO: Wrong image for pod: daemon-set-bzhth. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:20.437: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:21.452: INFO: Wrong image for pod: daemon-set-bzhth. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:21.452: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:23.097: INFO: Wrong image for pod: daemon-set-bzhth. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:23.097: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:23.456: INFO: Wrong image for pod: daemon-set-bzhth. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:23.456: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:24.531: INFO: Wrong image for pod: daemon-set-bzhth. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:24.531: INFO: Wrong image for pod: daemon-set-l4bm7. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:25.451: INFO: Wrong image for pod: daemon-set-bzhth. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:25.451: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:26.464: INFO: Wrong image for pod: daemon-set-bzhth. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:26.464: INFO: Pod daemon-set-bzhth is not available Jan 9 13:43:26.464: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:27.452: INFO: Wrong image for pod: daemon-set-bzhth. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:27.452: INFO: Pod daemon-set-bzhth is not available Jan 9 13:43:27.452: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:28.458: INFO: Wrong image for pod: daemon-set-bzhth. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:28.458: INFO: Pod daemon-set-bzhth is not available Jan 9 13:43:28.458: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:29.451: INFO: Wrong image for pod: daemon-set-bzhth. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:29.451: INFO: Pod daemon-set-bzhth is not available Jan 9 13:43:29.451: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jan 9 13:43:30.455: INFO: Wrong image for pod: daemon-set-bzhth. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:30.455: INFO: Pod daemon-set-bzhth is not available Jan 9 13:43:30.455: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:31.451: INFO: Wrong image for pod: daemon-set-bzhth. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:31.451: INFO: Pod daemon-set-bzhth is not available Jan 9 13:43:31.451: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:32.467: INFO: Wrong image for pod: daemon-set-bzhth. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:32.467: INFO: Pod daemon-set-bzhth is not available Jan 9 13:43:32.467: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:33.449: INFO: Wrong image for pod: daemon-set-bzhth. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:33.449: INFO: Pod daemon-set-bzhth is not available Jan 9 13:43:33.449: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:34.453: INFO: Wrong image for pod: daemon-set-bzhth. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:34.453: INFO: Pod daemon-set-bzhth is not available Jan 9 13:43:34.453: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jan 9 13:43:35.453: INFO: Wrong image for pod: daemon-set-bzhth. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:35.453: INFO: Pod daemon-set-bzhth is not available Jan 9 13:43:35.453: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:36.455: INFO: Wrong image for pod: daemon-set-bzhth. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:36.455: INFO: Pod daemon-set-bzhth is not available Jan 9 13:43:36.455: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:37.451: INFO: Wrong image for pod: daemon-set-bzhth. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:37.451: INFO: Pod daemon-set-bzhth is not available Jan 9 13:43:37.451: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:38.453: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:38.453: INFO: Pod daemon-set-rlg2m is not available Jan 9 13:43:39.947: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:39.947: INFO: Pod daemon-set-rlg2m is not available Jan 9 13:43:40.506: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:40.506: INFO: Pod daemon-set-rlg2m is not available Jan 9 13:43:41.447: INFO: Wrong image for pod: daemon-set-l4bm7. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:41.447: INFO: Pod daemon-set-rlg2m is not available Jan 9 13:43:42.457: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:42.457: INFO: Pod daemon-set-rlg2m is not available Jan 9 13:43:44.124: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:44.124: INFO: Pod daemon-set-rlg2m is not available Jan 9 13:43:44.464: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:44.465: INFO: Pod daemon-set-rlg2m is not available Jan 9 13:43:45.451: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:45.451: INFO: Pod daemon-set-rlg2m is not available Jan 9 13:43:46.461: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:46.461: INFO: Pod daemon-set-rlg2m is not available Jan 9 13:43:47.451: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:48.460: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:49.449: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:50.456: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jan 9 13:43:51.452: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:52.458: INFO: Wrong image for pod: daemon-set-l4bm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 9 13:43:52.458: INFO: Pod daemon-set-l4bm7 is not available Jan 9 13:43:53.457: INFO: Pod daemon-set-zltfr is not available STEP: Check that daemon pods are still running on every node of the cluster. Jan 9 13:43:53.478: INFO: Number of nodes with available pods: 1 Jan 9 13:43:53.478: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:43:54.516: INFO: Number of nodes with available pods: 1 Jan 9 13:43:54.516: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:43:55.500: INFO: Number of nodes with available pods: 1 Jan 9 13:43:55.500: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:43:56.504: INFO: Number of nodes with available pods: 1 Jan 9 13:43:56.505: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:43:57.497: INFO: Number of nodes with available pods: 1 Jan 9 13:43:57.497: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:43:58.500: INFO: Number of nodes with available pods: 1 Jan 9 13:43:58.500: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:43:59.495: INFO: Number of nodes with available pods: 1 Jan 9 13:43:59.495: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:44:00.496: INFO: Number of nodes with available pods: 1 Jan 9 13:44:00.496: INFO: Node iruya-node is running more than one daemon pod Jan 9 13:44:01.490: INFO: Number of nodes with available pods: 2 Jan 9 13:44:01.490: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: 
deleting DaemonSet.extensions daemon-set in namespace daemonsets-8818, will wait for the garbage collector to delete the pods Jan 9 13:44:01.580: INFO: Deleting DaemonSet.extensions daemon-set took: 15.266084ms Jan 9 13:44:01.880: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.442346ms Jan 9 13:44:17.900: INFO: Number of nodes with available pods: 0 Jan 9 13:44:17.901: INFO: Number of running nodes: 0, number of available pods: 0 Jan 9 13:44:17.939: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8818/daemonsets","resourceVersion":"19903720"},"items":null} Jan 9 13:44:17.945: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8818/pods","resourceVersion":"19903720"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:44:17.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8818" for this suite. 
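[Editor's note] The rolling update exercised by this test is driven by the DaemonSet's `updateStrategy`; a minimal manifest sketch of that configuration follows (the names and image are illustrative, reconstructed from the log rather than taken from the test source):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set            # illustrative name matching the log
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate       # old pods are replaced node by node
    rollingUpdate:
      maxUnavailable: 1       # at most one node without a ready pod
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine  # image the test later updates
```

With this strategy, updating `spec.template` (e.g. the container image) triggers the per-node pod replacement visible in the "Wrong image for pod" polling above.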
Jan 9 13:44:23.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:44:24.077: INFO: namespace daemonsets-8818 deletion completed in 6.111847422s • [SLOW TEST:74.912 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:44:24.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jan 9 13:44:24.211: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1638,SelfLink:/api/v1/namespaces/watch-1638/configmaps/e2e-watch-test-resource-version,UID:c0848c8a-6a74-424e-abd8-77dedd477a8b,ResourceVersion:19903765,Generation:0,CreationTimestamp:2020-01-09 13:44:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 9 13:44:24.211: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1638,SelfLink:/api/v1/namespaces/watch-1638/configmaps/e2e-watch-test-resource-version,UID:c0848c8a-6a74-424e-abd8-77dedd477a8b,ResourceVersion:19903766,Generation:0,CreationTimestamp:2020-01-09 13:44:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:44:24.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1638" for this suite. 
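[Editor's note] The watch test above relies on the API server replaying only events newer than the supplied resourceVersion. The filtering semantics can be sketched in plain Python (the event tuples and function name are assumptions for illustration, not the client library's API; real resourceVersions are opaque strings and should not be compared numerically in production code):

```python
def replay_from(events, start_rv):
    """Yield (type, name, rv) tuples recorded after start_rv.

    Assumes integer-comparable versions purely for this sketch.
    """
    for event_type, name, rv in events:
        if rv > start_rv:
            yield event_type, name, rv

# Events as the test produced them: two modifications, then a delete.
# The first update's resourceVersion (19903764) is an assumed value;
# the log only shows 19903765 and 19903766.
history = [
    ("MODIFIED", "e2e-watch-test-resource-version", 19903764),
    ("MODIFIED", "e2e-watch-test-resource-version", 19903765),
    ("DELETED",  "e2e-watch-test-resource-version", 19903766),
]

# Watching from the first update yields only the later two events,
# matching the MODIFIED and DELETED notifications in the log.
observed = list(replay_from(history, 19903764))
```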
Jan 9 13:44:30.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:44:30.452: INFO: namespace watch-1638 deletion completed in 6.237838066s • [SLOW TEST:6.374 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:44:30.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 9 13:44:30.595: INFO: Waiting up to 5m0s for pod "downwardapi-volume-08da4e09-9f72-4f08-9842-e7e80a423cba" in namespace "downward-api-1131" to be "success or failure" Jan 9 13:44:30.612: INFO: Pod "downwardapi-volume-08da4e09-9f72-4f08-9842-e7e80a423cba": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.107329ms Jan 9 13:44:32.621: INFO: Pod "downwardapi-volume-08da4e09-9f72-4f08-9842-e7e80a423cba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026257773s Jan 9 13:44:34.634: INFO: Pod "downwardapi-volume-08da4e09-9f72-4f08-9842-e7e80a423cba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038527314s Jan 9 13:44:36.729: INFO: Pod "downwardapi-volume-08da4e09-9f72-4f08-9842-e7e80a423cba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.133362769s Jan 9 13:44:38.739: INFO: Pod "downwardapi-volume-08da4e09-9f72-4f08-9842-e7e80a423cba": Phase="Running", Reason="", readiness=true. Elapsed: 8.143765287s Jan 9 13:44:40.749: INFO: Pod "downwardapi-volume-08da4e09-9f72-4f08-9842-e7e80a423cba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.154325814s STEP: Saw pod success Jan 9 13:44:40.750: INFO: Pod "downwardapi-volume-08da4e09-9f72-4f08-9842-e7e80a423cba" satisfied condition "success or failure" Jan 9 13:44:40.759: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-08da4e09-9f72-4f08-9842-e7e80a423cba container client-container: STEP: delete the pod Jan 9 13:44:40.935: INFO: Waiting for pod downwardapi-volume-08da4e09-9f72-4f08-9842-e7e80a423cba to disappear Jan 9 13:44:40.982: INFO: Pod downwardapi-volume-08da4e09-9f72-4f08-9842-e7e80a423cba no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:44:40.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1131" for this suite. 
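[Editor's note] The per-item file mode checked by this test is set in a `downwardAPI` volume; a hedged manifest sketch (pod name, image, and paths are illustrative, not taken from the test source):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test   # illustrative
spec:
  containers:
  - name: client-container
    image: busybox                # illustrative image
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                # per-item file mode the test verifies
```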
Jan 9 13:44:47.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:44:47.112: INFO: namespace downward-api-1131 deletion completed in 6.124431062s
• [SLOW TEST:16.659 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:44:47.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3617
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-3617
STEP: Creating statefulset with conflicting port in namespace statefulset-3617
STEP: Waiting until pod test-pod will start running in namespace statefulset-3617
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3617
Jan 9 13:44:55.359: INFO: Observed stateful pod in namespace: statefulset-3617, name: ss-0, uid: 6a4c048d-ce5c-4eaa-98bc-83d95ca85961, status phase: Pending. Waiting for statefulset controller to delete.
Jan 9 13:44:56.528: INFO: Observed stateful pod in namespace: statefulset-3617, name: ss-0, uid: 6a4c048d-ce5c-4eaa-98bc-83d95ca85961, status phase: Failed. Waiting for statefulset controller to delete.
Jan 9 13:44:56.586: INFO: Observed stateful pod in namespace: statefulset-3617, name: ss-0, uid: 6a4c048d-ce5c-4eaa-98bc-83d95ca85961, status phase: Failed. Waiting for statefulset controller to delete.
Jan 9 13:44:56.594: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3617
STEP: Removing pod with conflicting port in namespace statefulset-3617
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3617 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 9 13:45:06.876: INFO: Deleting all statefulset in ns statefulset-3617
Jan 9 13:45:06.880: INFO: Scaling statefulset ss to 0
Jan 9 13:45:16.922: INFO: Waiting for statefulset status.replicas updated to 0
Jan 9 13:45:16.930: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:45:16.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3617" for this suite.
Jan 9 13:45:22.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:45:23.113: INFO: namespace statefulset-3617 deletion completed in 6.155088858s
• [SLOW TEST:36.001 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:45:23.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-ad9a5c3f-a883-4df3-ab5a-f76b6d3d24ad
STEP: Creating secret with name s-test-opt-upd-e9496022-9fc9-462e-8e8e-4ad07bb6dbf0
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-ad9a5c3f-a883-4df3-ab5a-f76b6d3d24ad
STEP: Updating secret s-test-opt-upd-e9496022-9fc9-462e-8e8e-4ad07bb6dbf0
STEP: Creating secret with name s-test-opt-create-9c588c08-6c46-4801-a1c0-9b49946476fe
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:47:01.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7234" for this suite.
Jan 9 13:47:23.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:47:23.731: INFO: namespace secrets-7234 deletion completed in 22.246572071s
• [SLOW TEST:120.617 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:47:23.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-05db96ba-07bc-4148-8a9b-7de6d05d5f1b
STEP: Creating secret with name secret-projected-all-test-volume-ead6515f-9fd3-4bc4-adba-5ef543f5d34f
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 9 13:47:23.936: INFO: Waiting up to 5m0s for pod "projected-volume-26f3e501-69af-43f3-91b7-b2428e2e63c1" in namespace "projected-8501" to be "success or failure"
Jan 9 13:47:23.950: INFO: Pod "projected-volume-26f3e501-69af-43f3-91b7-b2428e2e63c1": Phase="Pending", Reason="", readiness=false. Elapsed: 13.57692ms
Jan 9 13:47:25.958: INFO: Pod "projected-volume-26f3e501-69af-43f3-91b7-b2428e2e63c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021760142s
Jan 9 13:47:27.965: INFO: Pod "projected-volume-26f3e501-69af-43f3-91b7-b2428e2e63c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028346508s
Jan 9 13:47:29.971: INFO: Pod "projected-volume-26f3e501-69af-43f3-91b7-b2428e2e63c1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034375943s
Jan 9 13:47:31.986: INFO: Pod "projected-volume-26f3e501-69af-43f3-91b7-b2428e2e63c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049525071s
STEP: Saw pod success
Jan 9 13:47:31.986: INFO: Pod "projected-volume-26f3e501-69af-43f3-91b7-b2428e2e63c1" satisfied condition "success or failure"
Jan 9 13:47:32.002: INFO: Trying to get logs from node iruya-node pod projected-volume-26f3e501-69af-43f3-91b7-b2428e2e63c1 container projected-all-volume-test:
STEP: delete the pod
Jan 9 13:47:32.118: INFO: Waiting for pod projected-volume-26f3e501-69af-43f3-91b7-b2428e2e63c1 to disappear
Jan 9 13:47:32.145: INFO: Pod projected-volume-26f3e501-69af-43f3-91b7-b2428e2e63c1 no longer exists
[AfterEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:47:32.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8501" for this suite.
Jan 9 13:47:38.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:47:38.319: INFO: namespace projected-8501 deletion completed in 6.163818859s
• [SLOW TEST:14.589 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:47:38.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:47:44.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5753" for this suite.
Jan 9 13:47:50.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:47:50.962: INFO: namespace namespaces-5753 deletion completed in 6.196060597s
STEP: Destroying namespace "nsdeletetest-3258" for this suite.
Jan 9 13:47:50.965: INFO: Namespace nsdeletetest-3258 was already deleted
STEP: Destroying namespace "nsdeletetest-142" for this suite.
Jan 9 13:47:56.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:47:57.088: INFO: namespace nsdeletetest-142 deletion completed in 6.12252304s
• [SLOW TEST:18.768 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:47:57.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 9 13:48:03.375: INFO: 0 pods remaining
Jan 9 13:48:03.376: INFO: 0 pods has nil DeletionTimestamp
Jan 9 13:48:03.376: INFO:
STEP: Gathering metrics
W0109 13:48:04.297460 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 9 13:48:04.297: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:48:04.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5307" for this suite.
Jan 9 13:48:14.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:48:14.490: INFO: namespace gc-5307 deletion completed in 10.188353772s
• [SLOW TEST:17.402 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:48:14.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 9 13:48:23.219: INFO: Successfully updated pod "pod-update-81c138d8-c09c-43b1-a73d-3dedab198a0d"
STEP: verifying the updated pod is in kubernetes
Jan 9 13:48:23.228: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:48:23.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9572" for this suite.
Jan 9 13:48:45.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:48:45.334: INFO: namespace pods-9572 deletion completed in 22.101910167s
• [SLOW TEST:30.844 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:48:45.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 9 13:48:45.412: INFO: Waiting up to 5m0s for pod "pod-da3403ca-7827-4cfb-84cc-ae11a820ddce" in namespace "emptydir-5398" to be "success or failure"
Jan 9 13:48:45.463: INFO: Pod "pod-da3403ca-7827-4cfb-84cc-ae11a820ddce": Phase="Pending", Reason="", readiness=false. Elapsed: 51.454811ms
Jan 9 13:48:47.475: INFO: Pod "pod-da3403ca-7827-4cfb-84cc-ae11a820ddce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063322684s
Jan 9 13:48:49.481: INFO: Pod "pod-da3403ca-7827-4cfb-84cc-ae11a820ddce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069120615s
Jan 9 13:48:51.488: INFO: Pod "pod-da3403ca-7827-4cfb-84cc-ae11a820ddce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0760791s
Jan 9 13:48:53.498: INFO: Pod "pod-da3403ca-7827-4cfb-84cc-ae11a820ddce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.085755262s
STEP: Saw pod success
Jan 9 13:48:53.498: INFO: Pod "pod-da3403ca-7827-4cfb-84cc-ae11a820ddce" satisfied condition "success or failure"
Jan 9 13:48:53.502: INFO: Trying to get logs from node iruya-node pod pod-da3403ca-7827-4cfb-84cc-ae11a820ddce container test-container:
STEP: delete the pod
Jan 9 13:48:53.557: INFO: Waiting for pod pod-da3403ca-7827-4cfb-84cc-ae11a820ddce to disappear
Jan 9 13:48:53.617: INFO: Pod pod-da3403ca-7827-4cfb-84cc-ae11a820ddce no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:48:53.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5398" for this suite.
Jan 9 13:48:59.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:48:59.784: INFO: namespace emptydir-5398 deletion completed in 6.158688748s
• [SLOW TEST:14.450 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:48:59.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-5a9e5ff9-5eb2-492f-9134-c3ec91245efa
STEP: Creating a pod to test consume configMaps
Jan 9 13:48:59.975: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7bfe1985-238a-4474-bff8-3f7b443a8595" in namespace "projected-2951" to be "success or failure"
Jan 9 13:48:59.998: INFO: Pod "pod-projected-configmaps-7bfe1985-238a-4474-bff8-3f7b443a8595": Phase="Pending", Reason="", readiness=false. Elapsed: 23.684575ms
Jan 9 13:49:02.009: INFO: Pod "pod-projected-configmaps-7bfe1985-238a-4474-bff8-3f7b443a8595": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034643109s
Jan 9 13:49:04.023: INFO: Pod "pod-projected-configmaps-7bfe1985-238a-4474-bff8-3f7b443a8595": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048350266s
Jan 9 13:49:06.032: INFO: Pod "pod-projected-configmaps-7bfe1985-238a-4474-bff8-3f7b443a8595": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057254667s
Jan 9 13:49:08.046: INFO: Pod "pod-projected-configmaps-7bfe1985-238a-4474-bff8-3f7b443a8595": Phase="Running", Reason="", readiness=true. Elapsed: 8.071621591s
Jan 9 13:49:10.070: INFO: Pod "pod-projected-configmaps-7bfe1985-238a-4474-bff8-3f7b443a8595": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.095634074s
STEP: Saw pod success
Jan 9 13:49:10.070: INFO: Pod "pod-projected-configmaps-7bfe1985-238a-4474-bff8-3f7b443a8595" satisfied condition "success or failure"
Jan 9 13:49:10.073: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-7bfe1985-238a-4474-bff8-3f7b443a8595 container projected-configmap-volume-test:
STEP: delete the pod
Jan 9 13:49:10.116: INFO: Waiting for pod pod-projected-configmaps-7bfe1985-238a-4474-bff8-3f7b443a8595 to disappear
Jan 9 13:49:10.121: INFO: Pod pod-projected-configmaps-7bfe1985-238a-4474-bff8-3f7b443a8595 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:49:10.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2951" for this suite.
Jan 9 13:49:18.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:49:18.290: INFO: namespace projected-2951 deletion completed in 8.164147179s
• [SLOW TEST:18.506 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:49:18.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-bf4566c7-f5d9-4d26-871c-50f8b8493e44
STEP: Creating a pod to test consume configMaps
Jan 9 13:49:18.551: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-95fafd36-5af9-4fc5-aca2-fd3eb6f61908" in namespace "projected-9497" to be "success or failure"
Jan 9 13:49:18.568: INFO: Pod "pod-projected-configmaps-95fafd36-5af9-4fc5-aca2-fd3eb6f61908": Phase="Pending", Reason="", readiness=false. Elapsed: 16.628672ms
Jan 9 13:49:20.598: INFO: Pod "pod-projected-configmaps-95fafd36-5af9-4fc5-aca2-fd3eb6f61908": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046125637s
Jan 9 13:49:22.626: INFO: Pod "pod-projected-configmaps-95fafd36-5af9-4fc5-aca2-fd3eb6f61908": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074322221s
Jan 9 13:49:24.637: INFO: Pod "pod-projected-configmaps-95fafd36-5af9-4fc5-aca2-fd3eb6f61908": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085783966s
Jan 9 13:49:26.650: INFO: Pod "pod-projected-configmaps-95fafd36-5af9-4fc5-aca2-fd3eb6f61908": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098783149s
Jan 9 13:49:28.666: INFO: Pod "pod-projected-configmaps-95fafd36-5af9-4fc5-aca2-fd3eb6f61908": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.11492561s
STEP: Saw pod success
Jan 9 13:49:28.667: INFO: Pod "pod-projected-configmaps-95fafd36-5af9-4fc5-aca2-fd3eb6f61908" satisfied condition "success or failure"
Jan 9 13:49:28.673: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-95fafd36-5af9-4fc5-aca2-fd3eb6f61908 container projected-configmap-volume-test:
STEP: delete the pod
Jan 9 13:49:28.787: INFO: Waiting for pod pod-projected-configmaps-95fafd36-5af9-4fc5-aca2-fd3eb6f61908 to disappear
Jan 9 13:49:28.794: INFO: Pod pod-projected-configmaps-95fafd36-5af9-4fc5-aca2-fd3eb6f61908 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:49:28.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9497" for this suite.
Jan 9 13:49:34.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:49:34.969: INFO: namespace projected-9497 deletion completed in 6.163382189s
• [SLOW TEST:16.678 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:49:34.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Jan 9 13:49:35.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 9 13:49:35.322: INFO: stderr: ""
Jan 9 13:49:35.322: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:49:35.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6822" for this suite.
Jan 9 13:49:41.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:49:41.565: INFO: namespace kubectl-6822 deletion completed in 6.226799387s
• [SLOW TEST:6.595 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl api-versions
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:49:41.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5223
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 9 13:49:41.653: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 9 13:50:18.309: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5223 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 9 13:50:18.310: INFO: >>> kubeConfig: /root/.kube/config
I0109 13:50:18.386736 8 log.go:172] (0xc00187be40) (0xc0025950e0) Create stream
I0109 13:50:18.386816 8 log.go:172] (0xc00187be40) (0xc0025950e0) Stream added, broadcasting: 1
I0109 13:50:18.400810 8 log.go:172] (0xc00187be40) Reply frame received for 1
I0109 13:50:18.400932 8 log.go:172] (0xc00187be40) (0xc0017a5860) Create stream
I0109 13:50:18.400952 8 log.go:172] (0xc00187be40) (0xc0017a5860) Stream added, broadcasting: 3
I0109 13:50:18.404116 8 log.go:172] (0xc00187be40) Reply frame received for 3
I0109 13:50:18.404159 8 log.go:172] (0xc00187be40) (0xc0017a5900) Create stream
I0109 13:50:18.404187 8 log.go:172] (0xc00187be40) (0xc0017a5900) Stream added, broadcasting: 5
I0109 13:50:18.406415 8 log.go:172] (0xc00187be40) Reply frame received for 5
I0109 13:50:19.742008 8 log.go:172] (0xc00187be40) Data frame received for 3
I0109 13:50:19.742201 8 log.go:172] (0xc0017a5860) (3) Data frame handling
I0109 13:50:19.742256 8 log.go:172] (0xc0017a5860) (3) Data frame sent
I0109 13:50:19.993330 8 log.go:172] (0xc00187be40) (0xc0017a5860) Stream removed, broadcasting: 3
I0109 13:50:19.993779 8 log.go:172] (0xc00187be40) Data frame received for 1
I0109 13:50:19.993809 8 log.go:172] (0xc0025950e0) (1) Data frame handling
I0109 13:50:19.993831 8 log.go:172] (0xc0025950e0) (1) Data frame sent
I0109 13:50:19.993848 8 log.go:172] (0xc00187be40) (0xc0025950e0) Stream removed, broadcasting: 1
I0109 13:50:19.994337 8 log.go:172] (0xc00187be40) (0xc0017a5900) Stream removed, broadcasting: 5
I0109 13:50:19.994436 8 log.go:172] (0xc00187be40) (0xc0025950e0) Stream removed, broadcasting: 1
I0109 13:50:19.994457 8 log.go:172] (0xc00187be40) (0xc0017a5860) Stream removed, broadcasting: 3
I0109 13:50:19.994480 8 log.go:172] (0xc00187be40) (0xc0017a5900) Stream removed, broadcasting: 5
I0109 13:50:19.994896 8 log.go:172] (0xc00187be40) Go away received
Jan 9 13:50:19.995: INFO: Found all expected endpoints: [netserver-0]
Jan 9 13:50:20.008: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5223 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 9 13:50:20.008: INFO: >>> kubeConfig: /root/.kube/config
I0109 13:50:20.084854 8 log.go:172] (0xc0017daf20) (0xc00247bc20) Create stream
I0109 13:50:20.085048 8 log.go:172] (0xc0017daf20) (0xc00247bc20) Stream added, broadcasting: 1
I0109 13:50:20.100477 8 log.go:172] (0xc0017daf20) Reply frame received for 1
I0109 13:50:20.100659 8 log.go:172] (0xc0017daf20) (0xc0024ec320) Create stream
I0109 13:50:20.100682 8 log.go:172] (0xc0017daf20) (0xc0024ec320) Stream added, broadcasting: 3
I0109 13:50:20.103061 8 log.go:172] (0xc0017daf20) Reply frame received for 3
I0109 13:50:20.103114 8 log.go:172] (0xc0017daf20) (0xc0024ec460) Create stream
I0109 13:50:20.103121 8 log.go:172] (0xc0017daf20) (0xc0024ec460) Stream added, broadcasting: 5
I0109 13:50:20.104985 8 log.go:172] (0xc0017daf20) Reply frame received for 5
I0109 13:50:21.225704 8 log.go:172] (0xc0017daf20) Data frame received for 3
I0109 13:50:21.225830 8 log.go:172] (0xc0024ec320) (3) Data frame handling
I0109 13:50:21.225848 8 log.go:172] (0xc0024ec320) (3) Data frame sent
I0109 13:50:21.384135 8 log.go:172] (0xc0017daf20) (0xc0024ec460) Stream removed, broadcasting: 5
I0109 13:50:21.384228 8 log.go:172] (0xc0017daf20) Data frame received for 1
I0109 13:50:21.384245 8 log.go:172] (0xc00247bc20) (1) Data frame handling
I0109 13:50:21.384259 8 log.go:172] (0xc00247bc20) (1) Data frame sent
I0109 13:50:21.384320 8 log.go:172] (0xc0017daf20) (0xc00247bc20) Stream removed, broadcasting: 1
I0109 13:50:21.384518 8 log.go:172] (0xc0017daf20) (0xc0024ec320) Stream removed, broadcasting: 3
I0109 13:50:21.384552 8 log.go:172] (0xc0017daf20) Go away received
I0109 13:50:21.385273 8 log.go:172] (0xc0017daf20) (0xc00247bc20) Stream removed, broadcasting: 1
I0109 13:50:21.385428 8 log.go:172] (0xc0017daf20) (0xc0024ec320) Stream removed, broadcasting: 3
I0109 13:50:21.385438 8 log.go:172] (0xc0017daf20) (0xc0024ec460) Stream removed, broadcasting: 5
Jan 9 13:50:21.385: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:50:21.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5223" for this suite.
Jan 9 13:50:45.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:50:45.548: INFO: namespace pod-network-test-5223 deletion completed in 24.144261769s
• [SLOW TEST:63.983 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:50:45.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 9 13:50:45.677: INFO: Waiting up to 5m0s for pod "downwardapi-volume-75a5ffb7-fecd-451a-8bb0-69cbfe098e25" in namespace "downward-api-3590" to be "success or failure"
Jan 9 13:50:45.686: INFO: Pod "downwardapi-volume-75a5ffb7-fecd-451a-8bb0-69cbfe098e25": Phase="Pending", Reason="", readiness=false. Elapsed: 8.399922ms
Jan 9 13:50:47.696: INFO: Pod "downwardapi-volume-75a5ffb7-fecd-451a-8bb0-69cbfe098e25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018547542s
Jan 9 13:50:49.707: INFO: Pod "downwardapi-volume-75a5ffb7-fecd-451a-8bb0-69cbfe098e25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0298344s
Jan 9 13:50:51.719: INFO: Pod "downwardapi-volume-75a5ffb7-fecd-451a-8bb0-69cbfe098e25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041832812s
Jan 9 13:50:53.735: INFO: Pod "downwardapi-volume-75a5ffb7-fecd-451a-8bb0-69cbfe098e25": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057763142s
Jan 9 13:50:55.742: INFO: Pod "downwardapi-volume-75a5ffb7-fecd-451a-8bb0-69cbfe098e25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064765926s
STEP: Saw pod success
Jan 9 13:50:55.742: INFO: Pod "downwardapi-volume-75a5ffb7-fecd-451a-8bb0-69cbfe098e25" satisfied condition "success or failure"
Jan 9 13:50:55.745: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-75a5ffb7-fecd-451a-8bb0-69cbfe098e25 container client-container:
STEP: delete the pod
Jan 9 13:50:55.852: INFO: Waiting for pod downwardapi-volume-75a5ffb7-fecd-451a-8bb0-69cbfe098e25 to disappear
Jan 9 13:50:55.862: INFO: Pod downwardapi-volume-75a5ffb7-fecd-451a-8bb0-69cbfe098e25 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:50:55.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3590" for this suite.
Jan 9 13:51:01.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:51:02.108: INFO: namespace downward-api-3590 deletion completed in 6.158051652s
• [SLOW TEST:16.559 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-network] Service endpoints latency
should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:51:02.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be
provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-7546 I0109 13:51:02.290773 8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7546, replica count: 1 I0109 13:51:03.341815 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0109 13:51:04.342462 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0109 13:51:05.342844 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0109 13:51:06.343400 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0109 13:51:07.343859 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0109 13:51:08.344265 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0109 13:51:09.345022 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0109 13:51:10.345640 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 9 13:51:10.488: INFO: Created: latency-svc-sv8p6 Jan 9 13:51:10.509: INFO: Got endpoints: latency-svc-sv8p6 [62.913754ms] Jan 9 13:51:10.625: INFO: Created: latency-svc-kn5vc Jan 9 13:51:10.637: INFO: Got endpoints: 
latency-svc-kn5vc [127.286767ms] Jan 9 13:51:10.819: INFO: Created: latency-svc-wd2h7 Jan 9 13:51:10.830: INFO: Got endpoints: latency-svc-wd2h7 [318.899049ms] Jan 9 13:51:10.891: INFO: Created: latency-svc-nhc9x Jan 9 13:51:10.902: INFO: Got endpoints: latency-svc-nhc9x [391.278763ms] Jan 9 13:51:10.983: INFO: Created: latency-svc-sq8sl Jan 9 13:51:10.993: INFO: Got endpoints: latency-svc-sq8sl [482.226396ms] Jan 9 13:51:11.036: INFO: Created: latency-svc-jhq7z Jan 9 13:51:11.051: INFO: Got endpoints: latency-svc-jhq7z [540.419376ms] Jan 9 13:51:11.086: INFO: Created: latency-svc-vbmww Jan 9 13:51:11.186: INFO: Got endpoints: latency-svc-vbmww [675.451891ms] Jan 9 13:51:11.198: INFO: Created: latency-svc-dnq2v Jan 9 13:51:11.203: INFO: Got endpoints: latency-svc-dnq2v [692.493552ms] Jan 9 13:51:11.250: INFO: Created: latency-svc-v6nlh Jan 9 13:51:11.260: INFO: Got endpoints: latency-svc-v6nlh [749.18996ms] Jan 9 13:51:11.385: INFO: Created: latency-svc-56697 Jan 9 13:51:11.407: INFO: Got endpoints: latency-svc-56697 [896.023755ms] Jan 9 13:51:11.461: INFO: Created: latency-svc-svb8h Jan 9 13:51:11.468: INFO: Got endpoints: latency-svc-svb8h [957.322331ms] Jan 9 13:51:11.616: INFO: Created: latency-svc-csd76 Jan 9 13:51:11.630: INFO: Got endpoints: latency-svc-csd76 [1.121054583s] Jan 9 13:51:11.677: INFO: Created: latency-svc-vvnwp Jan 9 13:51:11.678: INFO: Got endpoints: latency-svc-vvnwp [1.166808772s] Jan 9 13:51:11.803: INFO: Created: latency-svc-t8f7p Jan 9 13:51:11.835: INFO: Got endpoints: latency-svc-t8f7p [1.323893259s] Jan 9 13:51:11.965: INFO: Created: latency-svc-fj4vl Jan 9 13:51:11.978: INFO: Got endpoints: latency-svc-fj4vl [1.468723813s] Jan 9 13:51:12.028: INFO: Created: latency-svc-q2j5b Jan 9 13:51:12.038: INFO: Got endpoints: latency-svc-q2j5b [1.526761765s] Jan 9 13:51:12.126: INFO: Created: latency-svc-bqqt8 Jan 9 13:51:12.146: INFO: Got endpoints: latency-svc-bqqt8 [1.508632054s] Jan 9 13:51:12.201: INFO: Created: latency-svc-9tcnm Jan 9 
13:51:12.320: INFO: Got endpoints: latency-svc-9tcnm [1.490261041s] Jan 9 13:51:12.335: INFO: Created: latency-svc-z7njm Jan 9 13:51:12.381: INFO: Got endpoints: latency-svc-z7njm [1.478759429s] Jan 9 13:51:12.461: INFO: Created: latency-svc-2qsmw Jan 9 13:51:12.470: INFO: Got endpoints: latency-svc-2qsmw [1.476476042s] Jan 9 13:51:12.538: INFO: Created: latency-svc-2knft Jan 9 13:51:12.555: INFO: Got endpoints: latency-svc-2knft [1.503070058s] Jan 9 13:51:12.716: INFO: Created: latency-svc-xg5jp Jan 9 13:51:12.831: INFO: Got endpoints: latency-svc-xg5jp [1.644624295s] Jan 9 13:51:12.840: INFO: Created: latency-svc-ks4s9 Jan 9 13:51:12.876: INFO: Got endpoints: latency-svc-ks4s9 [1.672660059s] Jan 9 13:51:12.907: INFO: Created: latency-svc-2wq5b Jan 9 13:51:12.921: INFO: Got endpoints: latency-svc-2wq5b [1.66043798s] Jan 9 13:51:12.999: INFO: Created: latency-svc-8nswd Jan 9 13:51:13.012: INFO: Got endpoints: latency-svc-8nswd [1.604654494s] Jan 9 13:51:13.059: INFO: Created: latency-svc-kjthh Jan 9 13:51:13.075: INFO: Got endpoints: latency-svc-kjthh [1.605963427s] Jan 9 13:51:13.176: INFO: Created: latency-svc-5qnld Jan 9 13:51:13.177: INFO: Got endpoints: latency-svc-5qnld [1.546561234s] Jan 9 13:51:13.255: INFO: Created: latency-svc-dh72m Jan 9 13:51:13.261: INFO: Got endpoints: latency-svc-dh72m [1.583205059s] Jan 9 13:51:13.337: INFO: Created: latency-svc-hscrk Jan 9 13:51:13.351: INFO: Got endpoints: latency-svc-hscrk [1.515539787s] Jan 9 13:51:13.389: INFO: Created: latency-svc-pkfq9 Jan 9 13:51:13.493: INFO: Got endpoints: latency-svc-pkfq9 [1.515002872s] Jan 9 13:51:13.511: INFO: Created: latency-svc-56pqs Jan 9 13:51:13.512: INFO: Got endpoints: latency-svc-56pqs [1.473732894s] Jan 9 13:51:13.572: INFO: Created: latency-svc-d9pm9 Jan 9 13:51:13.574: INFO: Got endpoints: latency-svc-d9pm9 [1.427431979s] Jan 9 13:51:13.819: INFO: Created: latency-svc-z8szw Jan 9 13:51:13.843: INFO: Got endpoints: latency-svc-z8szw [1.522414379s] Jan 9 13:51:13.964: INFO: 
Created: latency-svc-x9ghz Jan 9 13:51:14.018: INFO: Got endpoints: latency-svc-x9ghz [1.637108556s] Jan 9 13:51:14.031: INFO: Created: latency-svc-jf8kj Jan 9 13:51:14.032: INFO: Got endpoints: latency-svc-jf8kj [1.561610387s] Jan 9 13:51:14.140: INFO: Created: latency-svc-ql8kz Jan 9 13:51:14.145: INFO: Got endpoints: latency-svc-ql8kz [1.590196803s] Jan 9 13:51:14.201: INFO: Created: latency-svc-9c65m Jan 9 13:51:14.208: INFO: Got endpoints: latency-svc-9c65m [1.376461299s] Jan 9 13:51:14.309: INFO: Created: latency-svc-dfvkr Jan 9 13:51:14.320: INFO: Got endpoints: latency-svc-dfvkr [1.442908273s] Jan 9 13:51:14.370: INFO: Created: latency-svc-rpvns Jan 9 13:51:14.375: INFO: Got endpoints: latency-svc-rpvns [1.454536881s] Jan 9 13:51:14.489: INFO: Created: latency-svc-cl9nw Jan 9 13:51:14.510: INFO: Got endpoints: latency-svc-cl9nw [1.49857301s] Jan 9 13:51:14.559: INFO: Created: latency-svc-7w6bl Jan 9 13:51:14.621: INFO: Got endpoints: latency-svc-7w6bl [1.546624353s] Jan 9 13:51:14.649: INFO: Created: latency-svc-79hwh Jan 9 13:51:14.676: INFO: Got endpoints: latency-svc-79hwh [1.499521662s] Jan 9 13:51:14.834: INFO: Created: latency-svc-tsg2d Jan 9 13:51:14.848: INFO: Got endpoints: latency-svc-tsg2d [1.586258772s] Jan 9 13:51:14.910: INFO: Created: latency-svc-l7sg8 Jan 9 13:51:14.916: INFO: Got endpoints: latency-svc-l7sg8 [1.565421576s] Jan 9 13:51:15.016: INFO: Created: latency-svc-svp6m Jan 9 13:51:15.024: INFO: Got endpoints: latency-svc-svp6m [1.530156601s] Jan 9 13:51:15.074: INFO: Created: latency-svc-bw2m2 Jan 9 13:51:15.077: INFO: Got endpoints: latency-svc-bw2m2 [1.564209569s] Jan 9 13:51:15.159: INFO: Created: latency-svc-pzljn Jan 9 13:51:15.168: INFO: Got endpoints: latency-svc-pzljn [1.593564989s] Jan 9 13:51:15.212: INFO: Created: latency-svc-nzg9j Jan 9 13:51:15.222: INFO: Got endpoints: latency-svc-nzg9j [1.378465329s] Jan 9 13:51:15.306: INFO: Created: latency-svc-sx6z8 Jan 9 13:51:15.322: INFO: Got endpoints: latency-svc-sx6z8 
[1.303543934s] Jan 9 13:51:15.399: INFO: Created: latency-svc-8dzhx Jan 9 13:51:15.449: INFO: Got endpoints: latency-svc-8dzhx [1.417099522s] Jan 9 13:51:15.518: INFO: Created: latency-svc-5m5k5 Jan 9 13:51:15.533: INFO: Got endpoints: latency-svc-5m5k5 [1.387238838s] Jan 9 13:51:15.644: INFO: Created: latency-svc-zl8j5 Jan 9 13:51:15.657: INFO: Got endpoints: latency-svc-zl8j5 [1.449501693s] Jan 9 13:51:15.707: INFO: Created: latency-svc-5wfcg Jan 9 13:51:15.716: INFO: Got endpoints: latency-svc-5wfcg [1.395677676s] Jan 9 13:51:15.843: INFO: Created: latency-svc-tv278 Jan 9 13:51:15.855: INFO: Got endpoints: latency-svc-tv278 [1.479111778s] Jan 9 13:51:15.897: INFO: Created: latency-svc-xnwv2 Jan 9 13:51:15.911: INFO: Got endpoints: latency-svc-xnwv2 [1.399936961s] Jan 9 13:51:16.054: INFO: Created: latency-svc-5ppmh Jan 9 13:51:16.059: INFO: Got endpoints: latency-svc-5ppmh [1.437521448s] Jan 9 13:51:16.098: INFO: Created: latency-svc-h9b95 Jan 9 13:51:16.109: INFO: Got endpoints: latency-svc-h9b95 [1.43290249s] Jan 9 13:51:16.205: INFO: Created: latency-svc-7w4gp Jan 9 13:51:16.229: INFO: Got endpoints: latency-svc-7w4gp [1.381336882s] Jan 9 13:51:16.380: INFO: Created: latency-svc-7qgsf Jan 9 13:51:16.391: INFO: Got endpoints: latency-svc-7qgsf [1.474646703s] Jan 9 13:51:16.475: INFO: Created: latency-svc-zng7p Jan 9 13:51:16.529: INFO: Got endpoints: latency-svc-zng7p [1.505624778s] Jan 9 13:51:16.565: INFO: Created: latency-svc-gvpzc Jan 9 13:51:16.578: INFO: Got endpoints: latency-svc-gvpzc [1.500743349s] Jan 9 13:51:16.724: INFO: Created: latency-svc-sw82r Jan 9 13:51:16.740: INFO: Got endpoints: latency-svc-sw82r [1.572388475s] Jan 9 13:51:16.808: INFO: Created: latency-svc-g4lxd Jan 9 13:51:17.121: INFO: Got endpoints: latency-svc-g4lxd [1.898211364s] Jan 9 13:51:17.162: INFO: Created: latency-svc-jdhkd Jan 9 13:51:17.178: INFO: Got endpoints: latency-svc-jdhkd [1.855104918s] Jan 9 13:51:17.387: INFO: Created: latency-svc-lnwnv Jan 9 13:51:17.404: INFO: 
Got endpoints: latency-svc-lnwnv [1.954870698s] Jan 9 13:51:17.460: INFO: Created: latency-svc-894vt Jan 9 13:51:17.548: INFO: Got endpoints: latency-svc-894vt [2.014859063s] Jan 9 13:51:17.581: INFO: Created: latency-svc-r6swm Jan 9 13:51:17.587: INFO: Got endpoints: latency-svc-r6swm [1.92944575s] Jan 9 13:51:17.726: INFO: Created: latency-svc-v7xzz Jan 9 13:51:17.790: INFO: Got endpoints: latency-svc-v7xzz [242.123518ms] Jan 9 13:51:17.792: INFO: Created: latency-svc-blpwt Jan 9 13:51:17.808: INFO: Got endpoints: latency-svc-blpwt [2.092521445s] Jan 9 13:51:17.887: INFO: Created: latency-svc-rsppv Jan 9 13:51:17.909: INFO: Got endpoints: latency-svc-rsppv [2.054342907s] Jan 9 13:51:17.969: INFO: Created: latency-svc-djk7s Jan 9 13:51:18.094: INFO: Got endpoints: latency-svc-djk7s [2.182944327s] Jan 9 13:51:18.147: INFO: Created: latency-svc-lwwz6 Jan 9 13:51:18.156: INFO: Got endpoints: latency-svc-lwwz6 [2.096116832s] Jan 9 13:51:18.198: INFO: Created: latency-svc-45qxj Jan 9 13:51:18.283: INFO: Got endpoints: latency-svc-45qxj [2.173450949s] Jan 9 13:51:18.316: INFO: Created: latency-svc-sfnvk Jan 9 13:51:18.359: INFO: Created: latency-svc-fjqtb Jan 9 13:51:18.359: INFO: Got endpoints: latency-svc-sfnvk [2.129957682s] Jan 9 13:51:18.378: INFO: Got endpoints: latency-svc-fjqtb [1.986220065s] Jan 9 13:51:18.478: INFO: Created: latency-svc-cnjmj Jan 9 13:51:18.489: INFO: Got endpoints: latency-svc-cnjmj [1.959139846s] Jan 9 13:51:18.624: INFO: Created: latency-svc-fvn4j Jan 9 13:51:18.645: INFO: Got endpoints: latency-svc-fvn4j [2.067461091s] Jan 9 13:51:18.702: INFO: Created: latency-svc-bzm6s Jan 9 13:51:18.756: INFO: Got endpoints: latency-svc-bzm6s [2.015522415s] Jan 9 13:51:18.881: INFO: Created: latency-svc-6brfk Jan 9 13:51:18.912: INFO: Got endpoints: latency-svc-6brfk [1.791091767s] Jan 9 13:51:19.048: INFO: Created: latency-svc-6m5jx Jan 9 13:51:19.056: INFO: Got endpoints: latency-svc-6m5jx [1.878408075s] Jan 9 13:51:19.136: INFO: Created: 
latency-svc-wggtd Jan 9 13:51:19.227: INFO: Got endpoints: latency-svc-wggtd [1.822867622s] Jan 9 13:51:19.261: INFO: Created: latency-svc-dq4lf Jan 9 13:51:19.275: INFO: Got endpoints: latency-svc-dq4lf [1.687665801s] Jan 9 13:51:19.324: INFO: Created: latency-svc-z9ltj Jan 9 13:51:19.396: INFO: Got endpoints: latency-svc-z9ltj [1.605092248s] Jan 9 13:51:19.420: INFO: Created: latency-svc-5k26b Jan 9 13:51:19.433: INFO: Got endpoints: latency-svc-5k26b [1.624210161s] Jan 9 13:51:19.481: INFO: Created: latency-svc-5d4sm Jan 9 13:51:19.557: INFO: Got endpoints: latency-svc-5d4sm [1.647441773s] Jan 9 13:51:19.595: INFO: Created: latency-svc-rct6k Jan 9 13:51:19.601: INFO: Got endpoints: latency-svc-rct6k [1.507173345s] Jan 9 13:51:19.848: INFO: Created: latency-svc-fhmzd Jan 9 13:51:19.992: INFO: Got endpoints: latency-svc-fhmzd [1.835922546s] Jan 9 13:51:19.993: INFO: Created: latency-svc-6mr4p Jan 9 13:51:20.011: INFO: Got endpoints: latency-svc-6mr4p [1.727903657s] Jan 9 13:51:20.061: INFO: Created: latency-svc-67lzl Jan 9 13:51:20.262: INFO: Got endpoints: latency-svc-67lzl [1.902520427s] Jan 9 13:51:20.289: INFO: Created: latency-svc-m7dfx Jan 9 13:51:20.304: INFO: Got endpoints: latency-svc-m7dfx [1.925925341s] Jan 9 13:51:20.356: INFO: Created: latency-svc-dsld2 Jan 9 13:51:20.484: INFO: Got endpoints: latency-svc-dsld2 [1.995289895s] Jan 9 13:51:20.489: INFO: Created: latency-svc-2jj78 Jan 9 13:51:20.504: INFO: Got endpoints: latency-svc-2jj78 [1.857903587s] Jan 9 13:51:20.675: INFO: Created: latency-svc-frvfh Jan 9 13:51:20.690: INFO: Got endpoints: latency-svc-frvfh [1.933699622s] Jan 9 13:51:20.736: INFO: Created: latency-svc-29bmh Jan 9 13:51:20.758: INFO: Got endpoints: latency-svc-29bmh [1.845975289s] Jan 9 13:51:20.877: INFO: Created: latency-svc-kn2x6 Jan 9 13:51:20.942: INFO: Got endpoints: latency-svc-kn2x6 [1.886051825s] Jan 9 13:51:20.948: INFO: Created: latency-svc-k79z6 Jan 9 13:51:21.178: INFO: Got endpoints: latency-svc-k79z6 [1.950270484s] 
Jan 9 13:51:21.393: INFO: Created: latency-svc-bfj2h Jan 9 13:51:21.398: INFO: Created: latency-svc-4rwhq Jan 9 13:51:21.406: INFO: Got endpoints: latency-svc-bfj2h [2.130598384s] Jan 9 13:51:21.449: INFO: Got endpoints: latency-svc-4rwhq [2.053464088s] Jan 9 13:51:21.455: INFO: Created: latency-svc-vvxnr Jan 9 13:51:21.472: INFO: Got endpoints: latency-svc-vvxnr [2.039544837s] Jan 9 13:51:21.641: INFO: Created: latency-svc-k6bc7 Jan 9 13:51:21.650: INFO: Got endpoints: latency-svc-k6bc7 [2.092384578s] Jan 9 13:51:21.699: INFO: Created: latency-svc-jknnr Jan 9 13:51:21.700: INFO: Got endpoints: latency-svc-jknnr [2.098115411s] Jan 9 13:51:21.833: INFO: Created: latency-svc-k7wvt Jan 9 13:51:21.841: INFO: Got endpoints: latency-svc-k7wvt [1.84818883s] Jan 9 13:51:21.909: INFO: Created: latency-svc-2gmnz Jan 9 13:51:21.928: INFO: Got endpoints: latency-svc-2gmnz [1.916319125s] Jan 9 13:51:22.086: INFO: Created: latency-svc-k4fjf Jan 9 13:51:22.107: INFO: Got endpoints: latency-svc-k4fjf [1.844234638s] Jan 9 13:51:22.323: INFO: Created: latency-svc-rgrdh Jan 9 13:51:22.328: INFO: Got endpoints: latency-svc-rgrdh [2.024255446s] Jan 9 13:51:22.412: INFO: Created: latency-svc-qq5nn Jan 9 13:51:22.686: INFO: Got endpoints: latency-svc-qq5nn [2.200788807s] Jan 9 13:51:22.715: INFO: Created: latency-svc-dpp6l Jan 9 13:51:22.766: INFO: Got endpoints: latency-svc-dpp6l [2.262477969s] Jan 9 13:51:22.880: INFO: Created: latency-svc-6ckdc Jan 9 13:51:22.934: INFO: Got endpoints: latency-svc-6ckdc [2.243331532s] Jan 9 13:51:22.940: INFO: Created: latency-svc-fl2zv Jan 9 13:51:22.954: INFO: Got endpoints: latency-svc-fl2zv [2.194801828s] Jan 9 13:51:23.145: INFO: Created: latency-svc-5n2qg Jan 9 13:51:23.197: INFO: Got endpoints: latency-svc-5n2qg [2.25441669s] Jan 9 13:51:23.213: INFO: Created: latency-svc-c5zhh Jan 9 13:51:23.235: INFO: Got endpoints: latency-svc-c5zhh [2.05763959s] Jan 9 13:51:23.330: INFO: Created: latency-svc-px4kj Jan 9 13:51:23.342: INFO: Got endpoints: 
latency-svc-px4kj [1.935484586s] Jan 9 13:51:23.492: INFO: Created: latency-svc-9jnf8 Jan 9 13:51:23.498: INFO: Got endpoints: latency-svc-9jnf8 [2.048548885s] Jan 9 13:51:23.583: INFO: Created: latency-svc-blth9 Jan 9 13:51:23.587: INFO: Got endpoints: latency-svc-blth9 [2.115017349s] Jan 9 13:51:23.759: INFO: Created: latency-svc-w6zd2 Jan 9 13:51:23.762: INFO: Got endpoints: latency-svc-w6zd2 [2.112705671s] Jan 9 13:51:23.911: INFO: Created: latency-svc-zlth7 Jan 9 13:51:23.996: INFO: Got endpoints: latency-svc-zlth7 [2.295716702s] Jan 9 13:51:24.003: INFO: Created: latency-svc-ks5ts Jan 9 13:51:24.109: INFO: Got endpoints: latency-svc-ks5ts [2.268080335s] Jan 9 13:51:24.136: INFO: Created: latency-svc-9bgz7 Jan 9 13:51:24.191: INFO: Created: latency-svc-t257q Jan 9 13:51:24.191: INFO: Got endpoints: latency-svc-9bgz7 [2.26319888s] Jan 9 13:51:24.322: INFO: Got endpoints: latency-svc-t257q [2.215293427s] Jan 9 13:51:24.375: INFO: Created: latency-svc-gdd6c Jan 9 13:51:24.602: INFO: Got endpoints: latency-svc-gdd6c [2.272754688s] Jan 9 13:51:24.618: INFO: Created: latency-svc-zbrjp Jan 9 13:51:24.629: INFO: Got endpoints: latency-svc-zbrjp [1.943361239s] Jan 9 13:51:24.869: INFO: Created: latency-svc-q59qf Jan 9 13:51:24.872: INFO: Got endpoints: latency-svc-q59qf [2.105322304s] Jan 9 13:51:25.066: INFO: Created: latency-svc-87gk8 Jan 9 13:51:25.074: INFO: Got endpoints: latency-svc-87gk8 [2.139260139s] Jan 9 13:51:25.148: INFO: Created: latency-svc-5tvdx Jan 9 13:51:25.318: INFO: Got endpoints: latency-svc-5tvdx [2.363903026s] Jan 9 13:51:25.339: INFO: Created: latency-svc-rkbc2 Jan 9 13:51:25.353: INFO: Got endpoints: latency-svc-rkbc2 [2.15495646s] Jan 9 13:51:25.394: INFO: Created: latency-svc-m6hf9 Jan 9 13:51:25.407: INFO: Got endpoints: latency-svc-m6hf9 [2.170749982s] Jan 9 13:51:25.526: INFO: Created: latency-svc-jzhj9 Jan 9 13:51:25.551: INFO: Got endpoints: latency-svc-jzhj9 [2.209362058s] Jan 9 13:51:25.585: INFO: Created: latency-svc-lhwgk Jan 9 
13:51:25.678: INFO: Got endpoints: latency-svc-lhwgk [2.179518787s] Jan 9 13:51:25.681: INFO: Created: latency-svc-vbpsh Jan 9 13:51:25.689: INFO: Got endpoints: latency-svc-vbpsh [2.101218569s] Jan 9 13:51:25.734: INFO: Created: latency-svc-nbcbz Jan 9 13:51:25.745: INFO: Got endpoints: latency-svc-nbcbz [1.982768995s] Jan 9 13:51:25.876: INFO: Created: latency-svc-rll4d Jan 9 13:51:25.878: INFO: Got endpoints: latency-svc-rll4d [1.882130163s] Jan 9 13:51:25.940: INFO: Created: latency-svc-vcptz Jan 9 13:51:25.947: INFO: Got endpoints: latency-svc-vcptz [1.836230628s] Jan 9 13:51:26.064: INFO: Created: latency-svc-w8mnt Jan 9 13:51:26.069: INFO: Got endpoints: latency-svc-w8mnt [1.877518596s] Jan 9 13:51:26.127: INFO: Created: latency-svc-d4vcq Jan 9 13:51:26.136: INFO: Got endpoints: latency-svc-d4vcq [1.81374388s] Jan 9 13:51:26.256: INFO: Created: latency-svc-hxwpt Jan 9 13:51:26.262: INFO: Got endpoints: latency-svc-hxwpt [1.660412417s] Jan 9 13:51:26.342: INFO: Created: latency-svc-pdztz Jan 9 13:51:26.423: INFO: Got endpoints: latency-svc-pdztz [1.793460583s] Jan 9 13:51:26.432: INFO: Created: latency-svc-wthgf Jan 9 13:51:26.442: INFO: Got endpoints: latency-svc-wthgf [1.569716038s] Jan 9 13:51:26.488: INFO: Created: latency-svc-9lsjc Jan 9 13:51:26.505: INFO: Got endpoints: latency-svc-9lsjc [1.430698387s] Jan 9 13:51:26.600: INFO: Created: latency-svc-4x7n7 Jan 9 13:51:26.614: INFO: Got endpoints: latency-svc-4x7n7 [1.29554215s] Jan 9 13:51:26.698: INFO: Created: latency-svc-mfsdk Jan 9 13:51:26.871: INFO: Got endpoints: latency-svc-mfsdk [1.518400304s] Jan 9 13:51:26.872: INFO: Created: latency-svc-58lcz Jan 9 13:51:26.987: INFO: Got endpoints: latency-svc-58lcz [1.57927431s] Jan 9 13:51:26.987: INFO: Created: latency-svc-k2dm4 Jan 9 13:51:27.004: INFO: Got endpoints: latency-svc-k2dm4 [1.452321661s] Jan 9 13:51:27.045: INFO: Created: latency-svc-vzsd6 Jan 9 13:51:27.048: INFO: Got endpoints: latency-svc-vzsd6 [1.370582228s] Jan 9 13:51:27.121: INFO: 
Created: latency-svc-8rw9r Jan 9 13:51:27.143: INFO: Got endpoints: latency-svc-8rw9r [1.454168052s] Jan 9 13:51:27.181: INFO: Created: latency-svc-n8bz4 Jan 9 13:51:27.182: INFO: Got endpoints: latency-svc-n8bz4 [1.436668891s] Jan 9 13:51:27.336: INFO: Created: latency-svc-7r5m6 Jan 9 13:51:27.347: INFO: Got endpoints: latency-svc-7r5m6 [1.468297571s] Jan 9 13:51:27.400: INFO: Created: latency-svc-4mjhs Jan 9 13:51:27.424: INFO: Got endpoints: latency-svc-4mjhs [1.477493762s] Jan 9 13:51:27.483: INFO: Created: latency-svc-nvk4r Jan 9 13:51:27.492: INFO: Got endpoints: latency-svc-nvk4r [1.422047236s] Jan 9 13:51:27.527: INFO: Created: latency-svc-zj5l6 Jan 9 13:51:27.534: INFO: Got endpoints: latency-svc-zj5l6 [1.397517677s] Jan 9 13:51:27.577: INFO: Created: latency-svc-87rr2 Jan 9 13:51:27.645: INFO: Got endpoints: latency-svc-87rr2 [1.382290733s] Jan 9 13:51:27.685: INFO: Created: latency-svc-x558n Jan 9 13:51:27.686: INFO: Got endpoints: latency-svc-x558n [1.262306995s] Jan 9 13:51:27.726: INFO: Created: latency-svc-fxg8b Jan 9 13:51:27.729: INFO: Got endpoints: latency-svc-fxg8b [1.286791178s] Jan 9 13:51:27.830: INFO: Created: latency-svc-9r4pq Jan 9 13:51:27.833: INFO: Got endpoints: latency-svc-9r4pq [1.328084337s] Jan 9 13:51:27.878: INFO: Created: latency-svc-g5j4j Jan 9 13:51:27.966: INFO: Got endpoints: latency-svc-g5j4j [1.352215601s] Jan 9 13:51:27.967: INFO: Created: latency-svc-l79q2 Jan 9 13:51:27.974: INFO: Got endpoints: latency-svc-l79q2 [1.102212705s] Jan 9 13:51:28.059: INFO: Created: latency-svc-hbwmz Jan 9 13:51:28.126: INFO: Got endpoints: latency-svc-hbwmz [1.13951884s] Jan 9 13:51:28.172: INFO: Created: latency-svc-466tz Jan 9 13:51:28.178: INFO: Got endpoints: latency-svc-466tz [1.174503971s] Jan 9 13:51:28.315: INFO: Created: latency-svc-kz2l8 Jan 9 13:51:28.388: INFO: Created: latency-svc-s5vm4 Jan 9 13:51:28.393: INFO: Got endpoints: latency-svc-kz2l8 [1.344858551s] Jan 9 13:51:28.468: INFO: Got endpoints: latency-svc-s5vm4 
[1.324561128s] Jan 9 13:51:28.516: INFO: Created: latency-svc-zqn9d Jan 9 13:51:28.519: INFO: Got endpoints: latency-svc-zqn9d [1.336261719s] Jan 9 13:51:28.614: INFO: Created: latency-svc-z97jn Jan 9 13:51:28.618: INFO: Got endpoints: latency-svc-z97jn [1.270767615s] Jan 9 13:51:28.682: INFO: Created: latency-svc-ksrgq Jan 9 13:51:28.687: INFO: Got endpoints: latency-svc-ksrgq [1.262336205s] Jan 9 13:51:28.882: INFO: Created: latency-svc-q6h7q Jan 9 13:51:28.892: INFO: Got endpoints: latency-svc-q6h7q [1.400544288s] Jan 9 13:51:28.953: INFO: Created: latency-svc-xztgx Jan 9 13:51:29.020: INFO: Got endpoints: latency-svc-xztgx [1.485667691s] Jan 9 13:51:29.030: INFO: Created: latency-svc-mgmjf Jan 9 13:51:29.030: INFO: Got endpoints: latency-svc-mgmjf [1.384504608s] Jan 9 13:51:29.074: INFO: Created: latency-svc-tff6p Jan 9 13:51:29.082: INFO: Got endpoints: latency-svc-tff6p [1.396213603s] Jan 9 13:51:29.185: INFO: Created: latency-svc-rsjl4 Jan 9 13:51:29.185: INFO: Got endpoints: latency-svc-rsjl4 [1.455651639s] Jan 9 13:51:29.382: INFO: Created: latency-svc-mg8dm Jan 9 13:51:29.390: INFO: Got endpoints: latency-svc-mg8dm [1.55634945s] Jan 9 13:51:29.448: INFO: Created: latency-svc-h247t Jan 9 13:51:29.451: INFO: Got endpoints: latency-svc-h247t [1.484202125s] Jan 9 13:51:29.552: INFO: Created: latency-svc-49vfv Jan 9 13:51:29.611: INFO: Got endpoints: latency-svc-49vfv [1.636858101s] Jan 9 13:51:29.612: INFO: Created: latency-svc-wj6k8 Jan 9 13:51:29.622: INFO: Got endpoints: latency-svc-wj6k8 [1.495237977s] Jan 9 13:51:29.703: INFO: Created: latency-svc-p2q7p Jan 9 13:51:29.739: INFO: Got endpoints: latency-svc-p2q7p [1.560189045s] Jan 9 13:51:29.843: INFO: Created: latency-svc-vzllc Jan 9 13:51:29.901: INFO: Got endpoints: latency-svc-vzllc [1.507947186s] Jan 9 13:51:29.910: INFO: Created: latency-svc-r67wj Jan 9 13:51:29.917: INFO: Got endpoints: latency-svc-r67wj [1.44858724s] Jan 9 13:51:30.006: INFO: Created: latency-svc-lmrlx Jan 9 13:51:30.025: INFO: 
Got endpoints: latency-svc-lmrlx [1.50653549s] Jan 9 13:51:30.079: INFO: Created: latency-svc-95vm7 Jan 9 13:51:30.147: INFO: Got endpoints: latency-svc-95vm7 [1.529278016s] Jan 9 13:51:30.183: INFO: Created: latency-svc-strwn Jan 9 13:51:30.186: INFO: Got endpoints: latency-svc-strwn [1.499023198s] Jan 9 13:51:30.244: INFO: Created: latency-svc-jg8cs Jan 9 13:51:30.355: INFO: Got endpoints: latency-svc-jg8cs [1.462654348s] Jan 9 13:51:30.394: INFO: Created: latency-svc-mdfzf Jan 9 13:51:30.418: INFO: Got endpoints: latency-svc-mdfzf [1.397922294s] Jan 9 13:51:30.532: INFO: Created: latency-svc-rp2sk Jan 9 13:51:30.548: INFO: Got endpoints: latency-svc-rp2sk [1.518053872s] Jan 9 13:51:30.597: INFO: Created: latency-svc-nvc7r Jan 9 13:51:30.610: INFO: Got endpoints: latency-svc-nvc7r [1.528330919s] Jan 9 13:51:30.689: INFO: Created: latency-svc-l787b Jan 9 13:51:30.693: INFO: Got endpoints: latency-svc-l787b [1.507608898s] Jan 9 13:51:30.745: INFO: Created: latency-svc-kj4hl Jan 9 13:51:30.781: INFO: Got endpoints: latency-svc-kj4hl [1.391277656s] Jan 9 13:51:30.910: INFO: Created: latency-svc-gsf45 Jan 9 13:51:30.974: INFO: Got endpoints: latency-svc-gsf45 [1.522710232s] Jan 9 13:51:30.977: INFO: Created: latency-svc-94l47 Jan 9 13:51:31.044: INFO: Got endpoints: latency-svc-94l47 [1.433216913s] Jan 9 13:51:31.102: INFO: Created: latency-svc-jb9dx Jan 9 13:51:31.230: INFO: Got endpoints: latency-svc-jb9dx [1.607527397s] Jan 9 13:51:31.233: INFO: Created: latency-svc-ln28c Jan 9 13:51:31.254: INFO: Got endpoints: latency-svc-ln28c [1.515186883s] Jan 9 13:51:31.332: INFO: Created: latency-svc-dn2tm Jan 9 13:51:31.459: INFO: Got endpoints: latency-svc-dn2tm [1.557109122s] Jan 9 13:51:31.489: INFO: Created: latency-svc-4c9cf Jan 9 13:51:31.572: INFO: Created: latency-svc-2ll7f Jan 9 13:51:31.572: INFO: Got endpoints: latency-svc-4c9cf [1.654437716s] Jan 9 13:51:31.659: INFO: Got endpoints: latency-svc-2ll7f [1.633644804s] Jan 9 13:51:31.695: INFO: Created: 
latency-svc-mrslv Jan 9 13:51:31.699: INFO: Got endpoints: latency-svc-mrslv [1.55138659s] Jan 9 13:51:32.256: INFO: Created: latency-svc-n4n8s Jan 9 13:51:32.456: INFO: Got endpoints: latency-svc-n4n8s [2.270304603s] Jan 9 13:51:32.614: INFO: Created: latency-svc-xnc9m Jan 9 13:51:32.639: INFO: Got endpoints: latency-svc-xnc9m [2.283284094s] Jan 9 13:51:32.664: INFO: Created: latency-svc-vfl4v Jan 9 13:51:32.664: INFO: Got endpoints: latency-svc-vfl4v [2.245734969s] Jan 9 13:51:32.707: INFO: Created: latency-svc-x84fl Jan 9 13:51:32.788: INFO: Got endpoints: latency-svc-x84fl [2.239476026s] Jan 9 13:51:32.833: INFO: Created: latency-svc-c2tqk Jan 9 13:51:32.845: INFO: Got endpoints: latency-svc-c2tqk [2.234968166s] Jan 9 13:51:32.961: INFO: Created: latency-svc-96l5w Jan 9 13:51:32.963: INFO: Got endpoints: latency-svc-96l5w [2.269646431s] Jan 9 13:51:33.026: INFO: Created: latency-svc-gs7vc Jan 9 13:51:33.044: INFO: Got endpoints: latency-svc-gs7vc [2.26227145s] Jan 9 13:51:33.156: INFO: Created: latency-svc-wxj5l Jan 9 13:51:33.176: INFO: Got endpoints: latency-svc-wxj5l [2.202195716s] Jan 9 13:51:33.183: INFO: Created: latency-svc-pr86b Jan 9 13:51:33.191: INFO: Got endpoints: latency-svc-pr86b [2.146855544s] Jan 9 13:51:33.313: INFO: Created: latency-svc-dhsmw Jan 9 13:51:33.351: INFO: Got endpoints: latency-svc-dhsmw [2.120118182s] Jan 9 13:51:33.351: INFO: Latencies: [127.286767ms 242.123518ms 318.899049ms 391.278763ms 482.226396ms 540.419376ms 675.451891ms 692.493552ms 749.18996ms 896.023755ms 957.322331ms 1.102212705s 1.121054583s 1.13951884s 1.166808772s 1.174503971s 1.262306995s 1.262336205s 1.270767615s 1.286791178s 1.29554215s 1.303543934s 1.323893259s 1.324561128s 1.328084337s 1.336261719s 1.344858551s 1.352215601s 1.370582228s 1.376461299s 1.378465329s 1.381336882s 1.382290733s 1.384504608s 1.387238838s 1.391277656s 1.395677676s 1.396213603s 1.397517677s 1.397922294s 1.399936961s 1.400544288s 1.417099522s 1.422047236s 1.427431979s 1.430698387s 
1.43290249s 1.433216913s 1.436668891s 1.437521448s 1.442908273s 1.44858724s 1.449501693s 1.452321661s 1.454168052s 1.454536881s 1.455651639s 1.462654348s 1.468297571s 1.468723813s 1.473732894s 1.474646703s 1.476476042s 1.477493762s 1.478759429s 1.479111778s 1.484202125s 1.485667691s 1.490261041s 1.495237977s 1.49857301s 1.499023198s 1.499521662s 1.500743349s 1.503070058s 1.505624778s 1.50653549s 1.507173345s 1.507608898s 1.507947186s 1.508632054s 1.515002872s 1.515186883s 1.515539787s 1.518053872s 1.518400304s 1.522414379s 1.522710232s 1.526761765s 1.528330919s 1.529278016s 1.530156601s 1.546561234s 1.546624353s 1.55138659s 1.55634945s 1.557109122s 1.560189045s 1.561610387s 1.564209569s 1.565421576s 1.569716038s 1.572388475s 1.57927431s 1.583205059s 1.586258772s 1.590196803s 1.593564989s 1.604654494s 1.605092248s 1.605963427s 1.607527397s 1.624210161s 1.633644804s 1.636858101s 1.637108556s 1.644624295s 1.647441773s 1.654437716s 1.660412417s 1.66043798s 1.672660059s 1.687665801s 1.727903657s 1.791091767s 1.793460583s 1.81374388s 1.822867622s 1.835922546s 1.836230628s 1.844234638s 1.845975289s 1.84818883s 1.855104918s 1.857903587s 1.877518596s 1.878408075s 1.882130163s 1.886051825s 1.898211364s 1.902520427s 1.916319125s 1.925925341s 1.92944575s 1.933699622s 1.935484586s 1.943361239s 1.950270484s 1.954870698s 1.959139846s 1.982768995s 1.986220065s 1.995289895s 2.014859063s 2.015522415s 2.024255446s 2.039544837s 2.048548885s 2.053464088s 2.054342907s 2.05763959s 2.067461091s 2.092384578s 2.092521445s 2.096116832s 2.098115411s 2.101218569s 2.105322304s 2.112705671s 2.115017349s 2.120118182s 2.129957682s 2.130598384s 2.139260139s 2.146855544s 2.15495646s 2.170749982s 2.173450949s 2.179518787s 2.182944327s 2.194801828s 2.200788807s 2.202195716s 2.209362058s 2.215293427s 2.234968166s 2.239476026s 2.243331532s 2.245734969s 2.25441669s 2.26227145s 2.262477969s 2.26319888s 2.268080335s 2.269646431s 2.270304603s 2.272754688s 2.283284094s 2.295716702s 2.363903026s] Jan 9 
13:51:33.352: INFO: 50 %ile: 1.565421576s Jan 9 13:51:33.352: INFO: 90 %ile: 2.194801828s Jan 9 13:51:33.352: INFO: 99 %ile: 2.295716702s Jan 9 13:51:33.352: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:51:33.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-7546" for this suite. Jan 9 13:52:17.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:52:17.532: INFO: namespace svc-latency-7546 deletion completed in 44.170114628s • [SLOW TEST:75.424 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:52:17.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Jan 9 13:52:17.668: INFO: Waiting up to 5m0s for pod "var-expansion-df5dcb35-09c5-4fc8-a932-4f77276138ec" in namespace "var-expansion-691" 
to be "success or failure" Jan 9 13:52:17.676: INFO: Pod "var-expansion-df5dcb35-09c5-4fc8-a932-4f77276138ec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058125ms Jan 9 13:52:19.687: INFO: Pod "var-expansion-df5dcb35-09c5-4fc8-a932-4f77276138ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018589101s Jan 9 13:52:21.697: INFO: Pod "var-expansion-df5dcb35-09c5-4fc8-a932-4f77276138ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028462039s Jan 9 13:52:23.708: INFO: Pod "var-expansion-df5dcb35-09c5-4fc8-a932-4f77276138ec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039339994s Jan 9 13:52:25.715: INFO: Pod "var-expansion-df5dcb35-09c5-4fc8-a932-4f77276138ec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047052353s Jan 9 13:52:27.723: INFO: Pod "var-expansion-df5dcb35-09c5-4fc8-a932-4f77276138ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.05485197s STEP: Saw pod success Jan 9 13:52:27.723: INFO: Pod "var-expansion-df5dcb35-09c5-4fc8-a932-4f77276138ec" satisfied condition "success or failure" Jan 9 13:52:27.728: INFO: Trying to get logs from node iruya-node pod var-expansion-df5dcb35-09c5-4fc8-a932-4f77276138ec container dapi-container: STEP: delete the pod Jan 9 13:52:27.889: INFO: Waiting for pod var-expansion-df5dcb35-09c5-4fc8-a932-4f77276138ec to disappear Jan 9 13:52:27.909: INFO: Pod var-expansion-df5dcb35-09c5-4fc8-a932-4f77276138ec no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:52:27.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-691" for this suite. 
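The var-expansion test above exercises Kubernetes' `$(VAR)` substitution in a container's command. A minimal sketch of that substitution rule in Python, under the assumed (approximate) kubelet behavior that known variables are replaced, unresolvable `$(VAR)` references are left untouched, and `$$` escapes a literal `$`:

```python
def expand_command(args, env):
    """Substitute $(VAR) references in command args from an env map.

    Approximates how the kubelet expands container commands: known
    variables are replaced, unknown $(VAR) references stay as-is,
    and "$$" collapses to a literal "$".
    """
    def expand(arg):
        out = []
        i = 0
        while i < len(arg):
            if arg.startswith("$$", i):
                out.append("$")       # escaped dollar sign
                i += 2
            elif arg.startswith("$(", i):
                end = arg.find(")", i)
                if end == -1:         # unterminated reference: keep verbatim
                    out.append(arg[i:])
                    break
                name = arg[i + 2:end]
                # Unknown variables are left unexpanded, as written.
                out.append(env.get(name, arg[i:end + 1]))
                i = end + 1
            else:
                out.append(arg[i])
                i += 1
        return "".join(out)
    return [expand(a) for a in args]
```

For example, `expand_command(["echo", "$(POD_NAME)"], {"POD_NAME": "var-expansion-691"})` yields `["echo", "var-expansion-691"]`, which is the substitution the dapi-container's logs are checked against.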
Jan 9 13:52:33.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:52:34.075: INFO: namespace var-expansion-691 deletion completed in 6.153896041s • [SLOW TEST:16.543 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:52:34.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-b6193cc1-e109-457b-9995-3eb9e4e5ad0e STEP: Creating a pod to test consume configMaps Jan 9 13:52:34.179: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-01fc9444-4b76-4ae6-88e7-135d1c382a92" in namespace "projected-147" to be "success or failure" Jan 9 13:52:34.183: INFO: Pod "pod-projected-configmaps-01fc9444-4b76-4ae6-88e7-135d1c382a92": Phase="Pending", Reason="", readiness=false. Elapsed: 3.435066ms Jan 9 13:52:36.191: INFO: Pod "pod-projected-configmaps-01fc9444-4b76-4ae6-88e7-135d1c382a92": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.011477656s Jan 9 13:52:38.219: INFO: Pod "pod-projected-configmaps-01fc9444-4b76-4ae6-88e7-135d1c382a92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039937525s Jan 9 13:52:40.230: INFO: Pod "pod-projected-configmaps-01fc9444-4b76-4ae6-88e7-135d1c382a92": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050417332s Jan 9 13:52:42.240: INFO: Pod "pod-projected-configmaps-01fc9444-4b76-4ae6-88e7-135d1c382a92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060818529s STEP: Saw pod success Jan 9 13:52:42.240: INFO: Pod "pod-projected-configmaps-01fc9444-4b76-4ae6-88e7-135d1c382a92" satisfied condition "success or failure" Jan 9 13:52:42.245: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-01fc9444-4b76-4ae6-88e7-135d1c382a92 container projected-configmap-volume-test: STEP: delete the pod Jan 9 13:52:42.435: INFO: Waiting for pod pod-projected-configmaps-01fc9444-4b76-4ae6-88e7-135d1c382a92 to disappear Jan 9 13:52:42.446: INFO: Pod pod-projected-configmaps-01fc9444-4b76-4ae6-88e7-135d1c382a92 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:52:42.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-147" for this suite. 
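The projected-configMap test above consumes a ConfigMap through a volume "with mappings", i.e. an explicit `items` list mapping keys to file paths. A rough Python sketch of what the volume plugin materializes on disk, assuming each mapped key becomes one file whose content is that key's value:

```python
import os
import tempfile

def project_configmap(data, items, mount_dir):
    """Write selected ConfigMap keys to mapped paths under mount_dir,
    the way a projected configMap volume with `items` does. `data` is
    the ConfigMap's data map; `items` is a list of (key, path) pairs."""
    for key, rel_path in items:
        dest = os.path.join(mount_dir, rel_path)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        with open(dest, "w") as f:
            f.write(data[key])

# Hypothetical usage mirroring the test's key-to-path mapping:
mount = tempfile.mkdtemp()
project_configmap({"data-2": "value-2"}, [("data-2", "path/to/data-2")], mount)
```

Keys not listed in `items` are simply not projected, which is the behavior the test's container asserts by reading only the mapped path.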
Jan 9 13:52:48.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:52:48.597: INFO: namespace projected-147 deletion completed in 6.146416088s • [SLOW TEST:14.522 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:52:48.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-2nrb STEP: Creating a pod to test atomic-volume-subpath Jan 9 13:52:48.732: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-2nrb" in namespace "subpath-2642" to be "success or failure" Jan 9 13:52:48.737: INFO: Pod "pod-subpath-test-downwardapi-2nrb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.450874ms Jan 9 13:52:50.746: INFO: Pod "pod-subpath-test-downwardapi-2nrb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013605647s Jan 9 13:52:52.757: INFO: Pod "pod-subpath-test-downwardapi-2nrb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024605022s Jan 9 13:52:54.768: INFO: Pod "pod-subpath-test-downwardapi-2nrb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035358533s Jan 9 13:52:56.775: INFO: Pod "pod-subpath-test-downwardapi-2nrb": Phase="Running", Reason="", readiness=true. Elapsed: 8.042977107s Jan 9 13:52:58.783: INFO: Pod "pod-subpath-test-downwardapi-2nrb": Phase="Running", Reason="", readiness=true. Elapsed: 10.050892697s Jan 9 13:53:00.852: INFO: Pod "pod-subpath-test-downwardapi-2nrb": Phase="Running", Reason="", readiness=true. Elapsed: 12.119723174s Jan 9 13:53:02.866: INFO: Pod "pod-subpath-test-downwardapi-2nrb": Phase="Running", Reason="", readiness=true. Elapsed: 14.134061778s Jan 9 13:53:04.873: INFO: Pod "pod-subpath-test-downwardapi-2nrb": Phase="Running", Reason="", readiness=true. Elapsed: 16.140528166s Jan 9 13:53:06.881: INFO: Pod "pod-subpath-test-downwardapi-2nrb": Phase="Running", Reason="", readiness=true. Elapsed: 18.148309267s Jan 9 13:53:08.890: INFO: Pod "pod-subpath-test-downwardapi-2nrb": Phase="Running", Reason="", readiness=true. Elapsed: 20.15789934s Jan 9 13:53:11.112: INFO: Pod "pod-subpath-test-downwardapi-2nrb": Phase="Running", Reason="", readiness=true. Elapsed: 22.379220364s Jan 9 13:53:13.123: INFO: Pod "pod-subpath-test-downwardapi-2nrb": Phase="Running", Reason="", readiness=true. Elapsed: 24.390385594s Jan 9 13:53:15.136: INFO: Pod "pod-subpath-test-downwardapi-2nrb": Phase="Running", Reason="", readiness=true. Elapsed: 26.404097545s Jan 9 13:53:17.151: INFO: Pod "pod-subpath-test-downwardapi-2nrb": Phase="Running", Reason="", readiness=true. 
Elapsed: 28.418613757s Jan 9 13:53:19.159: INFO: Pod "pod-subpath-test-downwardapi-2nrb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.426287054s STEP: Saw pod success Jan 9 13:53:19.159: INFO: Pod "pod-subpath-test-downwardapi-2nrb" satisfied condition "success or failure" Jan 9 13:53:19.163: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-2nrb container test-container-subpath-downwardapi-2nrb: STEP: delete the pod Jan 9 13:53:19.315: INFO: Waiting for pod pod-subpath-test-downwardapi-2nrb to disappear Jan 9 13:53:19.321: INFO: Pod pod-subpath-test-downwardapi-2nrb no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-2nrb Jan 9 13:53:19.321: INFO: Deleting pod "pod-subpath-test-downwardapi-2nrb" in namespace "subpath-2642" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:53:19.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2642" for this suite. 
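"Atomic writer volumes" in the subpath test above are the volume types (configMap, secret, downward API, projected) whose contents the kubelet updates atomically. The real implementation swaps a `..data` symlink; the sketch below shows only the core write-then-rename idea, as an assumption-laden simplification:

```python
import os
import tempfile

def atomic_write(path, content):
    """Write content to path atomically: write a temp file in the same
    directory, then rename it over the destination. Readers observe
    either the old contents or the new, never a partial write."""
    dir_name = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(content)
            f.flush()
            os.fsync(f.fileno())      # ensure data hits disk before rename
        os.rename(tmp, path)          # atomic on POSIX within one filesystem
    except BaseException:
        os.unlink(tmp)
        raise
```

The temp file must live in the same directory as the destination, since `os.rename` is only atomic within a single filesystem.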
Jan 9 13:53:25.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:53:25.479: INFO: namespace subpath-2642 deletion completed in 6.150430546s • [SLOW TEST:36.879 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:53:25.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jan 9 13:53:25.627: INFO: Pod name pod-release: Found 0 pods out of 1 Jan 9 13:53:30.641: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:53:31.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2614" for this suite. 
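The ReplicationController test above changes a pod's labels so it no longer matches the RC's selector, and the controller "releases" the pod (orphans it) rather than deleting it. A toy Python sketch of the equality-based selector matching that drives this decision (function and field names here are illustrative, not the client-go API):

```python
def selector_matches(selector, labels):
    """An RC's equality-based selector matches a pod iff every
    selector key/value pair is present in the pod's labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def reconcile(selector, pods):
    """Partition pods into those the controller keeps managing and
    those it releases because their labels no longer match."""
    managed = [p for p in pods if selector_matches(selector, p["labels"])]
    released = [p for p in pods if not selector_matches(selector, p["labels"])]
    return managed, released
```

In the test, relabeling the `pod-release` pod moves it from the managed list to the released list on the controller's next sync, without the pod itself being restarted.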
Jan 9 13:53:37.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:53:37.971: INFO: namespace replication-controller-2614 deletion completed in 6.231922809s • [SLOW TEST:12.492 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:53:37.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3746 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Jan 9 13:53:38.264: INFO: Found 0 stateful pods, waiting for 3 Jan 9 13:53:48.276: INFO: Found 1 stateful pods, waiting for 3 Jan 9 13:53:58.294: INFO: Found 2 stateful pods, 
waiting for 3 Jan 9 13:54:08.280: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 9 13:54:08.280: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 9 13:54:08.280: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 9 13:54:08.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3746 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 9 13:54:10.697: INFO: stderr: "I0109 13:54:10.399451 1694 log.go:172] (0xc0006c6210) (0xc0005f8820) Create stream\nI0109 13:54:10.399565 1694 log.go:172] (0xc0006c6210) (0xc0005f8820) Stream added, broadcasting: 1\nI0109 13:54:10.403446 1694 log.go:172] (0xc0006c6210) Reply frame received for 1\nI0109 13:54:10.403511 1694 log.go:172] (0xc0006c6210) (0xc000188000) Create stream\nI0109 13:54:10.403521 1694 log.go:172] (0xc0006c6210) (0xc000188000) Stream added, broadcasting: 3\nI0109 13:54:10.404526 1694 log.go:172] (0xc0006c6210) Reply frame received for 3\nI0109 13:54:10.404547 1694 log.go:172] (0xc0006c6210) (0xc0001880a0) Create stream\nI0109 13:54:10.404554 1694 log.go:172] (0xc0006c6210) (0xc0001880a0) Stream added, broadcasting: 5\nI0109 13:54:10.405469 1694 log.go:172] (0xc0006c6210) Reply frame received for 5\nI0109 13:54:10.565260 1694 log.go:172] (0xc0006c6210) Data frame received for 5\nI0109 13:54:10.565332 1694 log.go:172] (0xc0001880a0) (5) Data frame handling\nI0109 13:54:10.565356 1694 log.go:172] (0xc0001880a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0109 13:54:10.609777 1694 log.go:172] (0xc0006c6210) Data frame received for 3\nI0109 13:54:10.609850 1694 log.go:172] (0xc000188000) (3) Data frame handling\nI0109 13:54:10.609874 1694 log.go:172] (0xc000188000) (3) Data frame sent\nI0109 13:54:10.689019 1694 log.go:172] (0xc0006c6210) Data frame received for 1\nI0109 13:54:10.689085 
1694 log.go:172] (0xc0006c6210) (0xc0001880a0) Stream removed, broadcasting: 5\nI0109 13:54:10.689155 1694 log.go:172] (0xc0005f8820) (1) Data frame handling\nI0109 13:54:10.689186 1694 log.go:172] (0xc0005f8820) (1) Data frame sent\nI0109 13:54:10.689214 1694 log.go:172] (0xc0006c6210) (0xc000188000) Stream removed, broadcasting: 3\nI0109 13:54:10.689240 1694 log.go:172] (0xc0006c6210) (0xc0005f8820) Stream removed, broadcasting: 1\nI0109 13:54:10.689257 1694 log.go:172] (0xc0006c6210) Go away received\nI0109 13:54:10.690295 1694 log.go:172] (0xc0006c6210) (0xc0005f8820) Stream removed, broadcasting: 1\nI0109 13:54:10.690312 1694 log.go:172] (0xc0006c6210) (0xc000188000) Stream removed, broadcasting: 3\nI0109 13:54:10.690319 1694 log.go:172] (0xc0006c6210) (0xc0001880a0) Stream removed, broadcasting: 5\n" Jan 9 13:54:10.697: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 9 13:54:10.697: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jan 9 13:54:20.739: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 9 13:54:30.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3746 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 13:54:31.294: INFO: stderr: "I0109 13:54:31.100998 1727 log.go:172] (0xc0009d4420) (0xc0003b46e0) Create stream\nI0109 13:54:31.101193 1727 log.go:172] (0xc0009d4420) (0xc0003b46e0) Stream added, broadcasting: 1\nI0109 13:54:31.106806 1727 log.go:172] (0xc0009d4420) Reply frame received for 1\nI0109 13:54:31.107126 1727 log.go:172] (0xc0009d4420) (0xc000a58000) Create stream\nI0109 13:54:31.107204 1727 log.go:172] (0xc0009d4420) (0xc000a58000) Stream added, 
broadcasting: 3\nI0109 13:54:31.113586 1727 log.go:172] (0xc0009d4420) Reply frame received for 3\nI0109 13:54:31.114593 1727 log.go:172] (0xc0009d4420) (0xc000a580a0) Create stream\nI0109 13:54:31.114633 1727 log.go:172] (0xc0009d4420) (0xc000a580a0) Stream added, broadcasting: 5\nI0109 13:54:31.117591 1727 log.go:172] (0xc0009d4420) Reply frame received for 5\nI0109 13:54:31.209902 1727 log.go:172] (0xc0009d4420) Data frame received for 3\nI0109 13:54:31.210030 1727 log.go:172] (0xc000a58000) (3) Data frame handling\nI0109 13:54:31.210066 1727 log.go:172] (0xc000a58000) (3) Data frame sent\nI0109 13:54:31.210116 1727 log.go:172] (0xc0009d4420) Data frame received for 5\nI0109 13:54:31.210149 1727 log.go:172] (0xc000a580a0) (5) Data frame handling\nI0109 13:54:31.210167 1727 log.go:172] (0xc000a580a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0109 13:54:31.284761 1727 log.go:172] (0xc0009d4420) (0xc000a58000) Stream removed, broadcasting: 3\nI0109 13:54:31.284941 1727 log.go:172] (0xc0009d4420) Data frame received for 1\nI0109 13:54:31.285012 1727 log.go:172] (0xc0003b46e0) (1) Data frame handling\nI0109 13:54:31.285048 1727 log.go:172] (0xc0003b46e0) (1) Data frame sent\nI0109 13:54:31.285090 1727 log.go:172] (0xc0009d4420) (0xc0003b46e0) Stream removed, broadcasting: 1\nI0109 13:54:31.285136 1727 log.go:172] (0xc0009d4420) (0xc000a580a0) Stream removed, broadcasting: 5\nI0109 13:54:31.285168 1727 log.go:172] (0xc0009d4420) Go away received\nI0109 13:54:31.286726 1727 log.go:172] (0xc0009d4420) (0xc0003b46e0) Stream removed, broadcasting: 1\nI0109 13:54:31.286741 1727 log.go:172] (0xc0009d4420) (0xc000a58000) Stream removed, broadcasting: 3\nI0109 13:54:31.286750 1727 log.go:172] (0xc0009d4420) (0xc000a580a0) Stream removed, broadcasting: 5\n" Jan 9 13:54:31.294: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 9 13:54:31.294: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: 
'/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 9 13:54:41.337: INFO: Waiting for StatefulSet statefulset-3746/ss2 to complete update Jan 9 13:54:41.337: INFO: Waiting for Pod statefulset-3746/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 9 13:54:41.337: INFO: Waiting for Pod statefulset-3746/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 9 13:54:51.952: INFO: Waiting for StatefulSet statefulset-3746/ss2 to complete update Jan 9 13:54:51.952: INFO: Waiting for Pod statefulset-3746/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 9 13:55:01.358: INFO: Waiting for StatefulSet statefulset-3746/ss2 to complete update Jan 9 13:55:01.358: INFO: Waiting for Pod statefulset-3746/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 9 13:55:11.380: INFO: Waiting for StatefulSet statefulset-3746/ss2 to complete update STEP: Rolling back to a previous revision Jan 9 13:55:21.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3746 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 9 13:55:21.811: INFO: stderr: "I0109 13:55:21.567499 1749 log.go:172] (0xc0001160b0) (0xc000734640) Create stream\nI0109 13:55:21.567668 1749 log.go:172] (0xc0001160b0) (0xc000734640) Stream added, broadcasting: 1\nI0109 13:55:21.574043 1749 log.go:172] (0xc0001160b0) Reply frame received for 1\nI0109 13:55:21.574125 1749 log.go:172] (0xc0001160b0) (0xc00084e000) Create stream\nI0109 13:55:21.574144 1749 log.go:172] (0xc0001160b0) (0xc00084e000) Stream added, broadcasting: 3\nI0109 13:55:21.576535 1749 log.go:172] (0xc0001160b0) Reply frame received for 3\nI0109 13:55:21.576615 1749 log.go:172] (0xc0001160b0) (0xc000562320) Create stream\nI0109 13:55:21.576646 1749 log.go:172] (0xc0001160b0) (0xc000562320) Stream added, broadcasting: 5\nI0109 13:55:21.579188 1749 log.go:172] (0xc0001160b0) Reply frame received 
for 5\nI0109 13:55:21.687242 1749 log.go:172] (0xc0001160b0) Data frame received for 5\nI0109 13:55:21.687300 1749 log.go:172] (0xc000562320) (5) Data frame handling\nI0109 13:55:21.687318 1749 log.go:172] (0xc000562320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0109 13:55:21.728038 1749 log.go:172] (0xc0001160b0) Data frame received for 3\nI0109 13:55:21.728075 1749 log.go:172] (0xc00084e000) (3) Data frame handling\nI0109 13:55:21.728097 1749 log.go:172] (0xc00084e000) (3) Data frame sent\nI0109 13:55:21.805217 1749 log.go:172] (0xc0001160b0) (0xc00084e000) Stream removed, broadcasting: 3\nI0109 13:55:21.805335 1749 log.go:172] (0xc0001160b0) Data frame received for 1\nI0109 13:55:21.805343 1749 log.go:172] (0xc000734640) (1) Data frame handling\nI0109 13:55:21.805354 1749 log.go:172] (0xc000734640) (1) Data frame sent\nI0109 13:55:21.805364 1749 log.go:172] (0xc0001160b0) (0xc000734640) Stream removed, broadcasting: 1\nI0109 13:55:21.805704 1749 log.go:172] (0xc0001160b0) (0xc000562320) Stream removed, broadcasting: 5\nI0109 13:55:21.805733 1749 log.go:172] (0xc0001160b0) (0xc000734640) Stream removed, broadcasting: 1\nI0109 13:55:21.805743 1749 log.go:172] (0xc0001160b0) (0xc00084e000) Stream removed, broadcasting: 3\nI0109 13:55:21.805749 1749 log.go:172] (0xc0001160b0) (0xc000562320) Stream removed, broadcasting: 5\n" Jan 9 13:55:21.811: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 9 13:55:21.811: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 9 13:55:31.878: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 9 13:55:42.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3746 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 13:55:42.642: INFO: stderr: "I0109 13:55:42.429125 1764 
log.go:172] (0xc00012afd0) (0xc0005d2b40) Create stream\nI0109 13:55:42.429437 1764 log.go:172] (0xc00012afd0) (0xc0005d2b40) Stream added, broadcasting: 1\nI0109 13:55:42.434630 1764 log.go:172] (0xc00012afd0) Reply frame received for 1\nI0109 13:55:42.434684 1764 log.go:172] (0xc00012afd0) (0xc0005d2be0) Create stream\nI0109 13:55:42.434691 1764 log.go:172] (0xc00012afd0) (0xc0005d2be0) Stream added, broadcasting: 3\nI0109 13:55:42.437827 1764 log.go:172] (0xc00012afd0) Reply frame received for 3\nI0109 13:55:42.437997 1764 log.go:172] (0xc00012afd0) (0xc0005d2c80) Create stream\nI0109 13:55:42.438028 1764 log.go:172] (0xc00012afd0) (0xc0005d2c80) Stream added, broadcasting: 5\nI0109 13:55:42.443158 1764 log.go:172] (0xc00012afd0) Reply frame received for 5\nI0109 13:55:42.536045 1764 log.go:172] (0xc00012afd0) Data frame received for 5\nI0109 13:55:42.536240 1764 log.go:172] (0xc0005d2c80) (5) Data frame handling\nI0109 13:55:42.536307 1764 log.go:172] (0xc0005d2c80) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0109 13:55:42.536395 1764 log.go:172] (0xc00012afd0) Data frame received for 3\nI0109 13:55:42.536409 1764 log.go:172] (0xc0005d2be0) (3) Data frame handling\nI0109 13:55:42.536432 1764 log.go:172] (0xc0005d2be0) (3) Data frame sent\nI0109 13:55:42.634041 1764 log.go:172] (0xc00012afd0) Data frame received for 1\nI0109 13:55:42.634253 1764 log.go:172] (0xc00012afd0) (0xc0005d2be0) Stream removed, broadcasting: 3\nI0109 13:55:42.634476 1764 log.go:172] (0xc00012afd0) (0xc0005d2c80) Stream removed, broadcasting: 5\nI0109 13:55:42.634842 1764 log.go:172] (0xc0005d2b40) (1) Data frame handling\nI0109 13:55:42.634911 1764 log.go:172] (0xc0005d2b40) (1) Data frame sent\nI0109 13:55:42.634922 1764 log.go:172] (0xc00012afd0) (0xc0005d2b40) Stream removed, broadcasting: 1\nI0109 13:55:42.634940 1764 log.go:172] (0xc00012afd0) Go away received\nI0109 13:55:42.635929 1764 log.go:172] (0xc00012afd0) (0xc0005d2b40) Stream removed, 
broadcasting: 1\nI0109 13:55:42.635941 1764 log.go:172] (0xc00012afd0) (0xc0005d2be0) Stream removed, broadcasting: 3\nI0109 13:55:42.635945 1764 log.go:172] (0xc00012afd0) (0xc0005d2c80) Stream removed, broadcasting: 5\n" Jan 9 13:55:42.642: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 9 13:55:42.642: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 9 13:55:52.669: INFO: Waiting for StatefulSet statefulset-3746/ss2 to complete update Jan 9 13:55:52.669: INFO: Waiting for Pod statefulset-3746/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 9 13:55:52.669: INFO: Waiting for Pod statefulset-3746/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 9 13:56:03.084: INFO: Waiting for StatefulSet statefulset-3746/ss2 to complete update Jan 9 13:56:03.085: INFO: Waiting for Pod statefulset-3746/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 9 13:56:03.085: INFO: Waiting for Pod statefulset-3746/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 9 13:56:13.040: INFO: Waiting for StatefulSet statefulset-3746/ss2 to complete update Jan 9 13:56:13.041: INFO: Waiting for Pod statefulset-3746/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 9 13:56:22.710: INFO: Waiting for StatefulSet statefulset-3746/ss2 to complete update Jan 9 13:56:22.710: INFO: Waiting for Pod statefulset-3746/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 9 13:56:32.739: INFO: Waiting for StatefulSet statefulset-3746/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 9 13:56:42.677: INFO: Deleting all statefulset in ns statefulset-3746 Jan 9 13:56:42.680: INFO: Scaling statefulset ss2 to 0 Jan 9 
13:57:12.765: INFO: Waiting for statefulset status.replicas updated to 0 Jan 9 13:57:12.772: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 13:57:12.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3746" for this suite. Jan 9 13:57:20.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 13:57:21.000: INFO: namespace statefulset-3746 deletion completed in 8.155808978s • [SLOW TEST:223.028 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 13:57:21.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 9 13:57:31.822: INFO: Successfully updated pod "annotationupdatef6946c0d-fb2b-4794-bde9-a61106899b5e"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 13:57:33.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9774" for this suite.
Jan 9 13:57:55.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 13:57:56.089: INFO: namespace projected-9774 deletion completed in 22.165713461s
• [SLOW TEST:35.089 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 13:57:56.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 9 13:57:57.174: INFO: Pod name wrapped-volume-race-a038a03c-36ad-4776-8341-7870bbf20bad: Found 0 pods out of 5
Jan 9 13:58:02.195: INFO: Pod name wrapped-volume-race-a038a03c-36ad-4776-8341-7870bbf20bad: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a038a03c-36ad-4776-8341-7870bbf20bad in namespace emptydir-wrapper-6055, will wait for the garbage collector to delete the pods
Jan 9 13:58:28.317: INFO: Deleting ReplicationController wrapped-volume-race-a038a03c-36ad-4776-8341-7870bbf20bad took: 22.443196ms
Jan 9 13:58:28.718: INFO: Terminating ReplicationController wrapped-volume-race-a038a03c-36ad-4776-8341-7870bbf20bad pods took: 400.719161ms
STEP: Creating RC which spawns configmap-volume pods
Jan 9 13:59:17.754: INFO: Pod name wrapped-volume-race-58e7ea49-9a10-41b5-980e-511ed4b11ad9: Found 0 pods out of 5
Jan 9 13:59:22.851: INFO: Pod name wrapped-volume-race-58e7ea49-9a10-41b5-980e-511ed4b11ad9: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-58e7ea49-9a10-41b5-980e-511ed4b11ad9 in namespace emptydir-wrapper-6055, will wait for the garbage collector to delete the pods
Jan 9 13:59:54.972: INFO: Deleting ReplicationController wrapped-volume-race-58e7ea49-9a10-41b5-980e-511ed4b11ad9 took: 17.725775ms
Jan 9 13:59:55.473: INFO: Terminating ReplicationController wrapped-volume-race-58e7ea49-9a10-41b5-980e-511ed4b11ad9 pods took: 500.994531ms
STEP: Creating RC which spawns configmap-volume pods
Jan 9 14:00:40.774: INFO: Pod name wrapped-volume-race-41f9ac34-f2e0-49ee-af67-70e602a0eb2a: Found 0 pods out of 5
Jan 9 14:00:45.799: INFO: Pod name wrapped-volume-race-41f9ac34-f2e0-49ee-af67-70e602a0eb2a: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-41f9ac34-f2e0-49ee-af67-70e602a0eb2a in namespace emptydir-wrapper-6055, will wait for the garbage collector to delete the pods
Jan 9 14:01:19.983: INFO: Deleting ReplicationController wrapped-volume-race-41f9ac34-f2e0-49ee-af67-70e602a0eb2a took: 16.101688ms
Jan 9 14:01:20.283: INFO: Terminating ReplicationController wrapped-volume-race-41f9ac34-f2e0-49ee-af67-70e602a0eb2a pods took: 300.849408ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 14:02:08.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6055" for this suite.
Jan 9 14:02:18.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 14:02:19.120: INFO: namespace emptydir-wrapper-6055 deletion completed in 10.370940365s
• [SLOW TEST:263.031 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 14:02:19.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 9 14:02:19.204: INFO: Waiting up to 5m0s for pod "downward-api-722937cf-b48f-446a-bb5e-d78c8771927d" in namespace "downward-api-4059" to be "success or failure"
Jan 9 14:02:19.211: INFO: Pod "downward-api-722937cf-b48f-446a-bb5e-d78c8771927d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.919066ms
Jan 9 14:02:21.222: INFO: Pod "downward-api-722937cf-b48f-446a-bb5e-d78c8771927d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017991687s
Jan 9 14:02:23.276: INFO: Pod "downward-api-722937cf-b48f-446a-bb5e-d78c8771927d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071684127s
Jan 9 14:02:25.282: INFO: Pod "downward-api-722937cf-b48f-446a-bb5e-d78c8771927d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077999299s
Jan 9 14:02:27.291: INFO: Pod "downward-api-722937cf-b48f-446a-bb5e-d78c8771927d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08654984s
Jan 9 14:02:29.361: INFO: Pod "downward-api-722937cf-b48f-446a-bb5e-d78c8771927d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.156459814s
Jan 9 14:02:31.374: INFO: Pod "downward-api-722937cf-b48f-446a-bb5e-d78c8771927d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.169364026s
STEP: Saw pod success
Jan 9 14:02:31.374: INFO: Pod "downward-api-722937cf-b48f-446a-bb5e-d78c8771927d" satisfied condition "success or failure"
Jan 9 14:02:31.381: INFO: Trying to get logs from node iruya-node pod downward-api-722937cf-b48f-446a-bb5e-d78c8771927d container dapi-container: 
STEP: delete the pod
Jan 9 14:02:31.652: INFO: Waiting for pod downward-api-722937cf-b48f-446a-bb5e-d78c8771927d to disappear
Jan 9 14:02:31.660: INFO: Pod downward-api-722937cf-b48f-446a-bb5e-d78c8771927d no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 14:02:31.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4059" for this suite.
Jan 9 14:02:37.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 14:02:37.844: INFO: namespace downward-api-4059 deletion completed in 6.176049988s
• [SLOW TEST:18.724 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 14:02:37.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 14:03:28.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1574" for this suite.
Jan 9 14:03:34.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 14:03:34.281: INFO: namespace container-runtime-1574 deletion completed in 6.190603143s
• [SLOW TEST:56.436 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
when starting a container that exits
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 14:03:34.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 9 14:03:34.434: INFO: Waiting up to 5m0s for pod "pod-a5af37bd-e74e-4d79-b13f-e2a112b0d092" in namespace "emptydir-5568" to be "success or failure"
Jan 9 14:03:34.442: INFO: Pod "pod-a5af37bd-e74e-4d79-b13f-e2a112b0d092": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011366ms
Jan 9 14:03:36.460: INFO: Pod "pod-a5af37bd-e74e-4d79-b13f-e2a112b0d092": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026525069s
Jan 9 14:03:38.476: INFO: Pod "pod-a5af37bd-e74e-4d79-b13f-e2a112b0d092": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042063889s
Jan 9 14:03:40.488: INFO: Pod "pod-a5af37bd-e74e-4d79-b13f-e2a112b0d092": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054383859s
Jan 9 14:03:42.504: INFO: Pod "pod-a5af37bd-e74e-4d79-b13f-e2a112b0d092": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07071182s
Jan 9 14:03:44.520: INFO: Pod "pod-a5af37bd-e74e-4d79-b13f-e2a112b0d092": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086336636s
STEP: Saw pod success
Jan 9 14:03:44.520: INFO: Pod "pod-a5af37bd-e74e-4d79-b13f-e2a112b0d092" satisfied condition "success or failure"
Jan 9 14:03:44.528: INFO: Trying to get logs from node iruya-node pod pod-a5af37bd-e74e-4d79-b13f-e2a112b0d092 container test-container: 
STEP: delete the pod
Jan 9 14:03:44.591: INFO: Waiting for pod pod-a5af37bd-e74e-4d79-b13f-e2a112b0d092 to disappear
Jan 9 14:03:44.597: INFO: Pod pod-a5af37bd-e74e-4d79-b13f-e2a112b0d092 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 14:03:44.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5568" for this suite.
Jan 9 14:03:50.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 14:03:50.753: INFO: namespace emptydir-5568 deletion completed in 6.149066524s
• [SLOW TEST:16.472 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 14:03:50.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-zwmm
STEP: Creating a pod to test atomic-volume-subpath
Jan 9 14:03:50.946: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-zwmm" in namespace "subpath-7158" to be "success or failure"
Jan 9 14:03:51.070: INFO: Pod "pod-subpath-test-configmap-zwmm": Phase="Pending", Reason="", readiness=false. Elapsed: 124.047984ms
Jan 9 14:03:53.101: INFO: Pod "pod-subpath-test-configmap-zwmm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155497752s
Jan 9 14:03:55.111: INFO: Pod "pod-subpath-test-configmap-zwmm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16508742s
Jan 9 14:03:57.127: INFO: Pod "pod-subpath-test-configmap-zwmm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.181413365s
Jan 9 14:03:59.136: INFO: Pod "pod-subpath-test-configmap-zwmm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.190701171s
Jan 9 14:04:01.147: INFO: Pod "pod-subpath-test-configmap-zwmm": Phase="Running", Reason="", readiness=true. Elapsed: 10.201042309s
Jan 9 14:04:03.164: INFO: Pod "pod-subpath-test-configmap-zwmm": Phase="Running", Reason="", readiness=true. Elapsed: 12.218805846s
Jan 9 14:04:05.177: INFO: Pod "pod-subpath-test-configmap-zwmm": Phase="Running", Reason="", readiness=true. Elapsed: 14.231800661s
Jan 9 14:04:07.186: INFO: Pod "pod-subpath-test-configmap-zwmm": Phase="Running", Reason="", readiness=true. Elapsed: 16.24071389s
Jan 9 14:04:09.197: INFO: Pod "pod-subpath-test-configmap-zwmm": Phase="Running", Reason="", readiness=true. Elapsed: 18.251247809s
Jan 9 14:04:11.205: INFO: Pod "pod-subpath-test-configmap-zwmm": Phase="Running", Reason="", readiness=true. Elapsed: 20.259429853s
Jan 9 14:04:13.218: INFO: Pod "pod-subpath-test-configmap-zwmm": Phase="Running", Reason="", readiness=true. Elapsed: 22.272537664s
Jan 9 14:04:15.226: INFO: Pod "pod-subpath-test-configmap-zwmm": Phase="Running", Reason="", readiness=true. Elapsed: 24.280254453s
Jan 9 14:04:17.234: INFO: Pod "pod-subpath-test-configmap-zwmm": Phase="Running", Reason="", readiness=true. Elapsed: 26.288626031s
Jan 9 14:04:19.247: INFO: Pod "pod-subpath-test-configmap-zwmm": Phase="Running", Reason="", readiness=true. Elapsed: 28.301167044s
Jan 9 14:04:21.252: INFO: Pod "pod-subpath-test-configmap-zwmm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.306741107s
STEP: Saw pod success
Jan 9 14:04:21.252: INFO: Pod "pod-subpath-test-configmap-zwmm" satisfied condition "success or failure"
Jan 9 14:04:21.257: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-zwmm container test-container-subpath-configmap-zwmm: 
STEP: delete the pod
Jan 9 14:04:21.322: INFO: Waiting for pod pod-subpath-test-configmap-zwmm to disappear
Jan 9 14:04:21.370: INFO: Pod pod-subpath-test-configmap-zwmm no longer exists
STEP: Deleting pod pod-subpath-test-configmap-zwmm
Jan 9 14:04:21.370: INFO: Deleting pod "pod-subpath-test-configmap-zwmm" in namespace "subpath-7158"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 14:04:21.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7158" for this suite.
Jan 9 14:04:27.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 14:04:27.529: INFO: namespace subpath-7158 deletion completed in 6.149117922s
• [SLOW TEST:36.775 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 14:04:27.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 9 14:04:47.788: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 9 14:04:47.821: INFO: Pod pod-with-prestop-http-hook still exists
Jan 9 14:04:49.822: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 9 14:04:49.840: INFO: Pod pod-with-prestop-http-hook still exists
Jan 9 14:04:51.822: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 9 14:04:51.832: INFO: Pod pod-with-prestop-http-hook still exists
Jan 9 14:04:53.822: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 9 14:04:53.833: INFO: Pod pod-with-prestop-http-hook still exists
Jan 9 14:04:55.822: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 9 14:04:55.831: INFO: Pod pod-with-prestop-http-hook still exists
Jan 9 14:04:57.822: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 9 14:04:57.833: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 14:04:57.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3975" for this suite.
Jan 9 14:05:19.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 14:05:20.007: INFO: namespace container-lifecycle-hook-3975 deletion completed in 22.121198606s
• [SLOW TEST:52.478 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 14:05:20.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 9 14:05:20.075: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e2f70d48-f064-4fea-be02-4b4e58902457" in namespace "projected-2827" to be "success or failure"
Jan 9 14:05:20.082: INFO: Pod "downwardapi-volume-e2f70d48-f064-4fea-be02-4b4e58902457": Phase="Pending", Reason="", readiness=false. Elapsed: 6.57064ms
Jan 9 14:05:22.092: INFO: Pod "downwardapi-volume-e2f70d48-f064-4fea-be02-4b4e58902457": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016421597s
Jan 9 14:05:24.100: INFO: Pod "downwardapi-volume-e2f70d48-f064-4fea-be02-4b4e58902457": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024803514s
Jan 9 14:05:26.111: INFO: Pod "downwardapi-volume-e2f70d48-f064-4fea-be02-4b4e58902457": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035710716s
Jan 9 14:05:28.121: INFO: Pod "downwardapi-volume-e2f70d48-f064-4fea-be02-4b4e58902457": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04620863s
Jan 9 14:05:30.128: INFO: Pod "downwardapi-volume-e2f70d48-f064-4fea-be02-4b4e58902457": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053153089s
STEP: Saw pod success
Jan 9 14:05:30.128: INFO: Pod "downwardapi-volume-e2f70d48-f064-4fea-be02-4b4e58902457" satisfied condition "success or failure"
Jan 9 14:05:30.132: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e2f70d48-f064-4fea-be02-4b4e58902457 container client-container: 
STEP: delete the pod
Jan 9 14:05:30.306: INFO: Waiting for pod downwardapi-volume-e2f70d48-f064-4fea-be02-4b4e58902457 to disappear
Jan 9 14:05:30.312: INFO: Pod downwardapi-volume-e2f70d48-f064-4fea-be02-4b4e58902457 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 14:05:30.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2827" for this suite.
Jan 9 14:05:36.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 14:05:36.462: INFO: namespace projected-2827 deletion completed in 6.139589584s
• [SLOW TEST:16.455 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 14:05:36.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Jan 9 14:05:36.592: INFO: Waiting up to 5m0s for pod "client-containers-db4dcbb4-5359-4e8d-b03d-b583d0346e8d" in namespace "containers-1290" to be "success or failure"
Jan 9 14:05:36.608: INFO: Pod "client-containers-db4dcbb4-5359-4e8d-b03d-b583d0346e8d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.644788ms
Jan 9 14:05:38.629: INFO: Pod "client-containers-db4dcbb4-5359-4e8d-b03d-b583d0346e8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036032997s
Jan 9 14:05:40.637: INFO: Pod "client-containers-db4dcbb4-5359-4e8d-b03d-b583d0346e8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044520713s
Jan 9 14:05:42.665: INFO: Pod "client-containers-db4dcbb4-5359-4e8d-b03d-b583d0346e8d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072655985s
Jan 9 14:05:44.904: INFO: Pod "client-containers-db4dcbb4-5359-4e8d-b03d-b583d0346e8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.311571238s
STEP: Saw pod success
Jan 9 14:05:44.904: INFO: Pod "client-containers-db4dcbb4-5359-4e8d-b03d-b583d0346e8d" satisfied condition "success or failure"
Jan 9 14:05:44.945: INFO: Trying to get logs from node iruya-node pod client-containers-db4dcbb4-5359-4e8d-b03d-b583d0346e8d container test-container: 
STEP: delete the pod
Jan 9 14:05:45.048: INFO: Waiting for pod client-containers-db4dcbb4-5359-4e8d-b03d-b583d0346e8d to disappear
Jan 9 14:05:45.094: INFO: Pod client-containers-db4dcbb4-5359-4e8d-b03d-b583d0346e8d no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 14:05:45.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1290" for this suite.
Jan 9 14:05:51.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 14:05:51.284: INFO: namespace containers-1290 deletion completed in 6.185391896s
• [SLOW TEST:14.821 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 14:05:51.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-068ad9b4-5794-4d66-be0e-ef04360715b7
STEP: Creating a pod to test consume secrets
Jan 9 14:05:51.420: INFO: Waiting up to 5m0s for pod "pod-secrets-e7911306-8a83-4c42-82af-b54344df2548" in namespace "secrets-7400" to be "success or failure"
Jan 9 14:05:51.458: INFO: Pod "pod-secrets-e7911306-8a83-4c42-82af-b54344df2548": Phase="Pending", Reason="", readiness=false. Elapsed: 38.196158ms
Jan 9 14:05:53.468: INFO: Pod "pod-secrets-e7911306-8a83-4c42-82af-b54344df2548": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048186148s
Jan 9 14:05:55.475: INFO: Pod "pod-secrets-e7911306-8a83-4c42-82af-b54344df2548": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055371433s
Jan 9 14:05:57.486: INFO: Pod "pod-secrets-e7911306-8a83-4c42-82af-b54344df2548": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065968653s
Jan 9 14:05:59.511: INFO: Pod "pod-secrets-e7911306-8a83-4c42-82af-b54344df2548": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.090635455s
STEP: Saw pod success
Jan 9 14:05:59.511: INFO: Pod "pod-secrets-e7911306-8a83-4c42-82af-b54344df2548" satisfied condition "success or failure"
Jan 9 14:05:59.524: INFO: Trying to get logs from node iruya-node pod pod-secrets-e7911306-8a83-4c42-82af-b54344df2548 container secret-env-test: 
STEP: delete the pod
Jan 9 14:05:59.657: INFO: Waiting for pod pod-secrets-e7911306-8a83-4c42-82af-b54344df2548 to disappear
Jan 9 14:05:59.660: INFO: Pod pod-secrets-e7911306-8a83-4c42-82af-b54344df2548 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 14:05:59.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7400" for this suite.
Jan 9 14:06:05.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:06:05.826: INFO: namespace secrets-7400 deletion completed in 6.160754462s • [SLOW TEST:14.542 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:06:05.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jan 9 14:06:14.022: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-e8b9907a-20ff-4f39-84db-76f0d3110779,GenerateName:,Namespace:events-6932,SelfLink:/api/v1/namespaces/events-6932/pods/send-events-e8b9907a-20ff-4f39-84db-76f0d3110779,UID:5fa2c9cf-da4b-455f-b5bf-566952c33605,ResourceVersion:19908859,Generation:0,CreationTimestamp:2020-01-09 14:06:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 
975123074,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xbrz5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xbrz5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-xbrz5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0005e0c40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0005e0c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:06:06 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:06:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:06:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:06:05 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-09 14:06:06 +0000 UTC,ContainerStatuses:[{p {nil 
ContainerStateRunning{StartedAt:2020-01-09 14:06:12 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://667abef5a04b7943c92b9ec5af05c7d5649022134fc8e31f14b3ed783eafd635}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Jan 9 14:06:16.684: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jan 9 14:06:18.701: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:06:18.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6932" for this suite. Jan 9 14:06:56.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:06:57.013: INFO: namespace events-6932 deletion completed in 38.201456853s • [SLOW TEST:51.187 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:06:57.013: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-36ec101b-5ee3-487b-a1ae-473a5bb65e0d in namespace container-probe-2155 Jan 9 14:07:05.179: INFO: Started pod test-webserver-36ec101b-5ee3-487b-a1ae-473a5bb65e0d in namespace container-probe-2155 STEP: checking the pod's current state and verifying that restartCount is present Jan 9 14:07:05.195: INFO: Initial restart count of pod test-webserver-36ec101b-5ee3-487b-a1ae-473a5bb65e0d is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:11:05.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2155" for this suite. 
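The probe spec above starts a web-server pod with an HTTP liveness probe and then watches for roughly four minutes to confirm restartCount stays at 0. A hedged sketch of such a probe stanza; the threshold values below are illustrative, not the ones this test configures:

```python
def http_liveness_probe(path: str = "/", port: int = 80) -> dict:
    """Sketch of an httpGet liveness probe; timing values are illustrative."""
    return {
        "httpGet": {"path": path, "port": port},
        "initialDelaySeconds": 15,  # give the server time to come up before probing
        "periodSeconds": 10,        # probe every 10 seconds
        "failureThreshold": 3,      # restart only after 3 consecutive failures
    }
```

A probe that keeps succeeding never trips the failure threshold, which is exactly what "should *not* be restarted" asserts.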
Jan 9 14:11:11.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:11:12.019: INFO: namespace container-probe-2155 deletion completed in 6.251779772s • [SLOW TEST:255.006 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:11:12.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 9 14:11:12.201: INFO: Waiting up to 5m0s for pod "pod-dbedee4a-06c9-47e3-83a7-6ca4352f6289" in namespace "emptydir-4005" to be "success or failure" Jan 9 14:11:12.218: INFO: Pod "pod-dbedee4a-06c9-47e3-83a7-6ca4352f6289": Phase="Pending", Reason="", readiness=false. Elapsed: 16.091381ms Jan 9 14:11:14.225: INFO: Pod "pod-dbedee4a-06c9-47e3-83a7-6ca4352f6289": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0230156s Jan 9 14:11:16.252: INFO: Pod "pod-dbedee4a-06c9-47e3-83a7-6ca4352f6289": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.050436s Jan 9 14:11:18.260: INFO: Pod "pod-dbedee4a-06c9-47e3-83a7-6ca4352f6289": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058888125s Jan 9 14:11:20.280: INFO: Pod "pod-dbedee4a-06c9-47e3-83a7-6ca4352f6289": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078068734s STEP: Saw pod success Jan 9 14:11:20.280: INFO: Pod "pod-dbedee4a-06c9-47e3-83a7-6ca4352f6289" satisfied condition "success or failure" Jan 9 14:11:20.290: INFO: Trying to get logs from node iruya-node pod pod-dbedee4a-06c9-47e3-83a7-6ca4352f6289 container test-container: STEP: delete the pod Jan 9 14:11:20.445: INFO: Waiting for pod pod-dbedee4a-06c9-47e3-83a7-6ca4352f6289 to disappear Jan 9 14:11:20.456: INFO: Pod pod-dbedee4a-06c9-47e3-83a7-6ca4352f6289 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:11:20.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4005" for this suite. 
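The emptyDir spec above checks that a tmpfs-backed volume is usable by a non-root user with 0777 permissions. A sketch of the relevant manifest pieces, assuming illustrative names and a simplified command (the real test container applies the mode itself via its arguments):

```python
def emptydir_tmpfs_pod(run_as_user: int = 1001) -> dict:
    """Sketch of a pod exercising a tmpfs emptyDir as a non-root user (illustrative)."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-emptydir-example"},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "test-container",
                "image": "busybox",
                # write into the volume and show the resulting permissions
                "command": ["sh", "-c", "touch /mnt/test/f && ls -l /mnt/test"],
                "securityContext": {"runAsUser": run_as_user},  # non-root UID
                "volumeMounts": [{"name": "test-volume", "mountPath": "/mnt/test"}],
            }],
            "volumes": [{
                "name": "test-volume",
                "emptyDir": {"medium": "Memory"},  # medium=Memory backs the volume with tmpfs
            }],
        },
    }
```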
Jan 9 14:11:26.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:11:26.606: INFO: namespace emptydir-4005 deletion completed in 6.143643459s • [SLOW TEST:14.587 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:11:26.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-9e370959-6d79-4a29-999a-a3364c8f6d32 STEP: Creating a pod to test consume configMaps Jan 9 14:11:26.744: INFO: Waiting up to 5m0s for pod "pod-configmaps-f31b179f-bcf1-47d9-b36c-ca280b570b23" in namespace "configmap-6318" to be "success or failure" Jan 9 14:11:26.751: INFO: Pod "pod-configmaps-f31b179f-bcf1-47d9-b36c-ca280b570b23": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.949989ms Jan 9 14:11:28.762: INFO: Pod "pod-configmaps-f31b179f-bcf1-47d9-b36c-ca280b570b23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017867259s Jan 9 14:11:30.771: INFO: Pod "pod-configmaps-f31b179f-bcf1-47d9-b36c-ca280b570b23": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027667631s Jan 9 14:11:32.785: INFO: Pod "pod-configmaps-f31b179f-bcf1-47d9-b36c-ca280b570b23": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041142671s Jan 9 14:11:34.797: INFO: Pod "pod-configmaps-f31b179f-bcf1-47d9-b36c-ca280b570b23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052783542s STEP: Saw pod success Jan 9 14:11:34.797: INFO: Pod "pod-configmaps-f31b179f-bcf1-47d9-b36c-ca280b570b23" satisfied condition "success or failure" Jan 9 14:11:34.803: INFO: Trying to get logs from node iruya-node pod pod-configmaps-f31b179f-bcf1-47d9-b36c-ca280b570b23 container configmap-volume-test: STEP: delete the pod Jan 9 14:11:34.848: INFO: Waiting for pod pod-configmaps-f31b179f-bcf1-47d9-b36c-ca280b570b23 to disappear Jan 9 14:11:34.863: INFO: Pod pod-configmaps-f31b179f-bcf1-47d9-b36c-ca280b570b23 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:11:34.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6318" for this suite. 
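"Mappings and Item mode set" refers to a ConfigMap volume whose `items` list remaps a key to a different file path and pins a per-item file mode. A sketch of that volume stanza, with illustrative key, path, and mode values:

```python
def configmap_volume(cm_name: str) -> dict:
    """Sketch of a ConfigMap volume remapping a key and setting a per-item mode."""
    return {
        "name": "configmap-volume",
        "configMap": {
            "name": cm_name,
            # expose key "data-1" at a different path inside the mount,
            # with an explicit file mode on just this item
            "items": [{"key": "data-1", "path": "path/to/data-2", "mode": 0o400}],
        },
    }
```

The Secrets variant a few specs below exercises the same shape through `secret.items` instead of `configMap.items`.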
Jan 9 14:11:41.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:11:41.124: INFO: namespace configmap-6318 deletion completed in 6.255208114s • [SLOW TEST:14.517 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:11:41.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-688bb2e7-d8af-4abf-9168-00fa9fcef8d0 STEP: Creating a pod to test consume secrets Jan 9 14:11:41.197: INFO: Waiting up to 5m0s for pod "pod-secrets-1d2fd035-dba8-43bf-a7a7-99b94f874cc9" in namespace "secrets-3386" to be "success or failure" Jan 9 14:11:41.203: INFO: Pod "pod-secrets-1d2fd035-dba8-43bf-a7a7-99b94f874cc9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.740715ms Jan 9 14:11:43.211: INFO: Pod "pod-secrets-1d2fd035-dba8-43bf-a7a7-99b94f874cc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013627563s Jan 9 14:11:45.217: INFO: Pod "pod-secrets-1d2fd035-dba8-43bf-a7a7-99b94f874cc9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020401092s Jan 9 14:11:47.226: INFO: Pod "pod-secrets-1d2fd035-dba8-43bf-a7a7-99b94f874cc9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028456585s Jan 9 14:11:49.233: INFO: Pod "pod-secrets-1d2fd035-dba8-43bf-a7a7-99b94f874cc9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035578294s Jan 9 14:11:51.241: INFO: Pod "pod-secrets-1d2fd035-dba8-43bf-a7a7-99b94f874cc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.04403839s STEP: Saw pod success Jan 9 14:11:51.241: INFO: Pod "pod-secrets-1d2fd035-dba8-43bf-a7a7-99b94f874cc9" satisfied condition "success or failure" Jan 9 14:11:51.245: INFO: Trying to get logs from node iruya-node pod pod-secrets-1d2fd035-dba8-43bf-a7a7-99b94f874cc9 container secret-volume-test: STEP: delete the pod Jan 9 14:11:51.338: INFO: Waiting for pod pod-secrets-1d2fd035-dba8-43bf-a7a7-99b94f874cc9 to disappear Jan 9 14:11:51.391: INFO: Pod pod-secrets-1d2fd035-dba8-43bf-a7a7-99b94f874cc9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:11:51.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3386" for this suite. 
Jan 9 14:11:57.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:11:57.547: INFO: namespace secrets-3386 deletion completed in 6.148165432s • [SLOW TEST:16.423 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:11:57.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:12:06.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9600" for this suite. 
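The adoption check above hinges on label selection: an orphan pod whose labels satisfy a controller's selector gets adopted. A simplified sketch of the matching rule (real controllers also consult ownerReferences and a controllerRef, which this deliberately ignores):

```python
def selector_matches(selector: dict, pod_labels: dict) -> bool:
    """True when every selector key/value pair appears in the pod's labels."""
    return all(pod_labels.get(key) == value for key, value in selector.items())
```

Extra labels on the pod don't matter; only the selector's own key/value pairs must be present.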
Jan 9 14:12:28.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:12:28.936: INFO: namespace replication-controller-9600 deletion completed in 22.133025046s • [SLOW TEST:31.388 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:12:28.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
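The launch check that follows polls node by node until every schedulable node has one available daemon pod. A sketch of that per-node bookkeeping, simplified from the e2e framework's logic (function and parameter names are mine, not the framework's):

```python
def daemon_pod_status(ready_pods_by_node: dict, node_count: int):
    """Return (nodes_with_available_pods, rollout_done) for a DaemonSet check.

    ready_pods_by_node maps node name -> number of ready daemon pods on it.
    A node counts as available with >= 1 ready pod; a healthy rollout ends with
    exactly one pod per node.
    """
    available = sum(1 for count in ready_pods_by_node.values() if count >= 1)
    done = available == node_count and all(c == 1 for c in ready_pods_by_node.values())
    return available, done
```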
Jan 9 14:12:29.220: INFO: Number of nodes with available pods: 0
Jan 9 14:12:29.220: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:30.239: INFO: Number of nodes with available pods: 0
Jan 9 14:12:30.240: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:31.404: INFO: Number of nodes with available pods: 0
Jan 9 14:12:31.404: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:32.235: INFO: Number of nodes with available pods: 0
Jan 9 14:12:32.235: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:33.247: INFO: Number of nodes with available pods: 0
Jan 9 14:12:33.247: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:34.699: INFO: Number of nodes with available pods: 0
Jan 9 14:12:34.699: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:35.401: INFO: Number of nodes with available pods: 0
Jan 9 14:12:35.401: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:36.305: INFO: Number of nodes with available pods: 0
Jan 9 14:12:36.305: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:37.234: INFO: Number of nodes with available pods: 0
Jan 9 14:12:37.234: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:38.235: INFO: Number of nodes with available pods: 1
Jan 9 14:12:38.235: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:39.233: INFO: Number of nodes with available pods: 1
Jan 9 14:12:39.233: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:40.242: INFO: Number of nodes with available pods: 2
Jan 9 14:12:40.242: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 9 14:12:40.320: INFO: Number of nodes with available pods: 1
Jan 9 14:12:40.320: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:41.335: INFO: Number of nodes with available pods: 1
Jan 9 14:12:41.335: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:42.334: INFO: Number of nodes with available pods: 1
Jan 9 14:12:42.334: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:43.335: INFO: Number of nodes with available pods: 1
Jan 9 14:12:43.335: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:44.331: INFO: Number of nodes with available pods: 1
Jan 9 14:12:44.331: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:45.340: INFO: Number of nodes with available pods: 1
Jan 9 14:12:45.340: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:46.348: INFO: Number of nodes with available pods: 1
Jan 9 14:12:46.348: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:47.337: INFO: Number of nodes with available pods: 1
Jan 9 14:12:47.338: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:48.335: INFO: Number of nodes with available pods: 1
Jan 9 14:12:48.335: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:49.335: INFO: Number of nodes with available pods: 1
Jan 9 14:12:49.335: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:50.334: INFO: Number of nodes with available pods: 1
Jan 9 14:12:50.334: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:51.341: INFO: Number of nodes with available pods: 1
Jan 9 14:12:51.342: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:52.332: INFO: Number of nodes with available pods: 1
Jan 9 14:12:52.332: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:53.335: INFO: Number of nodes with available pods: 1
Jan 9 14:12:53.335: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:54.337: INFO: Number of nodes with available pods: 1
Jan 9 14:12:54.337: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:55.357: INFO: Number of nodes with available pods: 1
Jan 9 14:12:55.357: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:56.376: INFO: Number of nodes with available pods: 1
Jan 9 14:12:56.376: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:57.334: INFO: Number of nodes with available pods: 1
Jan 9 14:12:57.334: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:58.343: INFO: Number of nodes with available pods: 1
Jan 9 14:12:58.343: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:12:59.333: INFO: Number of nodes with available pods: 1
Jan 9 14:12:59.333: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:13:00.355: INFO: Number of nodes with available pods: 1
Jan 9 14:13:00.355: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:13:01.340: INFO: Number of nodes with available pods: 1
Jan 9 14:13:01.340: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:13:02.344: INFO: Number of nodes with available pods: 1
Jan 9 14:13:02.344: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:13:03.408: INFO: Number of nodes with available pods: 1
Jan 9 14:13:03.408: INFO: Node iruya-node is running more than one daemon pod
Jan 9 14:13:04.350: INFO: Number of nodes with available pods: 2
Jan 9 14:13:04.350: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6412, will wait for the garbage collector to delete the pods
Jan 9 14:13:04.433: INFO: Deleting DaemonSet.extensions daemon-set took: 23.327892ms
Jan 9 14:13:04.834: INFO:
Terminating DaemonSet.extensions daemon-set pods took: 401.229827ms Jan 9 14:13:11.343: INFO: Number of nodes with available pods: 0 Jan 9 14:13:11.343: INFO: Number of running nodes: 0, number of available pods: 0 Jan 9 14:13:11.349: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6412/daemonsets","resourceVersion":"19909613"},"items":null} Jan 9 14:13:11.354: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6412/pods","resourceVersion":"19909613"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:13:11.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6412" for this suite. Jan 9 14:13:17.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:13:17.503: INFO: namespace daemonsets-6412 deletion completed in 6.127508793s • [SLOW TEST:48.567 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:13:17.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a 
default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jan 9 14:13:17.561: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. Jan 9 14:13:18.069: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jan 9 14:13:20.477: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714175998, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714175998, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714175998, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714175997, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 9 14:13:22.487: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714175998, loc:(*time.Location)(0x7ea48a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714175998, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714175998, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714175997, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 9 14:13:24.490: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714175998, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714175998, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714175998, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714175997, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 9 14:13:26.495: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714175998, loc:(*time.Location)(0x7ea48a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714175998, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714175998, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714175997, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 9 14:13:28.493: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714175998, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714175998, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714175998, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714175997, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 9 14:13:34.345: INFO: Waited 3.839046562s for the sample-apiserver to be ready to handle requests. 
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:13:35.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7343" for this suite. Jan 9 14:13:41.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:13:41.547: INFO: namespace aggregator-7343 deletion completed in 6.349200938s • [SLOW TEST:24.044 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:13:41.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search 
dns-test-service.dns-7330.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7330.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7330.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7330.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7330.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7330.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7330.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7330.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7330.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7330.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7330.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7330.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7330.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 36.221.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.221.36_udp@PTR;check="$$(dig +tcp +noall +answer +search 36.221.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.221.36_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7330.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7330.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7330.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7330.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7330.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7330.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7330.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7330.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7330.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7330.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7330.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7330.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7330.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 36.221.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.221.36_udp@PTR;check="$$(dig +tcp +noall +answer +search 36.221.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.221.36_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 9 14:13:55.940: INFO: Unable to read wheezy_udp@dns-test-service.dns-7330.svc.cluster.local from pod dns-7330/dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef: the server could not find the requested resource (get pods dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef) Jan 9 14:13:55.950: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7330.svc.cluster.local from pod dns-7330/dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef: the server could not find the requested resource (get pods dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef) Jan 9 14:13:55.959: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7330.svc.cluster.local from pod dns-7330/dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef: the server could not find the requested resource (get pods dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef) Jan 9 14:13:55.966: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7330.svc.cluster.local from pod dns-7330/dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef: the server could not find the requested resource (get pods dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef) Jan 9 14:13:55.973: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-7330.svc.cluster.local from pod dns-7330/dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef: the server could not find the requested resource (get pods dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef) Jan 9 14:13:55.978: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-7330.svc.cluster.local from pod dns-7330/dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef: the server could not find the requested resource (get pods dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef) Jan 9 14:13:55.982: INFO: Unable to read wheezy_udp@PodARecord from pod 
dns-7330/dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef: the server could not find the requested resource (get pods dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef) Jan 9 14:13:55.989: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7330/dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef: the server could not find the requested resource (get pods dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef) Jan 9 14:13:55.994: INFO: Unable to read 10.102.221.36_udp@PTR from pod dns-7330/dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef: the server could not find the requested resource (get pods dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef) Jan 9 14:13:56.004: INFO: Unable to read 10.102.221.36_tcp@PTR from pod dns-7330/dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef: the server could not find the requested resource (get pods dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef) Jan 9 14:13:56.010: INFO: Unable to read jessie_udp@dns-test-service.dns-7330.svc.cluster.local from pod dns-7330/dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef: the server could not find the requested resource (get pods dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef) Jan 9 14:13:56.014: INFO: Unable to read jessie_tcp@dns-test-service.dns-7330.svc.cluster.local from pod dns-7330/dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef: the server could not find the requested resource (get pods dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef) Jan 9 14:13:56.018: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7330.svc.cluster.local from pod dns-7330/dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef: the server could not find the requested resource (get pods dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef) Jan 9 14:13:56.024: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7330.svc.cluster.local from pod dns-7330/dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef: the server could not find the requested resource (get pods dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef) Jan 9 14:13:56.027: INFO: Unable to read 
jessie_udp@_http._tcp.test-service-2.dns-7330.svc.cluster.local from pod dns-7330/dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef: the server could not find the requested resource (get pods dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef) Jan 9 14:13:56.031: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-7330.svc.cluster.local from pod dns-7330/dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef: the server could not find the requested resource (get pods dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef) Jan 9 14:13:56.035: INFO: Unable to read jessie_udp@PodARecord from pod dns-7330/dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef: the server could not find the requested resource (get pods dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef) Jan 9 14:13:56.040: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7330/dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef: the server could not find the requested resource (get pods dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef) Jan 9 14:13:56.045: INFO: Unable to read 10.102.221.36_udp@PTR from pod dns-7330/dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef: the server could not find the requested resource (get pods dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef) Jan 9 14:13:56.049: INFO: Unable to read 10.102.221.36_tcp@PTR from pod dns-7330/dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef: the server could not find the requested resource (get pods dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef) Jan 9 14:13:56.049: INFO: Lookups using dns-7330/dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef failed for: [wheezy_udp@dns-test-service.dns-7330.svc.cluster.local wheezy_tcp@dns-test-service.dns-7330.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7330.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7330.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-7330.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-7330.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.102.221.36_udp@PTR 
10.102.221.36_tcp@PTR jessie_udp@dns-test-service.dns-7330.svc.cluster.local jessie_tcp@dns-test-service.dns-7330.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7330.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7330.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-7330.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-7330.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.102.221.36_udp@PTR 10.102.221.36_tcp@PTR] Jan 9 14:14:01.276: INFO: DNS probes using dns-7330/dns-test-0aeb86cb-3175-44e4-a231-9fd405d259ef succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:14:01.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7330" for this suite. Jan 9 14:14:09.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:14:09.758: INFO: namespace dns-7330 deletion completed in 8.235851537s • [SLOW TEST:28.210 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:14:09.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 9 14:14:10.123: INFO: Waiting up to 5m0s for pod "downwardapi-volume-01224b4a-a4b9-44dd-8311-f70e4508752c" in namespace "projected-3169" to be "success or failure" Jan 9 14:14:10.133: INFO: Pod "downwardapi-volume-01224b4a-a4b9-44dd-8311-f70e4508752c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.181354ms Jan 9 14:14:12.195: INFO: Pod "downwardapi-volume-01224b4a-a4b9-44dd-8311-f70e4508752c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072019444s Jan 9 14:14:14.205: INFO: Pod "downwardapi-volume-01224b4a-a4b9-44dd-8311-f70e4508752c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081514335s Jan 9 14:14:16.221: INFO: Pod "downwardapi-volume-01224b4a-a4b9-44dd-8311-f70e4508752c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097187494s Jan 9 14:14:18.233: INFO: Pod "downwardapi-volume-01224b4a-a4b9-44dd-8311-f70e4508752c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.109275981s STEP: Saw pod success Jan 9 14:14:18.233: INFO: Pod "downwardapi-volume-01224b4a-a4b9-44dd-8311-f70e4508752c" satisfied condition "success or failure" Jan 9 14:14:18.237: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-01224b4a-a4b9-44dd-8311-f70e4508752c container client-container: STEP: delete the pod Jan 9 14:14:18.311: INFO: Waiting for pod downwardapi-volume-01224b4a-a4b9-44dd-8311-f70e4508752c to disappear Jan 9 14:14:18.322: INFO: Pod downwardapi-volume-01224b4a-a4b9-44dd-8311-f70e4508752c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:14:18.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3169" for this suite. Jan 9 14:14:24.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:14:24.452: INFO: namespace projected-3169 deletion completed in 6.119329467s • [SLOW TEST:14.694 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:14:24.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default 
service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-edc11326-ac56-4d4d-9734-e43314bd00a3 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:14:24.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9919" for this suite. Jan 9 14:14:30.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:14:30.931: INFO: namespace secrets-9919 deletion completed in 6.187314777s • [SLOW TEST:6.478 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:14:30.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 9 14:14:39.605: INFO: Successfully updated pod "pod-update-activedeadlineseconds-7788f579-f835-49ee-ba6b-2243f2d98336" Jan 9 14:14:39.605: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-7788f579-f835-49ee-ba6b-2243f2d98336" in namespace "pods-4903" to be "terminated due to deadline exceeded" Jan 9 14:14:39.612: INFO: Pod "pod-update-activedeadlineseconds-7788f579-f835-49ee-ba6b-2243f2d98336": Phase="Running", Reason="", readiness=true. Elapsed: 6.941827ms Jan 9 14:14:41.802: INFO: Pod "pod-update-activedeadlineseconds-7788f579-f835-49ee-ba6b-2243f2d98336": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.196546031s Jan 9 14:14:41.802: INFO: Pod "pod-update-activedeadlineseconds-7788f579-f835-49ee-ba6b-2243f2d98336" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:14:41.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4903" for this suite. 
Jan 9 14:14:47.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:14:47.977: INFO: namespace pods-4903 deletion completed in 6.165807059s • [SLOW TEST:17.046 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:14:47.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jan 9 14:14:48.138: INFO: Waiting up to 5m0s for pod "downward-api-516b1973-0656-4c9a-a8c1-418896041ed8" in namespace "downward-api-1991" to be "success or failure" Jan 9 14:14:48.147: INFO: Pod "downward-api-516b1973-0656-4c9a-a8c1-418896041ed8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.817688ms Jan 9 14:14:50.155: INFO: Pod "downward-api-516b1973-0656-4c9a-a8c1-418896041ed8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016724794s Jan 9 14:14:52.162: INFO: Pod "downward-api-516b1973-0656-4c9a-a8c1-418896041ed8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.023833858s Jan 9 14:14:54.172: INFO: Pod "downward-api-516b1973-0656-4c9a-a8c1-418896041ed8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033741806s Jan 9 14:14:56.182: INFO: Pod "downward-api-516b1973-0656-4c9a-a8c1-418896041ed8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044287086s Jan 9 14:14:58.190: INFO: Pod "downward-api-516b1973-0656-4c9a-a8c1-418896041ed8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.052423804s STEP: Saw pod success Jan 9 14:14:58.190: INFO: Pod "downward-api-516b1973-0656-4c9a-a8c1-418896041ed8" satisfied condition "success or failure" Jan 9 14:14:58.195: INFO: Trying to get logs from node iruya-node pod downward-api-516b1973-0656-4c9a-a8c1-418896041ed8 container dapi-container: STEP: delete the pod Jan 9 14:14:58.253: INFO: Waiting for pod downward-api-516b1973-0656-4c9a-a8c1-418896041ed8 to disappear Jan 9 14:14:58.257: INFO: Pod downward-api-516b1973-0656-4c9a-a8c1-418896041ed8 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:14:58.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1991" for this suite. 
Jan 9 14:15:04.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:15:04.438: INFO: namespace downward-api-1991 deletion completed in 6.174037267s • [SLOW TEST:16.460 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:15:04.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jan 9 14:15:04.521: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:15:19.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7565" for this suite. 
Jan 9 14:15:25.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:15:25.808: INFO: namespace init-container-7565 deletion completed in 6.220141406s • [SLOW TEST:21.369 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:15:25.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Jan 9 14:15:25.956: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:15:26.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6478" for this suite. 
Jan 9 14:15:32.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:15:32.302: INFO: namespace kubectl-6478 deletion completed in 6.151553379s • [SLOW TEST:6.494 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:15:32.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 9 14:15:32.461: INFO: Creating deployment "nginx-deployment" Jan 9 14:15:32.472: INFO: Waiting for observed generation 1 Jan 9 14:15:35.379: INFO: Waiting for all required pods to come up Jan 9 14:15:35.391: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Jan 9 14:16:02.034: INFO: Waiting for deployment "nginx-deployment" to complete Jan 9 14:16:02.045: INFO: Updating deployment "nginx-deployment" with a non-existent image Jan 9 
14:16:02.063: INFO: Updating deployment nginx-deployment Jan 9 14:16:02.063: INFO: Waiting for observed generation 2 Jan 9 14:16:05.432: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jan 9 14:16:05.499: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jan 9 14:16:05.512: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jan 9 14:16:05.728: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jan 9 14:16:05.728: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jan 9 14:16:05.733: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jan 9 14:16:05.743: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Jan 9 14:16:05.743: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Jan 9 14:16:05.758: INFO: Updating deployment nginx-deployment Jan 9 14:16:05.758: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Jan 9 14:16:07.024: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jan 9 14:16:07.076: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 9 14:16:12.381: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-6573,SelfLink:/apis/apps/v1/namespaces/deployment-6573/deployments/nginx-deployment,UID:b0c4be32-bf8d-4630-a522-db170e9c15e3,ResourceVersion:19910362,Generation:3,CreationTimestamp:2020-01-09 14:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:21,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-01-09 14:16:06 +0000 UTC 2020-01-09 14:16:06 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-09 14:16:08 +0000 UTC 2020-01-09 14:15:32 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Jan 9 14:16:15.318: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-6573,SelfLink:/apis/apps/v1/namespaces/deployment-6573/replicasets/nginx-deployment-55fb7cb77f,UID:c73edf97-2336-4677-b581-ec7775404ea2,ResourceVersion:19910359,Generation:3,CreationTimestamp:2020-01-09 14:16:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment b0c4be32-bf8d-4630-a522-db170e9c15e3 0xc00332f707 0xc00332f708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 9 14:16:15.318: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jan 9 14:16:15.318: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-6573,SelfLink:/apis/apps/v1/namespaces/deployment-6573/replicasets/nginx-deployment-7b8c6f4498,UID:bf0d0aef-9951-4d35-95f9-614d1b9c9c8e,ResourceVersion:19910370,Generation:3,CreationTimestamp:2020-01-09 14:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment b0c4be32-bf8d-4630-a522-db170e9c15e3 0xc00332f7d7 0xc00332f7d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jan 9 14:16:17.108: INFO: Pod "nginx-deployment-55fb7cb77f-524lw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-524lw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-55fb7cb77f-524lw,UID:69f83502-deac-4c95-9b17-b3654f0d5809,ResourceVersion:19910329,Generation:0,CreationTimestamp:2020-01-09 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c73edf97-2336-4677-b581-ec7775404ea2 0xc002698147 0xc002698148}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0026981c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026981e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.108: INFO: Pod "nginx-deployment-55fb7cb77f-6lw2g" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6lw2g,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-55fb7cb77f-6lw2g,UID:e086e71a-c187-4436-874c-3a24512437f9,ResourceVersion:19910291,Generation:0,CreationTimestamp:2020-01-09 14:16:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c73edf97-2336-4677-b581-ec7775404ea2 0xc002698267 0xc002698268}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026982d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026982f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:02 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-09 14:16:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.108: INFO: Pod "nginx-deployment-55fb7cb77f-72wnm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-72wnm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-55fb7cb77f-72wnm,UID:349c80b4-db1b-4e2a-b410-36a3b40cd8fa,ResourceVersion:19910298,Generation:0,CreationTimestamp:2020-01-09 14:16:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c73edf97-2336-4677-b581-ec7775404ea2 0xc0026983c7 0xc0026983c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002698440} {node.kubernetes.io/unreachable Exists NoExecute 0xc002698460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:02 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-09 14:16:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.109: INFO: Pod "nginx-deployment-55fb7cb77f-b6rfs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-b6rfs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-55fb7cb77f-b6rfs,UID:78d00bea-60f4-468b-a4fa-d5c205d76452,ResourceVersion:19910346,Generation:0,CreationTimestamp:2020-01-09 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c73edf97-2336-4677-b581-ec7775404ea2 0xc002698537 0xc002698538}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026985a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026985c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.109: INFO: Pod "nginx-deployment-55fb7cb77f-bqxkm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bqxkm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-55fb7cb77f-bqxkm,UID:2088801c-cb08-4a43-977f-d61577afde1a,ResourceVersion:19910279,Generation:0,CreationTimestamp:2020-01-09 14:16:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c73edf97-2336-4677-b581-ec7775404ea2 0xc002698647 0xc002698648}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0026986c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026986e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:02 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-09 14:16:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.109: INFO: Pod "nginx-deployment-55fb7cb77f-g22rb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-g22rb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-55fb7cb77f-g22rb,UID:5870bf00-3ee7-4d45-a921-26f42f59653c,ResourceVersion:19910353,Generation:0,CreationTimestamp:2020-01-09 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c73edf97-2336-4677-b581-ec7775404ea2 0xc0026987b7 0xc0026987b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002698820} {node.kubernetes.io/unreachable Exists NoExecute 0xc002698840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.109: INFO: Pod "nginx-deployment-55fb7cb77f-ggjz4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ggjz4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-55fb7cb77f-ggjz4,UID:5d6d1843-48fa-46e1-88cc-e53270b56f46,ResourceVersion:19910354,Generation:0,CreationTimestamp:2020-01-09 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c73edf97-2336-4677-b581-ec7775404ea2 0xc0026988d7 0xc0026988d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002698950} {node.kubernetes.io/unreachable Exists NoExecute 0xc002698970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.110: INFO: Pod "nginx-deployment-55fb7cb77f-kjm7d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kjm7d,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-55fb7cb77f-kjm7d,UID:14c19b19-5a13-4d65-8111-e900bd0c215f,ResourceVersion:19910347,Generation:0,CreationTimestamp:2020-01-09 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c73edf97-2336-4677-b581-ec7775404ea2 0xc0026989f7 0xc0026989f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002698a70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002698a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.110: INFO: Pod "nginx-deployment-55fb7cb77f-msz6b" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-msz6b,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-55fb7cb77f-msz6b,UID:fd1f2692-e0c9-4233-9945-b21f22bd7604,ResourceVersion:19910266,Generation:0,CreationTimestamp:2020-01-09 14:16:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c73edf97-2336-4677-b581-ec7775404ea2 0xc002698b17 0xc002698b18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002698b90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002698bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:02 +0000 UTC 
}],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-09 14:16:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.110: INFO: Pod "nginx-deployment-55fb7cb77f-ndhlt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ndhlt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-55fb7cb77f-ndhlt,UID:a45dc329-d408-4e13-9e6b-24ec97eb5c54,ResourceVersion:19910270,Generation:0,CreationTimestamp:2020-01-09 14:16:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c73edf97-2336-4677-b581-ec7775404ea2 0xc002698c87 0xc002698c88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002698cf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002698d10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:02 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-09 14:16:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.110: INFO: Pod "nginx-deployment-55fb7cb77f-qzpj6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qzpj6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-55fb7cb77f-qzpj6,UID:5943a603-fe41-44d2-9c60-addd3cb48312,ResourceVersion:19910355,Generation:0,CreationTimestamp:2020-01-09 14:16:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c73edf97-2336-4677-b581-ec7775404ea2 0xc002698e07 0xc002698e08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002698ea0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002698f10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-09 14:16:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.110: INFO: Pod "nginx-deployment-55fb7cb77f-rrdzk" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rrdzk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-55fb7cb77f-rrdzk,UID:16669ca3-3254-4a4e-969f-c602993f66cc,ResourceVersion:19910335,Generation:0,CreationTimestamp:2020-01-09 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c73edf97-2336-4677-b581-ec7775404ea2 0xc002699107 0xc002699108}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026991f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002699290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.111: INFO: Pod "nginx-deployment-55fb7cb77f-zl2c9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zl2c9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-55fb7cb77f-zl2c9,UID:290fcf95-7f8b-4970-b42c-28879992e379,ResourceVersion:19910351,Generation:0,CreationTimestamp:2020-01-09 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c73edf97-2336-4677-b581-ec7775404ea2 0xc002699327 
0xc002699328}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026993b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026993d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.111: INFO: Pod "nginx-deployment-7b8c6f4498-2876l" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2876l,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-7b8c6f4498-2876l,UID:91be916d-9b24-48e3-bdc0-325352a19ef2,ResourceVersion:19910224,Generation:0,CreationTimestamp:2020-01-09 14:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bf0d0aef-9951-4d35-95f9-614d1b9c9c8e 0xc002699457 0xc002699458}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002699540} {node.kubernetes.io/unreachable Exists NoExecute 0xc002699560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:15:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:15:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-09 14:15:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-09 14:15:59 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://85d048c2b2884dde1494e1768a743d5fdb9b1ad229ebafd39eaa48670bc20941}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.111: INFO: Pod "nginx-deployment-7b8c6f4498-49dnl" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-49dnl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-7b8c6f4498-49dnl,UID:3bc87029-f9fe-43c1-a2b5-f238418c8a5e,ResourceVersion:19910350,Generation:0,CreationTimestamp:2020-01-09 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bf0d0aef-9951-4d35-95f9-614d1b9c9c8e 0xc002699637 0xc002699638}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026996e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002699740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.111: INFO: Pod "nginx-deployment-7b8c6f4498-7dkkh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7dkkh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-7b8c6f4498-7dkkh,UID:df03dbf1-1666-4e74-ada8-3c286fb0b092,ResourceVersion:19910348,Generation:0,CreationTimestamp:2020-01-09 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bf0d0aef-9951-4d35-95f9-614d1b9c9c8e 0xc002699807 0xc002699808}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026998a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026998c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.112: INFO: Pod "nginx-deployment-7b8c6f4498-7kz8g" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7kz8g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-7b8c6f4498-7kz8g,UID:bddfe960-9d64-41c0-864a-d0014f495457,ResourceVersion:19910344,Generation:0,CreationTimestamp:2020-01-09 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bf0d0aef-9951-4d35-95f9-614d1b9c9c8e 0xc002699a07 0xc002699a08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002699a90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002699ab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.112: INFO: Pod "nginx-deployment-7b8c6f4498-bfnss" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bfnss,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-7b8c6f4498-bfnss,UID:7a781554-3133-4b39-ba11-ad519d99a378,ResourceVersion:19910368,Generation:0,CreationTimestamp:2020-01-09 14:16:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bf0d0aef-9951-4d35-95f9-614d1b9c9c8e 0xc002699b37 0xc002699b38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002699bb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002699bd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:06 +0000 UTC 
}],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-09 14:16:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.113: INFO: Pod "nginx-deployment-7b8c6f4498-d9sqw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-d9sqw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-7b8c6f4498-d9sqw,UID:aa2effd0-6011-44db-af37-a940dfce7db9,ResourceVersion:19910230,Generation:0,CreationTimestamp:2020-01-09 14:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bf0d0aef-9951-4d35-95f9-614d1b9c9c8e 0xc002699c97 0xc002699c98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002699d10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002699d30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:15:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:15:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-01-09 14:15:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-09 14:16:01 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a19f125db7527d4daab91c88de297d30714853e47460bf8ecbc43223dfa1e949}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.113: INFO: Pod "nginx-deployment-7b8c6f4498-dhrkz" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dhrkz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-7b8c6f4498-dhrkz,UID:3ef98921-c3f0-4085-8235-f758e03e1d7b,ResourceVersion:19910191,Generation:0,CreationTimestamp:2020-01-09 14:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bf0d0aef-9951-4d35-95f9-614d1b9c9c8e 0xc002699e27 0xc002699e28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002699e90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002699eb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:15:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:15:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:15:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:15:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-01-09 14:15:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-09 14:15:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4468f69ce0191d82468a613d0e667732e0fefd11de17480ab16de1821a69b8c2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.114: INFO: Pod "nginx-deployment-7b8c6f4498-fmf8z" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fmf8z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-7b8c6f4498-fmf8z,UID:c84ce2b7-b0f2-4b57-92be-4c4005ef4c07,ResourceVersion:19910334,Generation:0,CreationTimestamp:2020-01-09 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bf0d0aef-9951-4d35-95f9-614d1b9c9c8e 0xc002699f87 0xc002699f88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002699ff0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b48020}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.114: INFO: Pod "nginx-deployment-7b8c6f4498-g86ml" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-g86ml,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-7b8c6f4498-g86ml,UID:d98a02a4-0d27-46cc-bd67-96143a16b5b7,ResourceVersion:19910352,Generation:0,CreationTimestamp:2020-01-09 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bf0d0aef-9951-4d35-95f9-614d1b9c9c8e 0xc000b48167 
0xc000b48168}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b481f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b48210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.115: INFO: Pod "nginx-deployment-7b8c6f4498-hpggn" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hpggn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-7b8c6f4498-hpggn,UID:5f2397f0-355a-430d-8dc4-a5a011924ca3,ResourceVersion:19910379,Generation:0,CreationTimestamp:2020-01-09 14:16:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bf0d0aef-9951-4d35-95f9-614d1b9c9c8e 0xc000b48297 0xc000b48298}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b48310} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b48330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-09 14:16:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.115: INFO: Pod "nginx-deployment-7b8c6f4498-lx7sq" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lx7sq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-7b8c6f4498-lx7sq,UID:0a1075e6-237b-4825-be34-12ba7f6ac461,ResourceVersion:19910200,Generation:0,CreationTimestamp:2020-01-09 14:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bf0d0aef-9951-4d35-95f9-614d1b9c9c8e 0xc000b483f7 0xc000b483f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b48460} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b48480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:15:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:15:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:15:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:15:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-01-09 14:15:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-09 14:15:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d4a28478622bd04ea18e5c5e2483da312891cf9829cd5a68d8e839ed3e196245}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.115: INFO: Pod "nginx-deployment-7b8c6f4498-nhr8b" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nhr8b,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-7b8c6f4498-nhr8b,UID:1d87382f-f628-429e-882d-162d99739016,ResourceVersion:19910197,Generation:0,CreationTimestamp:2020-01-09 14:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bf0d0aef-9951-4d35-95f9-614d1b9c9c8e 0xc000b48567 0xc000b48568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b485e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b48600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:15:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:15:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:15:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:15:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-01-09 14:15:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-09 14:15:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://37c098884858b5d3a8b74255ffd95d2d9a88fb415fa57c7ea2d04aadced5fb1b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.115: INFO: Pod "nginx-deployment-7b8c6f4498-sbk8b" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sbk8b,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-7b8c6f4498-sbk8b,UID:8616210e-c0c6-47e6-bbaf-1905e4fa7b8d,ResourceVersion:19910323,Generation:0,CreationTimestamp:2020-01-09 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bf0d0aef-9951-4d35-95f9-614d1b9c9c8e 0xc000b486d7 0xc000b486d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b48750} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b48770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.116: INFO: Pod "nginx-deployment-7b8c6f4498-sdtmw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sdtmw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-7b8c6f4498-sdtmw,UID:f441f0a9-a18d-4f03-926d-e854e0a41934,ResourceVersion:19910376,Generation:0,CreationTimestamp:2020-01-09 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bf0d0aef-9951-4d35-95f9-614d1b9c9c8e 0xc000b487f7 0xc000b487f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b48870} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b48890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 
+0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-09 14:16:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.116: INFO: Pod "nginx-deployment-7b8c6f4498-svh5k" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-svh5k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-7b8c6f4498-svh5k,UID:92dd3042-9c56-4516-a39f-d046da5b9201,ResourceVersion:19910194,Generation:0,CreationTimestamp:2020-01-09 14:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bf0d0aef-9951-4d35-95f9-614d1b9c9c8e 0xc000b48957 0xc000b48958}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b489d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b489f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:15:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:15:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:15:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:15:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-01-09 14:15:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-09 14:15:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://41d9cc203f835e8822a1684668f7b9902d18f0645896b0b319a10c7156091159}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.117: INFO: Pod "nginx-deployment-7b8c6f4498-tlzsj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tlzsj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-7b8c6f4498-tlzsj,UID:f3795b54-5efa-46ed-a514-6eb0b9f11839,ResourceVersion:19910326,Generation:0,CreationTimestamp:2020-01-09 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bf0d0aef-9951-4d35-95f9-614d1b9c9c8e 0xc000b48ac7 0xc000b48ac8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b48b40} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b48b60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.117: INFO: Pod "nginx-deployment-7b8c6f4498-vsgcc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vsgcc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-7b8c6f4498-vsgcc,UID:4c55ff56-911f-4d6c-a261-af713e6d2f45,ResourceVersion:19910239,Generation:0,CreationTimestamp:2020-01-09 14:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bf0d0aef-9951-4d35-95f9-614d1b9c9c8e 0xc000b48bf7 0xc000b48bf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b48c70} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b48c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:15:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:15:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-01-09 14:15:32 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-01-09 14:16:01 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://cbcc092a6d4841acb7529590a01ccd89597f09ceda449fd96ec3c0315a4544a9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.118: INFO: Pod "nginx-deployment-7b8c6f4498-wnd86" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wnd86,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-7b8c6f4498-wnd86,UID:9b09c0eb-37b2-4c99-851c-baea31858c88,ResourceVersion:19910349,Generation:0,CreationTimestamp:2020-01-09 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bf0d0aef-9951-4d35-95f9-614d1b9c9c8e 0xc000b48d67 0xc000b48d68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b48de0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b48e00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.118: INFO: Pod "nginx-deployment-7b8c6f4498-xj4nw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xj4nw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-7b8c6f4498-xj4nw,UID:f774ab6a-4db0-48c6-bc2c-778a1ccf627e,ResourceVersion:19910227,Generation:0,CreationTimestamp:2020-01-09 14:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bf0d0aef-9951-4d35-95f9-614d1b9c9c8e 0xc000b48e87 0xc000b48e88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b48f00} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b48f20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:15:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:15:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.6,StartTime:2020-01-09 14:15:32 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-01-09 14:16:00 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://074962d9f6b7e8ac0a7be1cdcc05c2c2672d032323e2584b9c15cf653f6e7985}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:16:17.118: INFO: Pod "nginx-deployment-7b8c6f4498-zw7hs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zw7hs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6573,SelfLink:/api/v1/namespaces/deployment-6573/pods/nginx-deployment-7b8c6f4498-zw7hs,UID:f677de55-2f24-4b67-89bc-9ac46944bf57,ResourceVersion:19910367,Generation:0,CreationTimestamp:2020-01-09 14:16:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bf0d0aef-9951-4d35-95f9-614d1b9c9c8e 0xc000b49007 0xc000b49008}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j74k5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j74k5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-j74k5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b49070} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b49090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-09 14:16:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:16:17.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6573" for this suite. 
Jan 9 14:17:04.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:17:04.425: INFO: namespace deployment-6573 deletion completed in 46.027895258s • [SLOW TEST:92.122 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:17:04.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 9 14:17:04.628: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2c3d23e7-c99f-442a-8227-de5d01a1a89b" in namespace "downward-api-4620" to be "success or failure" Jan 9 14:17:04.696: INFO: Pod "downwardapi-volume-2c3d23e7-c99f-442a-8227-de5d01a1a89b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 67.543934ms Jan 9 14:17:06.712: INFO: Pod "downwardapi-volume-2c3d23e7-c99f-442a-8227-de5d01a1a89b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083942958s Jan 9 14:17:08.726: INFO: Pod "downwardapi-volume-2c3d23e7-c99f-442a-8227-de5d01a1a89b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097527882s Jan 9 14:17:10.732: INFO: Pod "downwardapi-volume-2c3d23e7-c99f-442a-8227-de5d01a1a89b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103393441s Jan 9 14:17:12.772: INFO: Pod "downwardapi-volume-2c3d23e7-c99f-442a-8227-de5d01a1a89b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.144040657s Jan 9 14:17:14.792: INFO: Pod "downwardapi-volume-2c3d23e7-c99f-442a-8227-de5d01a1a89b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.163237823s Jan 9 14:17:16.807: INFO: Pod "downwardapi-volume-2c3d23e7-c99f-442a-8227-de5d01a1a89b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.178931437s Jan 9 14:17:18.832: INFO: Pod "downwardapi-volume-2c3d23e7-c99f-442a-8227-de5d01a1a89b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.203338975s STEP: Saw pod success Jan 9 14:17:18.832: INFO: Pod "downwardapi-volume-2c3d23e7-c99f-442a-8227-de5d01a1a89b" satisfied condition "success or failure" Jan 9 14:17:18.839: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2c3d23e7-c99f-442a-8227-de5d01a1a89b container client-container: STEP: delete the pod Jan 9 14:17:18.977: INFO: Waiting for pod downwardapi-volume-2c3d23e7-c99f-442a-8227-de5d01a1a89b to disappear Jan 9 14:17:18.991: INFO: Pod downwardapi-volume-2c3d23e7-c99f-442a-8227-de5d01a1a89b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:17:18.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4620" for this suite. 
Jan 9 14:17:25.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:17:25.193: INFO: namespace downward-api-4620 deletion completed in 6.190550098s • [SLOW TEST:20.767 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:17:25.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 9 14:17:55.317: INFO: Container started at 2020-01-09 14:17:32 +0000 UTC, pod became ready at 2020-01-09 14:17:53 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:17:55.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"container-probe-4318" for this suite. Jan 9 14:18:17.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:18:17.482: INFO: namespace container-probe-4318 deletion completed in 22.1581802s • [SLOW TEST:52.289 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:18:17.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 9 14:18:17.619: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jan 9 14:18:22.632: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 9 14:18:26.661: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 9 14:18:26.823: INFO: 
Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-5809,SelfLink:/apis/apps/v1/namespaces/deployment-5809/deployments/test-cleanup-deployment,UID:ad6623db-8e3a-40b2-bd62-af8a9b65abd2,ResourceVersion:19910803,Generation:1,CreationTimestamp:2020-01-09 14:18:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Jan 9 14:18:26.835: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-5809,SelfLink:/apis/apps/v1/namespaces/deployment-5809/replicasets/test-cleanup-deployment-55bbcbc84c,UID:605528c5-3bdc-4775-96ca-a596d83c821e,ResourceVersion:19910805,Generation:1,CreationTimestamp:2020-01-09 14:18:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 
ad6623db-8e3a-40b2-bd62-af8a9b65abd2 0xc000cfa237 0xc000cfa238}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 9 14:18:26.835: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jan 9 14:18:26.835: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-5809,SelfLink:/apis/apps/v1/namespaces/deployment-5809/replicasets/test-cleanup-controller,UID:3baefb1d-2cd8-4da4-8f8f-d746323d746a,ResourceVersion:19910804,Generation:1,CreationTimestamp:2020-01-09 14:18:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment ad6623db-8e3a-40b2-bd62-af8a9b65abd2 0xc000cfa157 0xc000cfa158}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 9 14:18:26.866: INFO: Pod "test-cleanup-controller-pgsxg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-pgsxg,GenerateName:test-cleanup-controller-,Namespace:deployment-5809,SelfLink:/api/v1/namespaces/deployment-5809/pods/test-cleanup-controller-pgsxg,UID:11f67c5a-c2dd-43fd-9c20-bf03264b6045,ResourceVersion:19910799,Generation:0,CreationTimestamp:2020-01-09 14:18:17 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 3baefb1d-2cd8-4da4-8f8f-d746323d746a 0xc000cfaca7 0xc000cfaca8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2j52t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2j52t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2j52t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000cfad20} {node.kubernetes.io/unreachable Exists NoExecute 0xc000cfad40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:18:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:18:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:18:25 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:18:17 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-09 14:18:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-09 14:18:24 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://763a228788336808c029055b3b6da1b24ae9cc7c23132c8966626209960b23d0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 14:18:26.866: INFO: Pod "test-cleanup-deployment-55bbcbc84c-dl7q5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-dl7q5,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-5809,SelfLink:/api/v1/namespaces/deployment-5809/pods/test-cleanup-deployment-55bbcbc84c-dl7q5,UID:ac9814cc-98e3-42bf-9c3b-9cdaf231ccfc,ResourceVersion:19910808,Generation:0,CreationTimestamp:2020-01-09 14:18:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 605528c5-3bdc-4775-96ca-a596d83c821e 0xc000cfae27 0xc000cfae28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2j52t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2j52t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-2j52t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000cfaea0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000cfaec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:18:26.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5809" for this suite. 
Jan 9 14:18:34.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:18:36.068: INFO: namespace deployment-5809 deletion completed in 9.182801243s • [SLOW TEST:18.585 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:18:36.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 9 14:18:36.299: INFO: Waiting up to 5m0s for pod "pod-103d65e0-7f61-420a-9593-4d46b4bcd56c" in namespace "emptydir-62" to be "success or failure" Jan 9 14:18:36.444: INFO: Pod "pod-103d65e0-7f61-420a-9593-4d46b4bcd56c": Phase="Pending", Reason="", readiness=false. Elapsed: 144.344868ms Jan 9 14:18:38.454: INFO: Pod "pod-103d65e0-7f61-420a-9593-4d46b4bcd56c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154782001s Jan 9 14:18:40.464: INFO: Pod "pod-103d65e0-7f61-420a-9593-4d46b4bcd56c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.164765324s Jan 9 14:18:42.476: INFO: Pod "pod-103d65e0-7f61-420a-9593-4d46b4bcd56c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.177035777s Jan 9 14:18:44.495: INFO: Pod "pod-103d65e0-7f61-420a-9593-4d46b4bcd56c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.195426264s Jan 9 14:18:46.510: INFO: Pod "pod-103d65e0-7f61-420a-9593-4d46b4bcd56c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.210966255s STEP: Saw pod success Jan 9 14:18:46.510: INFO: Pod "pod-103d65e0-7f61-420a-9593-4d46b4bcd56c" satisfied condition "success or failure" Jan 9 14:18:46.517: INFO: Trying to get logs from node iruya-node pod pod-103d65e0-7f61-420a-9593-4d46b4bcd56c container test-container: STEP: delete the pod Jan 9 14:18:46.573: INFO: Waiting for pod pod-103d65e0-7f61-420a-9593-4d46b4bcd56c to disappear Jan 9 14:18:46.688: INFO: Pod pod-103d65e0-7f61-420a-9593-4d46b4bcd56c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:18:46.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-62" for this suite. 
Jan 9 14:18:52.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:18:52.853: INFO: namespace emptydir-62 deletion completed in 6.143378538s • [SLOW TEST:16.785 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:18:52.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 9 14:19:01.120: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 
14:19:01.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-601" for this suite. Jan 9 14:19:07.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:19:07.304: INFO: namespace container-runtime-601 deletion completed in 6.129375128s • [SLOW TEST:14.450 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:19:07.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
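Editor's note: the Container Runtime spec above sets `terminationMessagePolicy: FallbackToLogsOnError`, so when the container fails without writing `/dev/termination-log`, the kubelet uses the tail of the container log instead (hence "Expected: &{DONE} to match Container's Termination Message: DONE"). A sketch of that selection logic; the byte and line limits below are assumed illustrative values, not verified kubelet constants:

```python
def termination_message(term_log: str, container_log: str, exit_code: int,
                        policy: str = "FallbackToLogsOnError",
                        max_log_bytes: int = 2048, max_log_lines: int = 80) -> str:
    """Choose a termination message the way FallbackToLogsOnError is described:
    prefer the termination-log file; for a failed container with an empty file,
    fall back to a bounded tail of the container log. Limits are assumptions."""
    if term_log:
        return term_log
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        tail = "\n".join(container_log.splitlines()[-max_log_lines:])
        return tail[-max_log_bytes:]
    return ""

# A failing container that only wrote to stdout, as in the test above.
print(termination_message("", "DONE\n", exit_code=1))  # → DONE
```

With the default `File` policy, or a zero exit code, the fallback never triggers and an empty termination-log yields an empty message.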
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 9 14:22:08.737: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:08.827: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:22:10.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:10.835: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:22:12.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:12.833: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:22:14.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:14.838: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:22:16.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:16.837: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:22:18.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:18.836: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:22:20.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:20.836: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:22:22.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:22.838: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:22:24.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:24.839: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:22:26.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:26.835: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:22:28.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:28.834: INFO: Pod pod-with-poststart-exec-hook still 
exists Jan 9 14:22:30.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:30.835: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:22:32.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:32.836: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:22:34.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:34.838: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:22:36.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:36.836: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:22:38.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:38.835: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:22:40.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:40.871: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:22:42.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:42.856: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:22:44.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:44.838: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:22:46.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:46.836: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:22:48.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:48.835: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:22:50.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:50.834: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:22:52.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:52.836: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:22:54.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:54.834: INFO: Pod 
pod-with-poststart-exec-hook still exists Jan 9 14:22:56.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:56.836: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:22:58.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:22:58.839: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:00.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:00.840: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:02.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:02.833: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:04.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:04.835: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:06.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:06.852: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:08.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:08.837: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:10.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:10.836: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:12.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:12.843: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:14.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:14.841: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:16.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:16.841: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:18.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:18.846: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:20.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear 
Jan 9 14:23:20.838: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:22.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:22.837: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:24.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:24.859: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:26.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:26.834: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:28.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:28.834: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:30.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:30.836: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:32.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:32.908: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:34.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:34.898: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:36.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:36.836: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:38.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:38.842: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:40.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:40.835: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:42.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:42.834: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:44.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:44.839: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:46.827: INFO: Waiting for pod 
pod-with-poststart-exec-hook to disappear Jan 9 14:23:46.835: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:48.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:48.841: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:50.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:50.835: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:52.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:52.837: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:54.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:54.836: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 14:23:56.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 14:23:56.838: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:23:56.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9967" for this suite. 
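Editor's note: the long "Waiting for pod ... to disappear / still exists" run above is the framework polling every ~2s for the deleted pod to vanish; from 14:22:08 to 14:23:56 that is roughly 54 polls before "no longer exists". A generic sketch of that wait-until-absent loop (timings simulated, not the framework's actual configuration):

```python
def wait_until_gone(exists, timeout_s=360.0, interval_s=2.0):
    """Poll exists() until it returns False; return the number of polls taken.

    Raises TimeoutError if the (simulated) timeout elapses first. In real
    code each iteration would sleep interval_s between API calls.
    """
    polls = 0
    elapsed = 0.0
    while exists():
        polls += 1
        elapsed += interval_s
        if elapsed >= timeout_s:
            raise TimeoutError(f"still exists after {elapsed:.0f}s")
    return polls

# Simulate a pod that survives 54 polls (~108s at 2s apiece), roughly
# matching the 14:22:08 → 14:23:56 window logged above.
remaining = [True] * 54
print(wait_until_gone(lambda: bool(remaining and remaining.pop())))  # → 54
```

The pod lingers this long because graceful deletion waits out the pod's termination grace period before the object is removed.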
Jan 9 14:24:18.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:24:18.985: INFO: namespace container-lifecycle-hook-9967 deletion completed in 22.131890629s • [SLOW TEST:311.681 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:24:18.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-b412a89b-4438-4815-ae02-147d5f6decda STEP: Creating a pod to test consume secrets Jan 9 14:24:19.181: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1ed50382-f89a-42f4-9698-4c04edcb912d" in namespace "projected-1154" to be "success or failure" Jan 9 14:24:19.213: INFO: Pod 
"pod-projected-secrets-1ed50382-f89a-42f4-9698-4c04edcb912d": Phase="Pending", Reason="", readiness=false. Elapsed: 32.062883ms Jan 9 14:24:21.227: INFO: Pod "pod-projected-secrets-1ed50382-f89a-42f4-9698-4c04edcb912d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045995673s Jan 9 14:24:23.237: INFO: Pod "pod-projected-secrets-1ed50382-f89a-42f4-9698-4c04edcb912d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056355448s Jan 9 14:24:25.245: INFO: Pod "pod-projected-secrets-1ed50382-f89a-42f4-9698-4c04edcb912d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064618499s Jan 9 14:24:27.257: INFO: Pod "pod-projected-secrets-1ed50382-f89a-42f4-9698-4c04edcb912d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076256972s Jan 9 14:24:29.269: INFO: Pod "pod-projected-secrets-1ed50382-f89a-42f4-9698-4c04edcb912d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.088526689s STEP: Saw pod success Jan 9 14:24:29.269: INFO: Pod "pod-projected-secrets-1ed50382-f89a-42f4-9698-4c04edcb912d" satisfied condition "success or failure" Jan 9 14:24:29.276: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-1ed50382-f89a-42f4-9698-4c04edcb912d container projected-secret-volume-test: STEP: delete the pod Jan 9 14:24:29.414: INFO: Waiting for pod pod-projected-secrets-1ed50382-f89a-42f4-9698-4c04edcb912d to disappear Jan 9 14:24:29.507: INFO: Pod pod-projected-secrets-1ed50382-f89a-42f4-9698-4c04edcb912d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:24:29.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1154" for this suite. 
Jan 9 14:24:35.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:24:35.716: INFO: namespace projected-1154 deletion completed in 6.199061426s • [SLOW TEST:16.731 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:24:35.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-1613 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-1613 STEP: Deleting pre-stop pod Jan 9 14:24:57.003: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:24:57.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-1613" for this suite. Jan 9 14:25:37.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:25:37.198: INFO: namespace prestop-1613 deletion completed in 40.171917631s • [SLOW TEST:61.481 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:25:37.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 9 14:25:37.300: INFO: Waiting up to 5m0s for pod "pod-7e943387-edfe-4bc6-b910-ab3b8c711111" in namespace "emptydir-2543" to be "success or 
failure" Jan 9 14:25:37.341: INFO: Pod "pod-7e943387-edfe-4bc6-b910-ab3b8c711111": Phase="Pending", Reason="", readiness=false. Elapsed: 40.335818ms Jan 9 14:25:39.347: INFO: Pod "pod-7e943387-edfe-4bc6-b910-ab3b8c711111": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04692052s Jan 9 14:25:41.356: INFO: Pod "pod-7e943387-edfe-4bc6-b910-ab3b8c711111": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055916947s Jan 9 14:25:43.373: INFO: Pod "pod-7e943387-edfe-4bc6-b910-ab3b8c711111": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072886708s Jan 9 14:25:45.382: INFO: Pod "pod-7e943387-edfe-4bc6-b910-ab3b8c711111": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0812767s Jan 9 14:25:47.391: INFO: Pod "pod-7e943387-edfe-4bc6-b910-ab3b8c711111": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09028189s STEP: Saw pod success Jan 9 14:25:47.391: INFO: Pod "pod-7e943387-edfe-4bc6-b910-ab3b8c711111" satisfied condition "success or failure" Jan 9 14:25:47.396: INFO: Trying to get logs from node iruya-node pod pod-7e943387-edfe-4bc6-b910-ab3b8c711111 container test-container: STEP: delete the pod Jan 9 14:25:47.459: INFO: Waiting for pod pod-7e943387-edfe-4bc6-b910-ab3b8c711111 to disappear Jan 9 14:25:47.562: INFO: Pod pod-7e943387-edfe-4bc6-b910-ab3b8c711111 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:25:47.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2543" for this suite. 
Jan 9 14:25:53.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:25:53.725: INFO: namespace emptydir-2543 deletion completed in 6.153669124s • [SLOW TEST:16.527 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:25:53.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-8e545b78-6644-415a-a80d-faf564100eb5 STEP: Creating a pod to test consume configMaps Jan 9 14:25:53.875: INFO: Waiting up to 5m0s for pod "pod-configmaps-8e466245-4966-4f88-b315-8f5c0280e741" in namespace "configmap-4276" to be "success or failure" Jan 9 14:25:53.902: INFO: Pod "pod-configmaps-8e466245-4966-4f88-b315-8f5c0280e741": Phase="Pending", Reason="", readiness=false. Elapsed: 26.59862ms Jan 9 14:25:55.909: INFO: Pod "pod-configmaps-8e466245-4966-4f88-b315-8f5c0280e741": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.033779575s Jan 9 14:25:57.930: INFO: Pod "pod-configmaps-8e466245-4966-4f88-b315-8f5c0280e741": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05473669s Jan 9 14:25:59.937: INFO: Pod "pod-configmaps-8e466245-4966-4f88-b315-8f5c0280e741": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061626847s Jan 9 14:26:01.957: INFO: Pod "pod-configmaps-8e466245-4966-4f88-b315-8f5c0280e741": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082036488s Jan 9 14:26:03.992: INFO: Pod "pod-configmaps-8e466245-4966-4f88-b315-8f5c0280e741": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.11672715s STEP: Saw pod success Jan 9 14:26:03.992: INFO: Pod "pod-configmaps-8e466245-4966-4f88-b315-8f5c0280e741" satisfied condition "success or failure" Jan 9 14:26:04.003: INFO: Trying to get logs from node iruya-node pod pod-configmaps-8e466245-4966-4f88-b315-8f5c0280e741 container configmap-volume-test: STEP: delete the pod Jan 9 14:26:04.115: INFO: Waiting for pod pod-configmaps-8e466245-4966-4f88-b315-8f5c0280e741 to disappear Jan 9 14:26:04.127: INFO: Pod pod-configmaps-8e466245-4966-4f88-b315-8f5c0280e741 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:26:04.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4276" for this suite. 
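Editor's note: the "defaultMode set" and "Item Mode set" specs above configure file permission bits on projected volume files. The Kubernetes API serializes these modes as decimal integers, which is why the pod JSON later in this log shows `"defaultMode": 420` — that is 0644 in octal, i.e. `rw-r--r--`. A quick check of the conversion:

```python
# The API stores volume file modes as decimal; 420 decimal == 0o644.
import stat

decimal_mode = 420
print(oct(decimal_mode))                 # octal form of the stored value
print(stat.filemode(decimal_mode)[1:])   # permission string, file-type bit dropped
assert decimal_mode == 0o644
```

So a manifest author who wants `0644` and writes `defaultMode: 644` (decimal) actually gets `0o1204`; writing the octal literal `0644`, or `420`, gives the intended permissions.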
Jan 9 14:26:10.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:26:10.289: INFO: namespace configmap-4276 deletion completed in 6.157548626s • [SLOW TEST:16.564 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:26:10.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:26:18.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8554" for this suite. 
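Editor's note: the Kubelet hostAliases spec above verifies that `hostAliases` entries from the pod spec are written into the container's `/etc/hosts`. A sketch of rendering such entries as hosts-file lines; the tab/space formatting is assumed from hosts-file convention, and kubelet's exact output (which also adds a comment header) may differ:

```python
def render_host_aliases(aliases):
    """Render pod-spec-style hostAliases entries as /etc/hosts lines.

    Each entry maps one IP to one or more hostnames, e.g.
    {"ip": "127.0.0.1", "hostnames": ["foo.local", "bar.local"]}.
    Illustrative only; not kubelet's actual hosts-file writer.
    """
    return "\n".join(f'{a["ip"]}\t{" ".join(a["hostnames"])}' for a in aliases)

print(render_host_aliases([
    {"ip": "123.45.67.89", "hostnames": ["foo.remote", "bar.remote"]},
]))  # → 123.45.67.89	foo.remote bar.remote
```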
Jan 9 14:27:00.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:27:00.715: INFO: namespace kubelet-test-8554 deletion completed in 42.218063212s • [SLOW TEST:50.424 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:27:00.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-576175ae-2a2e-4004-8d5d-5f53ee6afe40 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:27:12.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9354" for this suite. 
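Editor's note: the "binary data should be reflected in volume" spec above exercises a ConfigMap's `binaryData` field, which carries base64-encoded bytes in the API object and is decoded back to the raw bytes when projected into the volume file. The round trip, in miniature:

```python
import base64

raw = bytes([0xFF, 0xFE, 0x00, 0x01])            # arbitrary non-UTF-8 payload
encoded = base64.b64encode(raw).decode("ascii")  # what the API object stores
decoded = base64.b64decode(encoded)              # what lands in the volume file
print(encoded, decoded == raw)  # → //4AAQ== True
```

This is why binary payloads go in `binaryData` rather than `data`: the `data` field must hold valid UTF-8 strings, while `binaryData` survives arbitrary bytes intact.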
Jan 9 14:27:35.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:27:35.152: INFO: namespace configmap-9354 deletion completed in 22.172404001s • [SLOW TEST:34.436 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:27:35.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 9 14:27:35.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-937' Jan 9 14:27:37.495: INFO: stderr: "" Jan 9 14:27:37.495: INFO: stdout: "pod/e2e-test-nginx-pod created\n" 
STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Jan 9 14:27:47.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-937 -o json' Jan 9 14:27:47.807: INFO: stderr: "" Jan 9 14:27:47.808: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-01-09T14:27:37Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-937\",\n \"resourceVersion\": \"19911859\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-937/pods/e2e-test-nginx-pod\",\n \"uid\": \"ad99e5d6-55e1-4a75-b31d-94f28a2b85df\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-s5sj4\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-s5sj4\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-s5sj4\"\n }\n }\n ]\n 
},\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-09T14:27:37Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-09T14:27:45Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-09T14:27:45Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-09T14:27:37Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://30e4b1b72178780e2c8ca632b75b6162bbaf3d4ad3717a2dc1b7815e2504c146\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-09T14:27:44Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.3.65\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.1\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-01-09T14:27:37Z\"\n }\n}\n" STEP: replace the image in the pod Jan 9 14:27:47.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-937' Jan 9 14:27:48.512: INFO: stderr: "" Jan 9 14:27:48.512: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Jan 9 14:27:48.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-937' Jan 9 14:27:56.234: INFO: stderr: "" Jan 9 14:27:56.234: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:27:56.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-937" for this suite. Jan 9 14:28:02.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:28:02.429: INFO: namespace kubectl-937 deletion completed in 6.16549747s • [SLOW TEST:27.277 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:28:02.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Jan 9 14:28:02.530: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy 
--unix-socket=/tmp/kubectl-proxy-unix382418562/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:28:02.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4464" for this suite. Jan 9 14:28:08.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:28:08.772: INFO: namespace kubectl-4464 deletion completed in 6.150385052s • [SLOW TEST:6.342 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:28:08.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-43 STEP: creating a selector STEP: 
Creating the service pods in kubernetes Jan 9 14:28:08.872: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 9 14:28:45.101: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-43 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 9 14:28:45.101: INFO: >>> kubeConfig: /root/.kube/config I0109 14:28:45.188389 8 log.go:172] (0xc001875ad0) (0xc001d572c0) Create stream I0109 14:28:45.188556 8 log.go:172] (0xc001875ad0) (0xc001d572c0) Stream added, broadcasting: 1 I0109 14:28:45.201400 8 log.go:172] (0xc001875ad0) Reply frame received for 1 I0109 14:28:45.201436 8 log.go:172] (0xc001875ad0) (0xc001e81900) Create stream I0109 14:28:45.201446 8 log.go:172] (0xc001875ad0) (0xc001e81900) Stream added, broadcasting: 3 I0109 14:28:45.203988 8 log.go:172] (0xc001875ad0) Reply frame received for 3 I0109 14:28:45.204024 8 log.go:172] (0xc001875ad0) (0xc001c1cdc0) Create stream I0109 14:28:45.204037 8 log.go:172] (0xc001875ad0) (0xc001c1cdc0) Stream added, broadcasting: 5 I0109 14:28:45.207914 8 log.go:172] (0xc001875ad0) Reply frame received for 5 I0109 14:28:45.399846 8 log.go:172] (0xc001875ad0) Data frame received for 3 I0109 14:28:45.399952 8 log.go:172] (0xc001e81900) (3) Data frame handling I0109 14:28:45.399986 8 log.go:172] (0xc001e81900) (3) Data frame sent I0109 14:28:45.522259 8 log.go:172] (0xc001875ad0) Data frame received for 1 I0109 14:28:45.522314 8 log.go:172] (0xc001d572c0) (1) Data frame handling I0109 14:28:45.522344 8 log.go:172] (0xc001d572c0) (1) Data frame sent I0109 14:28:45.522374 8 log.go:172] (0xc001875ad0) (0xc001d572c0) Stream removed, broadcasting: 1 I0109 14:28:45.522728 8 log.go:172] (0xc001875ad0) (0xc001e81900) Stream removed, broadcasting: 3 I0109 14:28:45.522775 8 log.go:172] (0xc001875ad0) 
(0xc001c1cdc0) Stream removed, broadcasting: 5 I0109 14:28:45.522810 8 log.go:172] (0xc001875ad0) Go away received I0109 14:28:45.522844 8 log.go:172] (0xc001875ad0) (0xc001d572c0) Stream removed, broadcasting: 1 I0109 14:28:45.522879 8 log.go:172] (0xc001875ad0) (0xc001e81900) Stream removed, broadcasting: 3 I0109 14:28:45.522898 8 log.go:172] (0xc001875ad0) (0xc001c1cdc0) Stream removed, broadcasting: 5 Jan 9 14:28:45.522: INFO: Waiting for endpoints: map[] Jan 9 14:28:45.535: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-43 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 9 14:28:45.535: INFO: >>> kubeConfig: /root/.kube/config I0109 14:28:45.621732 8 log.go:172] (0xc0005c9d90) (0xc002595860) Create stream I0109 14:28:45.621928 8 log.go:172] (0xc0005c9d90) (0xc002595860) Stream added, broadcasting: 1 I0109 14:28:45.631870 8 log.go:172] (0xc0005c9d90) Reply frame received for 1 I0109 14:28:45.631908 8 log.go:172] (0xc0005c9d90) (0xc001e819a0) Create stream I0109 14:28:45.631918 8 log.go:172] (0xc0005c9d90) (0xc001e819a0) Stream added, broadcasting: 3 I0109 14:28:45.633968 8 log.go:172] (0xc0005c9d90) Reply frame received for 3 I0109 14:28:45.634009 8 log.go:172] (0xc0005c9d90) (0xc0025460a0) Create stream I0109 14:28:45.634024 8 log.go:172] (0xc0005c9d90) (0xc0025460a0) Stream added, broadcasting: 5 I0109 14:28:45.636108 8 log.go:172] (0xc0005c9d90) Reply frame received for 5 I0109 14:28:45.767204 8 log.go:172] (0xc0005c9d90) Data frame received for 3 I0109 14:28:45.767261 8 log.go:172] (0xc001e819a0) (3) Data frame handling I0109 14:28:45.767285 8 log.go:172] (0xc001e819a0) (3) Data frame sent I0109 14:28:45.915624 8 log.go:172] (0xc0005c9d90) (0xc001e819a0) Stream removed, broadcasting: 3 I0109 14:28:45.915903 8 log.go:172] (0xc0005c9d90) Data frame 
received for 1 I0109 14:28:45.915915 8 log.go:172] (0xc002595860) (1) Data frame handling I0109 14:28:45.915932 8 log.go:172] (0xc002595860) (1) Data frame sent I0109 14:28:45.915939 8 log.go:172] (0xc0005c9d90) (0xc002595860) Stream removed, broadcasting: 1 I0109 14:28:45.916155 8 log.go:172] (0xc0005c9d90) (0xc0025460a0) Stream removed, broadcasting: 5 I0109 14:28:45.916183 8 log.go:172] (0xc0005c9d90) (0xc002595860) Stream removed, broadcasting: 1 I0109 14:28:45.916191 8 log.go:172] (0xc0005c9d90) (0xc001e819a0) Stream removed, broadcasting: 3 I0109 14:28:45.916197 8 log.go:172] (0xc0005c9d90) (0xc0025460a0) Stream removed, broadcasting: 5 Jan 9 14:28:45.916: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:28:45.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0109 14:28:45.917627 8 log.go:172] (0xc0005c9d90) Go away received STEP: Destroying namespace "pod-network-test-43" for this suite. 
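As an aside on the connectivity check logged above: it boils down to curling a netexec "dial" endpoint on a probe pod, which relays a UDP request to the target pod and reports the hostname it heard back. A minimal sketch that just assembles that URL, using the addresses from this particular run (they will differ on any other cluster; the final kubectl line is illustrative, not executed here):

```shell
# Assemble the netexec "dial" URL that the framework curls from the
# host-test-container-pod (IPs/ports copied from this run; illustrative only).
PROBE_POD=10.44.0.2    # pod running netexec, serving the /dial endpoint on 8080
TARGET_HOST=10.32.0.4  # pod whose UDP reachability is being checked
TARGET_PORT=8081
URL="http://${PROBE_POD}:8080/dial?request=hostName&protocol=udp&host=${TARGET_HOST}&port=${TARGET_PORT}&tries=1"
echo "$URL"
# On a live cluster the probe itself would look roughly like:
#   kubectl exec host-test-container-pod -- curl -g -q -s "$URL"
```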
Jan 9 14:29:11.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:29:12.125: INFO: namespace pod-network-test-43 deletion completed in 26.20092204s • [SLOW TEST:63.352 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:29:12.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jan 9 14:29:12.319: INFO: Waiting up to 5m0s for pod "downward-api-386073d4-4a5c-4dd2-b19a-197cf3570be6" in namespace "downward-api-8714" to be "success or failure" Jan 9 14:29:12.345: INFO: Pod "downward-api-386073d4-4a5c-4dd2-b19a-197cf3570be6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.041443ms Jan 9 14:29:14.358: INFO: Pod "downward-api-386073d4-4a5c-4dd2-b19a-197cf3570be6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03900885s Jan 9 14:29:16.424: INFO: Pod "downward-api-386073d4-4a5c-4dd2-b19a-197cf3570be6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105244696s Jan 9 14:29:18.438: INFO: Pod "downward-api-386073d4-4a5c-4dd2-b19a-197cf3570be6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118778784s Jan 9 14:29:20.448: INFO: Pod "downward-api-386073d4-4a5c-4dd2-b19a-197cf3570be6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.128734934s Jan 9 14:29:22.457: INFO: Pod "downward-api-386073d4-4a5c-4dd2-b19a-197cf3570be6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.138216883s STEP: Saw pod success Jan 9 14:29:22.457: INFO: Pod "downward-api-386073d4-4a5c-4dd2-b19a-197cf3570be6" satisfied condition "success or failure" Jan 9 14:29:22.468: INFO: Trying to get logs from node iruya-node pod downward-api-386073d4-4a5c-4dd2-b19a-197cf3570be6 container dapi-container: STEP: delete the pod Jan 9 14:29:22.703: INFO: Waiting for pod downward-api-386073d4-4a5c-4dd2-b19a-197cf3570be6 to disappear Jan 9 14:29:22.714: INFO: Pod downward-api-386073d4-4a5c-4dd2-b19a-197cf3570be6 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:29:22.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8714" for this suite. 
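For reference, the pod this test creates is roughly equivalent to the following manifest: container resource limits and requests are surfaced as environment variables through the downward API's `resourceFieldRef`. This is a reconstruction under assumed names and resource values (the actual pod spec is not printed in the log), not the test's literal fixture:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo   # illustrative name; the run used a generated one
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env | grep _LIMIT"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    # limits.cpu / limits.memory become env vars via the downward API
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```

When no limit is set, the kubelet substitutes the node's allocatable value, which is what the companion "node allocatable as default limit" tests later in this run verify.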
Jan 9 14:29:28.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:29:28.945: INFO: namespace downward-api-8714 deletion completed in 6.22444926s • [SLOW TEST:16.820 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:29:28.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 9 14:29:39.699: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:29:39.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6823" for this suite. Jan 9 14:29:45.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:29:46.020: INFO: namespace container-runtime-6823 deletion completed in 6.144199437s • [SLOW TEST:17.074 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:29:46.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide 
container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 9 14:29:46.176: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b195033-45b9-4d35-a6f4-6ccb938a79cd" in namespace "downward-api-5149" to be "success or failure" Jan 9 14:29:46.251: INFO: Pod "downwardapi-volume-7b195033-45b9-4d35-a6f4-6ccb938a79cd": Phase="Pending", Reason="", readiness=false. Elapsed: 74.472683ms Jan 9 14:29:48.262: INFO: Pod "downwardapi-volume-7b195033-45b9-4d35-a6f4-6ccb938a79cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085920572s Jan 9 14:29:50.269: INFO: Pod "downwardapi-volume-7b195033-45b9-4d35-a6f4-6ccb938a79cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093002467s Jan 9 14:29:52.322: INFO: Pod "downwardapi-volume-7b195033-45b9-4d35-a6f4-6ccb938a79cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.146026666s Jan 9 14:29:54.353: INFO: Pod "downwardapi-volume-7b195033-45b9-4d35-a6f4-6ccb938a79cd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.17667042s Jan 9 14:29:56.645: INFO: Pod "downwardapi-volume-7b195033-45b9-4d35-a6f4-6ccb938a79cd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.468583273s STEP: Saw pod success Jan 9 14:29:56.645: INFO: Pod "downwardapi-volume-7b195033-45b9-4d35-a6f4-6ccb938a79cd" satisfied condition "success or failure" Jan 9 14:29:56.651: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7b195033-45b9-4d35-a6f4-6ccb938a79cd container client-container: STEP: delete the pod Jan 9 14:29:57.135: INFO: Waiting for pod downwardapi-volume-7b195033-45b9-4d35-a6f4-6ccb938a79cd to disappear Jan 9 14:29:57.150: INFO: Pod downwardapi-volume-7b195033-45b9-4d35-a6f4-6ccb938a79cd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:29:57.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5149" for this suite. Jan 9 14:30:03.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:30:03.364: INFO: namespace downward-api-5149 deletion completed in 6.204525495s • [SLOW TEST:17.344 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:30:03.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 9 14:30:03.716: INFO: Waiting up to 5m0s for pod "downwardapi-volume-916421cb-c04f-4bc3-8185-5baf930d6c14" in namespace "downward-api-6723" to be "success or failure" Jan 9 14:30:03.751: INFO: Pod "downwardapi-volume-916421cb-c04f-4bc3-8185-5baf930d6c14": Phase="Pending", Reason="", readiness=false. Elapsed: 35.031075ms Jan 9 14:30:05.760: INFO: Pod "downwardapi-volume-916421cb-c04f-4bc3-8185-5baf930d6c14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043850066s Jan 9 14:30:07.775: INFO: Pod "downwardapi-volume-916421cb-c04f-4bc3-8185-5baf930d6c14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058906758s Jan 9 14:30:09.854: INFO: Pod "downwardapi-volume-916421cb-c04f-4bc3-8185-5baf930d6c14": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137936449s Jan 9 14:30:11.884: INFO: Pod "downwardapi-volume-916421cb-c04f-4bc3-8185-5baf930d6c14": Phase="Pending", Reason="", readiness=false. Elapsed: 8.167748993s Jan 9 14:30:13.895: INFO: Pod "downwardapi-volume-916421cb-c04f-4bc3-8185-5baf930d6c14": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.178456956s STEP: Saw pod success Jan 9 14:30:13.895: INFO: Pod "downwardapi-volume-916421cb-c04f-4bc3-8185-5baf930d6c14" satisfied condition "success or failure" Jan 9 14:30:13.902: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-916421cb-c04f-4bc3-8185-5baf930d6c14 container client-container: STEP: delete the pod Jan 9 14:30:14.037: INFO: Waiting for pod downwardapi-volume-916421cb-c04f-4bc3-8185-5baf930d6c14 to disappear Jan 9 14:30:14.056: INFO: Pod downwardapi-volume-916421cb-c04f-4bc3-8185-5baf930d6c14 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:30:14.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6723" for this suite. Jan 9 14:30:20.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:30:20.221: INFO: namespace downward-api-6723 deletion completed in 6.158311904s • [SLOW TEST:16.857 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:30:20.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a 
default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-c7c662c9-9692-428f-9087-9e5c8f4ca041 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:30:20.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5736" for this suite. Jan 9 14:30:26.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:30:26.551: INFO: namespace configmap-5736 deletion completed in 6.180767086s • [SLOW TEST:6.329 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:30:26.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes 
master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Jan 9 14:30:26.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jan 9 14:30:26.816: INFO: stderr: "" Jan 9 14:30:26.817: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:30:26.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8731" for this suite. Jan 9 14:30:32.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:30:33.365: INFO: namespace kubectl-8731 deletion completed in 6.540318102s • [SLOW TEST:6.813 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:30:33.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Jan 9 14:30:45.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-9c3e2c74-0305-40e0-b2e4-13190da1f0a0 -c busybox-main-container --namespace=emptydir-5707 -- cat /usr/share/volumeshare/shareddata.txt' Jan 9 14:30:46.338: INFO: stderr: "I0109 14:30:45.862226 1921 log.go:172] (0xc000788420) (0xc0006da820) Create stream\nI0109 14:30:45.862476 1921 log.go:172] (0xc000788420) (0xc0006da820) Stream added, broadcasting: 1\nI0109 14:30:45.873176 1921 log.go:172] (0xc000788420) Reply frame received for 1\nI0109 14:30:45.873397 1921 log.go:172] (0xc000788420) (0xc0005c2320) Create stream\nI0109 14:30:45.873447 1921 log.go:172] (0xc000788420) (0xc0005c2320) Stream added, broadcasting: 3\nI0109 14:30:45.876557 1921 log.go:172] (0xc000788420) Reply frame received for 3\nI0109 14:30:45.876744 1921 log.go:172] (0xc000788420) (0xc0006da8c0) Create stream\nI0109 14:30:45.876774 1921 log.go:172] (0xc000788420) (0xc0006da8c0) Stream added, broadcasting: 5\nI0109 14:30:45.881471 1921 log.go:172] (0xc000788420) Reply frame received for 5\nI0109 14:30:46.150497 1921 log.go:172] (0xc000788420) Data frame received for 3\nI0109 14:30:46.150697 1921 log.go:172] (0xc0005c2320) (3) Data frame handling\nI0109 14:30:46.150748 1921 log.go:172] (0xc0005c2320) (3) Data frame sent\nI0109 14:30:46.318053 1921 log.go:172] (0xc000788420) 
(0xc0005c2320) Stream removed, broadcasting: 3\nI0109 14:30:46.318277 1921 log.go:172] (0xc000788420) Data frame received for 1\nI0109 14:30:46.318314 1921 log.go:172] (0xc0006da820) (1) Data frame handling\nI0109 14:30:46.318372 1921 log.go:172] (0xc0006da820) (1) Data frame sent\nI0109 14:30:46.318859 1921 log.go:172] (0xc000788420) (0xc0006da8c0) Stream removed, broadcasting: 5\nI0109 14:30:46.319259 1921 log.go:172] (0xc000788420) (0xc0006da820) Stream removed, broadcasting: 1\nI0109 14:30:46.319405 1921 log.go:172] (0xc000788420) Go away received\nI0109 14:30:46.321152 1921 log.go:172] (0xc000788420) (0xc0006da820) Stream removed, broadcasting: 1\nI0109 14:30:46.321224 1921 log.go:172] (0xc000788420) (0xc0005c2320) Stream removed, broadcasting: 3\nI0109 14:30:46.321273 1921 log.go:172] (0xc000788420) (0xc0006da8c0) Stream removed, broadcasting: 5\n" Jan 9 14:30:46.338: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:30:46.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5707" for this suite. 
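The shared-volume pod exercised above can be sketched as a two-container pod mounting the same `emptyDir` volume: one container writes `shareddata.txt`, the other reads it back (which is what the `kubectl exec ... cat` in the log verifies). Container names and the mount path follow the log; the images and commands are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-demo   # illustrative; the run used a generated name
spec:
  containers:
  - name: busybox-main-container        # reader side, target of kubectl exec
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container         # writer side
    image: busybox:1.29
    command: ["sh", "-c",
      "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  volumes:
  - name: shared-data
    emptyDir: {}          # node-local scratch space shared by both containers
```

An `emptyDir` volume is created when the pod is scheduled and shared by every container that mounts it, which is why the main container sees the file the sub-container wrote.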
Jan 9 14:30:52.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:30:52.591: INFO: namespace emptydir-5707 deletion completed in 6.198730893s • [SLOW TEST:19.225 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:30:52.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3167 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-3167 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3167 Jan 9 14:30:52.822: INFO: Found 0 stateful pods, waiting for 1 Jan 9 14:31:02.831: INFO: Waiting 
for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Jan 9 14:31:12.836: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jan 9 14:31:12.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3167 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 9 14:31:13.440: INFO: stderr: "I0109 14:31:13.018350 1942 log.go:172] (0xc000116f20) (0xc000382820) Create stream\nI0109 14:31:13.018514 1942 log.go:172] (0xc000116f20) (0xc000382820) Stream added, broadcasting: 1\nI0109 14:31:13.036152 1942 log.go:172] (0xc000116f20) Reply frame received for 1\nI0109 14:31:13.036227 1942 log.go:172] (0xc000116f20) (0xc000656320) Create stream\nI0109 14:31:13.036243 1942 log.go:172] (0xc000116f20) (0xc000656320) Stream added, broadcasting: 3\nI0109 14:31:13.037455 1942 log.go:172] (0xc000116f20) Reply frame received for 3\nI0109 14:31:13.037506 1942 log.go:172] (0xc000116f20) (0xc000382000) Create stream\nI0109 14:31:13.037516 1942 log.go:172] (0xc000116f20) (0xc000382000) Stream added, broadcasting: 5\nI0109 14:31:13.040390 1942 log.go:172] (0xc000116f20) Reply frame received for 5\nI0109 14:31:13.181762 1942 log.go:172] (0xc000116f20) Data frame received for 5\nI0109 14:31:13.181926 1942 log.go:172] (0xc000382000) (5) Data frame handling\nI0109 14:31:13.181991 1942 log.go:172] (0xc000382000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0109 14:31:13.246899 1942 log.go:172] (0xc000116f20) Data frame received for 3\nI0109 14:31:13.246943 1942 log.go:172] (0xc000656320) (3) Data frame handling\nI0109 14:31:13.246967 1942 log.go:172] (0xc000656320) (3) Data frame sent\nI0109 14:31:13.427884 1942 log.go:172] (0xc000116f20) Data frame received for 1\nI0109 14:31:13.428019 1942 log.go:172] (0xc000116f20) (0xc000382000) Stream removed, 
broadcasting: 5\nI0109 14:31:13.428262 1942 log.go:172] (0xc000116f20) (0xc000656320) Stream removed, broadcasting: 3\nI0109 14:31:13.428378 1942 log.go:172] (0xc000382820) (1) Data frame handling\nI0109 14:31:13.428444 1942 log.go:172] (0xc000382820) (1) Data frame sent\nI0109 14:31:13.428479 1942 log.go:172] (0xc000116f20) (0xc000382820) Stream removed, broadcasting: 1\nI0109 14:31:13.428528 1942 log.go:172] (0xc000116f20) Go away received\nI0109 14:31:13.430259 1942 log.go:172] (0xc000116f20) (0xc000382820) Stream removed, broadcasting: 1\nI0109 14:31:13.430371 1942 log.go:172] (0xc000116f20) (0xc000656320) Stream removed, broadcasting: 3\nI0109 14:31:13.430386 1942 log.go:172] (0xc000116f20) (0xc000382000) Stream removed, broadcasting: 5\n" Jan 9 14:31:13.440: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 9 14:31:13.440: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 9 14:31:13.461: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 9 14:31:13.461: INFO: Waiting for statefulset status.replicas updated to 0 Jan 9 14:31:13.485: INFO: POD NODE PHASE GRACE CONDITIONS Jan 9 14:31:13.485: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:30:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:30:52 +0000 UTC }] Jan 9 14:31:13.485: INFO: Jan 9 14:31:13.485: INFO: StatefulSet ss has not reached scale 3, at 1 Jan 9 14:31:15.256: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992727226s Jan 9 14:31:16.582: INFO: Verifying statefulset ss doesn't scale 
past 3 for another 7.222018162s Jan 9 14:31:17.599: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.895397092s Jan 9 14:31:18.622: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.878967014s Jan 9 14:31:20.934: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.856051387s Jan 9 14:31:21.948: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.543401443s Jan 9 14:31:22.970: INFO: Verifying statefulset ss doesn't scale past 3 for another 529.671147ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3167 Jan 9 14:31:23.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3167 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 14:31:24.855: INFO: stderr: "I0109 14:31:24.229154 1964 log.go:172] (0xc00092a2c0) (0xc000968640) Create stream\nI0109 14:31:24.229518 1964 log.go:172] (0xc00092a2c0) (0xc000968640) Stream added, broadcasting: 1\nI0109 14:31:24.238076 1964 log.go:172] (0xc00092a2c0) Reply frame received for 1\nI0109 14:31:24.238132 1964 log.go:172] (0xc00092a2c0) (0xc00089e000) Create stream\nI0109 14:31:24.238144 1964 log.go:172] (0xc00092a2c0) (0xc00089e000) Stream added, broadcasting: 3\nI0109 14:31:24.242955 1964 log.go:172] (0xc00092a2c0) Reply frame received for 3\nI0109 14:31:24.242978 1964 log.go:172] (0xc00092a2c0) (0xc0004ba1e0) Create stream\nI0109 14:31:24.242985 1964 log.go:172] (0xc00092a2c0) (0xc0004ba1e0) Stream added, broadcasting: 5\nI0109 14:31:24.248454 1964 log.go:172] (0xc00092a2c0) Reply frame received for 5\nI0109 14:31:24.478408 1964 log.go:172] (0xc00092a2c0) Data frame received for 5\nI0109 14:31:24.478534 1964 log.go:172] (0xc0004ba1e0) (5) Data frame handling\nI0109 14:31:24.478579 1964 log.go:172] (0xc0004ba1e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0109 14:31:24.478600 1964 log.go:172] 
(0xc00092a2c0) Data frame received for 3\nI0109 14:31:24.478606 1964 log.go:172] (0xc00089e000) (3) Data frame handling\nI0109 14:31:24.478610 1964 log.go:172] (0xc00089e000) (3) Data frame sent\nI0109 14:31:24.840020 1964 log.go:172] (0xc00092a2c0) Data frame received for 1\nI0109 14:31:24.840511 1964 log.go:172] (0xc00092a2c0) (0xc0004ba1e0) Stream removed, broadcasting: 5\nI0109 14:31:24.840634 1964 log.go:172] (0xc00092a2c0) (0xc00089e000) Stream removed, broadcasting: 3\nI0109 14:31:24.840717 1964 log.go:172] (0xc000968640) (1) Data frame handling\nI0109 14:31:24.840762 1964 log.go:172] (0xc000968640) (1) Data frame sent\nI0109 14:31:24.840808 1964 log.go:172] (0xc00092a2c0) (0xc000968640) Stream removed, broadcasting: 1\nI0109 14:31:24.840847 1964 log.go:172] (0xc00092a2c0) Go away received\nI0109 14:31:24.842444 1964 log.go:172] (0xc00092a2c0) (0xc000968640) Stream removed, broadcasting: 1\nI0109 14:31:24.842502 1964 log.go:172] (0xc00092a2c0) (0xc00089e000) Stream removed, broadcasting: 3\nI0109 14:31:24.842572 1964 log.go:172] (0xc00092a2c0) (0xc0004ba1e0) Stream removed, broadcasting: 5\n" Jan 9 14:31:24.855: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 9 14:31:24.855: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 9 14:31:24.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3167 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 14:31:25.900: INFO: stderr: "I0109 14:31:25.041197 1981 log.go:172] (0xc00078a420) (0xc000824500) Create stream\nI0109 14:31:25.041394 1981 log.go:172] (0xc00078a420) (0xc000824500) Stream added, broadcasting: 1\nI0109 14:31:25.044901 1981 log.go:172] (0xc00078a420) Reply frame received for 1\nI0109 14:31:25.045057 1981 log.go:172] (0xc00078a420) (0xc0008245a0) Create stream\nI0109 14:31:25.045078 1981 log.go:172] 
(0xc00078a420) (0xc0008245a0) Stream added, broadcasting: 3\nI0109 14:31:25.047615 1981 log.go:172] (0xc00078a420) Reply frame received for 3\nI0109 14:31:25.047882 1981 log.go:172] (0xc00078a420) (0xc000652280) Create stream\nI0109 14:31:25.047982 1981 log.go:172] (0xc00078a420) (0xc000652280) Stream added, broadcasting: 5\nI0109 14:31:25.051060 1981 log.go:172] (0xc00078a420) Reply frame received for 5\nI0109 14:31:25.650725 1981 log.go:172] (0xc00078a420) Data frame received for 5\nI0109 14:31:25.650829 1981 log.go:172] (0xc000652280) (5) Data frame handling\nI0109 14:31:25.650878 1981 log.go:172] (0xc000652280) (5) Data frame sent\nI0109 14:31:25.650889 1981 log.go:172] (0xc00078a420) Data frame received for 5\nI0109 14:31:25.650895 1981 log.go:172] (0xc000652280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0109 14:31:25.650956 1981 log.go:172] (0xc000652280) (5) Data frame sent\nI0109 14:31:25.757631 1981 log.go:172] (0xc00078a420) Data frame received for 3\nI0109 14:31:25.757818 1981 log.go:172] (0xc0008245a0) (3) Data frame handling\nI0109 14:31:25.758320 1981 log.go:172] (0xc00078a420) Data frame received for 5\nI0109 14:31:25.758593 1981 log.go:172] (0xc000652280) (5) Data frame handling\nI0109 14:31:25.758672 1981 log.go:172] (0xc000652280) (5) Data frame sent\nI0109 14:31:25.758699 1981 log.go:172] (0xc0008245a0) (3) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0109 14:31:25.759338 1981 log.go:172] (0xc00078a420) Data frame received for 5\nI0109 14:31:25.759367 1981 log.go:172] (0xc000652280) (5) Data frame handling\nI0109 14:31:25.759403 1981 log.go:172] (0xc000652280) (5) Data frame sent\n+ true\nI0109 14:31:25.892510 1981 log.go:172] (0xc00078a420) Data frame received for 1\nI0109 14:31:25.892626 1981 log.go:172] (0xc00078a420) (0xc0008245a0) Stream removed, broadcasting: 3\nI0109 14:31:25.892882 1981 log.go:172] (0xc000824500) (1) Data frame handling\nI0109 14:31:25.893064 1981 
log.go:172] (0xc000824500) (1) Data frame sent\nI0109 14:31:25.893106 1981 log.go:172] (0xc00078a420) (0xc000652280) Stream removed, broadcasting: 5\nI0109 14:31:25.893206 1981 log.go:172] (0xc00078a420) (0xc000824500) Stream removed, broadcasting: 1\nI0109 14:31:25.893230 1981 log.go:172] (0xc00078a420) Go away received\nI0109 14:31:25.894226 1981 log.go:172] (0xc00078a420) (0xc000824500) Stream removed, broadcasting: 1\nI0109 14:31:25.894242 1981 log.go:172] (0xc00078a420) (0xc0008245a0) Stream removed, broadcasting: 3\nI0109 14:31:25.894249 1981 log.go:172] (0xc00078a420) (0xc000652280) Stream removed, broadcasting: 5\n" Jan 9 14:31:25.901: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 9 14:31:25.901: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 9 14:31:25.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3167 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 14:31:26.565: INFO: stderr: "I0109 14:31:26.217897 2000 log.go:172] (0xc000448bb0) (0xc0005c2a00) Create stream\nI0109 14:31:26.218161 2000 log.go:172] (0xc000448bb0) (0xc0005c2a00) Stream added, broadcasting: 1\nI0109 14:31:26.232946 2000 log.go:172] (0xc000448bb0) Reply frame received for 1\nI0109 14:31:26.233564 2000 log.go:172] (0xc000448bb0) (0xc000696000) Create stream\nI0109 14:31:26.233694 2000 log.go:172] (0xc000448bb0) (0xc000696000) Stream added, broadcasting: 3\nI0109 14:31:26.242694 2000 log.go:172] (0xc000448bb0) Reply frame received for 3\nI0109 14:31:26.242743 2000 log.go:172] (0xc000448bb0) (0xc0006960a0) Create stream\nI0109 14:31:26.242756 2000 log.go:172] (0xc000448bb0) (0xc0006960a0) Stream added, broadcasting: 5\nI0109 14:31:26.245483 2000 log.go:172] (0xc000448bb0) Reply frame received for 5\nI0109 14:31:26.335235 2000 log.go:172] (0xc000448bb0) Data frame received for 3\nI0109 
14:31:26.335372 2000 log.go:172] (0xc000696000) (3) Data frame handling\nI0109 14:31:26.335408 2000 log.go:172] (0xc000696000) (3) Data frame sent\nI0109 14:31:26.335440 2000 log.go:172] (0xc000448bb0) Data frame received for 5\nI0109 14:31:26.335462 2000 log.go:172] (0xc0006960a0) (5) Data frame handling\nI0109 14:31:26.335480 2000 log.go:172] (0xc0006960a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0109 14:31:26.541281 2000 log.go:172] (0xc000448bb0) (0xc000696000) Stream removed, broadcasting: 3\nI0109 14:31:26.541699 2000 log.go:172] (0xc000448bb0) Data frame received for 1\nI0109 14:31:26.541723 2000 log.go:172] (0xc0005c2a00) (1) Data frame handling\nI0109 14:31:26.541757 2000 log.go:172] (0xc0005c2a00) (1) Data frame sent\nI0109 14:31:26.541778 2000 log.go:172] (0xc000448bb0) (0xc0005c2a00) Stream removed, broadcasting: 1\nI0109 14:31:26.543394 2000 log.go:172] (0xc000448bb0) (0xc0006960a0) Stream removed, broadcasting: 5\nI0109 14:31:26.543473 2000 log.go:172] (0xc000448bb0) (0xc0005c2a00) Stream removed, broadcasting: 1\nI0109 14:31:26.543504 2000 log.go:172] (0xc000448bb0) (0xc000696000) Stream removed, broadcasting: 3\nI0109 14:31:26.543520 2000 log.go:172] (0xc000448bb0) (0xc0006960a0) Stream removed, broadcasting: 5\nI0109 14:31:26.543836 2000 log.go:172] (0xc000448bb0) Go away received\n" Jan 9 14:31:26.565: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 9 14:31:26.565: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 9 14:31:26.580: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 9 14:31:26.580: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 9 14:31:26.580: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true 
STEP: Scale down will not halt with unhealthy stateful pod Jan 9 14:31:26.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3167 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 9 14:31:27.126: INFO: stderr: "I0109 14:31:26.780899 2022 log.go:172] (0xc000aa82c0) (0xc00087e5a0) Create stream\nI0109 14:31:26.781041 2022 log.go:172] (0xc000aa82c0) (0xc00087e5a0) Stream added, broadcasting: 1\nI0109 14:31:26.786089 2022 log.go:172] (0xc000aa82c0) Reply frame received for 1\nI0109 14:31:26.786114 2022 log.go:172] (0xc000aa82c0) (0xc0008b8000) Create stream\nI0109 14:31:26.786121 2022 log.go:172] (0xc000aa82c0) (0xc0008b8000) Stream added, broadcasting: 3\nI0109 14:31:26.787143 2022 log.go:172] (0xc000aa82c0) Reply frame received for 3\nI0109 14:31:26.787160 2022 log.go:172] (0xc000aa82c0) (0xc00087e640) Create stream\nI0109 14:31:26.787165 2022 log.go:172] (0xc000aa82c0) (0xc00087e640) Stream added, broadcasting: 5\nI0109 14:31:26.788312 2022 log.go:172] (0xc000aa82c0) Reply frame received for 5\nI0109 14:31:26.898496 2022 log.go:172] (0xc000aa82c0) Data frame received for 5\nI0109 14:31:26.898608 2022 log.go:172] (0xc00087e640) (5) Data frame handling\nI0109 14:31:26.898643 2022 log.go:172] (0xc00087e640) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0109 14:31:26.898702 2022 log.go:172] (0xc000aa82c0) Data frame received for 3\nI0109 14:31:26.898743 2022 log.go:172] (0xc0008b8000) (3) Data frame handling\nI0109 14:31:26.898751 2022 log.go:172] (0xc0008b8000) (3) Data frame sent\nI0109 14:31:27.112857 2022 log.go:172] (0xc000aa82c0) Data frame received for 1\nI0109 14:31:27.113015 2022 log.go:172] (0xc000aa82c0) (0xc00087e640) Stream removed, broadcasting: 5\nI0109 14:31:27.113082 2022 log.go:172] (0xc00087e5a0) (1) Data frame handling\nI0109 14:31:27.113111 2022 log.go:172] (0xc00087e5a0) (1) Data frame sent\nI0109 14:31:27.113315 2022 log.go:172] 
(0xc000aa82c0) (0xc0008b8000) Stream removed, broadcasting: 3\nI0109 14:31:27.113341 2022 log.go:172] (0xc000aa82c0) (0xc00087e5a0) Stream removed, broadcasting: 1\nI0109 14:31:27.113348 2022 log.go:172] (0xc000aa82c0) Go away received\nI0109 14:31:27.115227 2022 log.go:172] (0xc000aa82c0) (0xc00087e5a0) Stream removed, broadcasting: 1\nI0109 14:31:27.115242 2022 log.go:172] (0xc000aa82c0) (0xc0008b8000) Stream removed, broadcasting: 3\nI0109 14:31:27.115250 2022 log.go:172] (0xc000aa82c0) (0xc00087e640) Stream removed, broadcasting: 5\n" Jan 9 14:31:27.126: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 9 14:31:27.126: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 9 14:31:27.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3167 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 9 14:31:27.513: INFO: stderr: "I0109 14:31:27.273570 2043 log.go:172] (0xc0009c8420) (0xc0008fa5a0) Create stream\nI0109 14:31:27.273650 2043 log.go:172] (0xc0009c8420) (0xc0008fa5a0) Stream added, broadcasting: 1\nI0109 14:31:27.276062 2043 log.go:172] (0xc0009c8420) Reply frame received for 1\nI0109 14:31:27.276091 2043 log.go:172] (0xc0009c8420) (0xc0009b4000) Create stream\nI0109 14:31:27.276100 2043 log.go:172] (0xc0009c8420) (0xc0009b4000) Stream added, broadcasting: 3\nI0109 14:31:27.276804 2043 log.go:172] (0xc0009c8420) Reply frame received for 3\nI0109 14:31:27.276823 2043 log.go:172] (0xc0009c8420) (0xc0008fa640) Create stream\nI0109 14:31:27.276830 2043 log.go:172] (0xc0009c8420) (0xc0008fa640) Stream added, broadcasting: 5\nI0109 14:31:27.277683 2043 log.go:172] (0xc0009c8420) Reply frame received for 5\nI0109 14:31:27.354378 2043 log.go:172] (0xc0009c8420) Data frame received for 5\nI0109 14:31:27.354425 2043 log.go:172] (0xc0008fa640) (5) Data frame handling\nI0109 
14:31:27.354441 2043 log.go:172] (0xc0008fa640) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0109 14:31:27.400417 2043 log.go:172] (0xc0009c8420) Data frame received for 3\nI0109 14:31:27.400442 2043 log.go:172] (0xc0009b4000) (3) Data frame handling\nI0109 14:31:27.400463 2043 log.go:172] (0xc0009b4000) (3) Data frame sent\nI0109 14:31:27.505889 2043 log.go:172] (0xc0009c8420) Data frame received for 1\nI0109 14:31:27.506081 2043 log.go:172] (0xc0009c8420) (0xc0009b4000) Stream removed, broadcasting: 3\nI0109 14:31:27.506130 2043 log.go:172] (0xc0008fa5a0) (1) Data frame handling\nI0109 14:31:27.506150 2043 log.go:172] (0xc0008fa5a0) (1) Data frame sent\nI0109 14:31:27.506181 2043 log.go:172] (0xc0009c8420) (0xc0008fa5a0) Stream removed, broadcasting: 1\nI0109 14:31:27.506300 2043 log.go:172] (0xc0009c8420) (0xc0008fa640) Stream removed, broadcasting: 5\nI0109 14:31:27.506814 2043 log.go:172] (0xc0009c8420) (0xc0008fa5a0) Stream removed, broadcasting: 1\nI0109 14:31:27.506828 2043 log.go:172] (0xc0009c8420) (0xc0009b4000) Stream removed, broadcasting: 3\nI0109 14:31:27.506836 2043 log.go:172] (0xc0009c8420) (0xc0008fa640) Stream removed, broadcasting: 5\nI0109 14:31:27.507177 2043 log.go:172] (0xc0009c8420) Go away received\n" Jan 9 14:31:27.513: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 9 14:31:27.513: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 9 14:31:27.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3167 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 9 14:31:28.156: INFO: stderr: "I0109 14:31:27.709111 2064 log.go:172] (0xc00092e0b0) (0xc0009025a0) Create stream\nI0109 14:31:27.709253 2064 log.go:172] (0xc00092e0b0) (0xc0009025a0) Stream added, broadcasting: 1\nI0109 14:31:27.715918 2064 log.go:172] (0xc00092e0b0) Reply 
frame received for 1\nI0109 14:31:27.716009 2064 log.go:172] (0xc00092e0b0) (0xc00091a000) Create stream\nI0109 14:31:27.716020 2064 log.go:172] (0xc00092e0b0) (0xc00091a000) Stream added, broadcasting: 3\nI0109 14:31:27.717988 2064 log.go:172] (0xc00092e0b0) Reply frame received for 3\nI0109 14:31:27.718030 2064 log.go:172] (0xc00092e0b0) (0xc0006ee140) Create stream\nI0109 14:31:27.718041 2064 log.go:172] (0xc00092e0b0) (0xc0006ee140) Stream added, broadcasting: 5\nI0109 14:31:27.720093 2064 log.go:172] (0xc00092e0b0) Reply frame received for 5\nI0109 14:31:27.846228 2064 log.go:172] (0xc00092e0b0) Data frame received for 5\nI0109 14:31:27.846297 2064 log.go:172] (0xc0006ee140) (5) Data frame handling\nI0109 14:31:27.846349 2064 log.go:172] (0xc0006ee140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0109 14:31:27.892777 2064 log.go:172] (0xc00092e0b0) Data frame received for 3\nI0109 14:31:27.892849 2064 log.go:172] (0xc00091a000) (3) Data frame handling\nI0109 14:31:27.892880 2064 log.go:172] (0xc00091a000) (3) Data frame sent\nI0109 14:31:28.130933 2064 log.go:172] (0xc00092e0b0) Data frame received for 1\nI0109 14:31:28.131321 2064 log.go:172] (0xc00092e0b0) (0xc00091a000) Stream removed, broadcasting: 3\nI0109 14:31:28.131756 2064 log.go:172] (0xc00092e0b0) (0xc0006ee140) Stream removed, broadcasting: 5\nI0109 14:31:28.131883 2064 log.go:172] (0xc0009025a0) (1) Data frame handling\nI0109 14:31:28.131951 2064 log.go:172] (0xc0009025a0) (1) Data frame sent\nI0109 14:31:28.131971 2064 log.go:172] (0xc00092e0b0) (0xc0009025a0) Stream removed, broadcasting: 1\nI0109 14:31:28.132016 2064 log.go:172] (0xc00092e0b0) Go away received\nI0109 14:31:28.135032 2064 log.go:172] (0xc00092e0b0) (0xc0009025a0) Stream removed, broadcasting: 1\nI0109 14:31:28.135074 2064 log.go:172] (0xc00092e0b0) (0xc00091a000) Stream removed, broadcasting: 3\nI0109 14:31:28.135106 2064 log.go:172] (0xc00092e0b0) (0xc0006ee140) Stream removed, broadcasting: 5\n" Jan 9 
14:31:28.156: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 9 14:31:28.157: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 9 14:31:28.157: INFO: Waiting for statefulset status.replicas updated to 0 Jan 9 14:31:28.175: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 9 14:31:28.175: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 9 14:31:28.175: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 9 14:31:28.200: INFO: POD NODE PHASE GRACE CONDITIONS Jan 9 14:31:28.200: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:30:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:30:52 +0000 UTC }] Jan 9 14:31:28.200: INFO: ss-1 iruya-server-sfge57q7djm7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC }] Jan 9 14:31:28.200: INFO: ss-2 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC }] Jan 9 14:31:28.200: INFO: Jan 9 14:31:28.200: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 9 14:31:30.015: INFO: POD NODE PHASE GRACE CONDITIONS Jan 9 14:31:30.015: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:30:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:30:52 +0000 UTC }] Jan 9 14:31:30.015: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC }] Jan 9 14:31:30.015: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC }] Jan 9 14:31:30.015: INFO: Jan 9 14:31:30.015: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 9 14:31:31.021: INFO: POD NODE PHASE GRACE 
CONDITIONS Jan 9 14:31:31.021: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:30:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:30:52 +0000 UTC }] Jan 9 14:31:31.021: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC }] Jan 9 14:31:31.021: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC }] Jan 9 14:31:31.021: INFO: Jan 9 14:31:31.021: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 9 14:31:32.030: INFO: POD NODE PHASE GRACE CONDITIONS Jan 9 14:31:32.030: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:30:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 
14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:30:52 +0000 UTC }] Jan 9 14:31:32.030: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC }] Jan 9 14:31:32.030: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC }] Jan 9 14:31:32.030: INFO: Jan 9 14:31:32.030: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 9 14:31:33.042: INFO: POD NODE PHASE GRACE CONDITIONS Jan 9 14:31:33.042: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:30:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:30:52 +0000 UTC }] Jan 9 14:31:33.042: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 
UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC }] Jan 9 14:31:33.042: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC }] Jan 9 14:31:33.042: INFO: Jan 9 14:31:33.042: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 9 14:31:34.055: INFO: POD NODE PHASE GRACE CONDITIONS Jan 9 14:31:34.055: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:30:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:30:52 +0000 UTC }] Jan 9 14:31:34.055: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC }] Jan 9 14:31:34.055: INFO: ss-2 
iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC }] Jan 9 14:31:34.055: INFO: Jan 9 14:31:34.055: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 9 14:31:35.083: INFO: POD NODE PHASE GRACE CONDITIONS Jan 9 14:31:35.083: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:30:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:30:52 +0000 UTC }] Jan 9 14:31:35.083: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC }] Jan 9 14:31:35.083: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:28 +0000 UTC ContainersNotReady containers 
with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC }] Jan 9 14:31:35.083: INFO: Jan 9 14:31:35.083: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 9 14:31:36.180: INFO: POD NODE PHASE GRACE CONDITIONS Jan 9 14:31:36.180: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:30:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:30:52 +0000 UTC }] Jan 9 14:31:36.180: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC }] Jan 9 14:31:36.180: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC }] Jan 9 14:31:36.180: INFO: Jan 9 14:31:36.180: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 9 14:31:37.208: INFO: POD NODE PHASE GRACE CONDITIONS Jan 9 14:31:37.208: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized 
True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:31:13 +0000 UTC }] Jan 9 14:31:37.208: INFO: Jan 9 14:31:37.208: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-3167 Jan 9 14:31:38.219: INFO: Scaling statefulset ss to 0 Jan 9 14:31:38.249: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 9 14:31:38.251: INFO: Deleting all statefulset in ns statefulset-3167 Jan 9 14:31:38.253: INFO: Scaling statefulset ss to 0 Jan 9 14:31:38.261: INFO: Waiting for statefulset status.replicas updated to 0 Jan 9 14:31:38.262: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:31:38.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3167" for this suite. 
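Editor's note: the burst-scaling sequence above drives a three-replica StatefulSet named ss (pods ss-0 through ss-2, each running a container called nginx) down to zero. A minimal sketch of that object, for reference; the headless service name and image tag are assumptions, only the set name, replica count and container name come from the log:

```yaml
# Hypothetical reconstruction of the "ss" StatefulSet the test scales down.
# Service name and image tag are assumed; they are not shown in the log.
apiVersion: v1
kind: Service
metadata:
  name: ss-headless
spec:
  clusterIP: None            # headless service required by a StatefulSet
  selector:
    app: ss
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: ss-headless
  replicas: 3                # log shows pods ss-0, ss-1, ss-2
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: nginx          # container name taken from the unready-status lines
        image: nginx
```

The scale-down the test performs would then correspond to `kubectl scale statefulset ss --replicas=0 -n statefulset-3167`.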
Jan 9 14:31:44.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:31:44.416: INFO: namespace statefulset-3167 deletion completed in 6.132337726s • [SLOW TEST:51.824 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:31:44.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
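Editor's note: the lifecycle-hook test that begins here creates a pod named pod-with-prestop-exec-hook, deletes it, and verifies the preStop hook ran before the container terminated. A sketch of such a pod; the image and both commands are assumptions, only the pod name appears in the log:

```yaml
# Hypothetical pod with a preStop exec hook, as exercised by this test.
# Image and commands are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # Runs inside the container before it is stopped; the test's
          # real hook notifies a separate handler pod instead of echoing.
          command: ["sh", "-c", "echo prestop"]
```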
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 9 14:32:02.674: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 14:32:02.680: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 14:32:04.680: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 14:32:04.688: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 14:32:06.681: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 14:32:06.687: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 14:32:08.681: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 14:32:08.734: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 14:32:10.681: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 14:32:10.697: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 14:32:12.681: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 14:32:12.689: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 14:32:14.681: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 14:32:14.713: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 14:32:16.681: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 14:32:16.740: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 14:32:18.681: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 14:32:18.687: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 14:32:20.681: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 14:32:20.725: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 14:32:22.681: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 14:32:22.688: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 14:32:24.681: INFO: Waiting for pod pod-with-prestop-exec-hook 
to disappear Jan 9 14:32:24.698: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 14:32:26.681: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 14:32:26.688: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:32:26.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8960" for this suite. Jan 9 14:32:48.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:32:48.907: INFO: namespace container-lifecycle-hook-8960 deletion completed in 22.172716393s • [SLOW TEST:64.491 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:32:48.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer 
[NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jan 9 14:32:48.988: INFO: PodSpec: initContainers in spec.initContainers Jan 9 14:33:54.437: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ff94a62f-c382-42d1-b6cc-8d87ad8d6b8e", GenerateName:"", Namespace:"init-container-6860", SelfLink:"/api/v1/namespaces/init-container-6860/pods/pod-init-ff94a62f-c382-42d1-b6cc-8d87ad8d6b8e", UID:"b66b49a6-e296-423e-9a36-af2419733c4d", ResourceVersion:"19912801", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63714177169, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"988869892"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-ktn4v", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002826880), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), 
CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ktn4v", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ktn4v", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ktn4v", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", 
TerminationGracePeriodSeconds:(*int64)(0xc0027402c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0011e7da0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002740360)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002740380)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002740388), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00274038c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714177169, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714177169, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714177169, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714177169, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc001fa6240), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001c1b110)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001c1b180)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://874ca06ad4d79424f86007af221fa9bd85da4132d076c9c4ce669815154a5310"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001fa6280), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001fa6260), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:33:54.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6860" for this suite. Jan 9 14:34:16.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:34:16.748: INFO: namespace init-container-6860 deletion completed in 22.22182084s • [SLOW TEST:87.840 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:34:16.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl 
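Editor's note: the PodSpec dumped in the init-container test above can be condensed into the manifest the test effectively creates. Names, images, commands, restart policy and resource values are all taken from the dump; because init1 always exits non-zero on a RestartAlways pod, init2 and run1 never start:

```yaml
# Reconstructed from the v1.Pod dump above: the first init container fails
# every time, so the kubelet keeps restarting it and "run1" never starts.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-ff94a62f-c382-42d1-b6cc-8d87ad8d6b8e
  labels:
    name: foo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]     # always fails (RestartCount reached 3 in the dump)
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]      # stays Waiting while init1 keeps failing
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:                  # equal requests/limits => QOSClass "Guaranteed"
      limits:
        cpu: 100m
        memory: "52428800"
      requests:
        cpu: 100m
        memory: "52428800"
```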
label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Jan 9 14:34:16.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7187' Jan 9 14:34:17.313: INFO: stderr: "" Jan 9 14:34:17.313: INFO: stdout: "pod/pause created\n" Jan 9 14:34:17.313: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jan 9 14:34:17.313: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7187" to be "running and ready" Jan 9 14:34:17.324: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.580265ms Jan 9 14:34:19.331: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017909817s Jan 9 14:34:21.343: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029777338s Jan 9 14:34:23.351: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037791964s Jan 9 14:34:25.362: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048673697s Jan 9 14:34:27.410: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.097252339s Jan 9 14:34:27.411: INFO: Pod "pause" satisfied condition "running and ready" Jan 9 14:34:27.411: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Jan 9 14:34:27.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7187' Jan 9 14:34:27.643: INFO: stderr: "" Jan 9 14:34:27.643: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jan 9 14:34:27.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7187' Jan 9 14:34:27.782: INFO: stderr: "" Jan 9 14:34:27.782: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 10s testing-label-value\n" STEP: removing the label testing-label of a pod Jan 9 14:34:27.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7187' Jan 9 14:34:27.947: INFO: stderr: "" Jan 9 14:34:27.947: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jan 9 14:34:27.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7187' Jan 9 14:34:28.028: INFO: stderr: "" Jan 9 14:34:28.028: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Jan 9 14:34:28.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7187' Jan 9 14:34:28.162: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has 
been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 9 14:34:28.162: INFO: stdout: "pod \"pause\" force deleted\n" Jan 9 14:34:28.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7187' Jan 9 14:34:28.377: INFO: stderr: "No resources found.\n" Jan 9 14:34:28.377: INFO: stdout: "" Jan 9 14:34:28.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7187 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 9 14:34:28.472: INFO: stderr: "" Jan 9 14:34:28.472: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:34:28.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7187" for this suite. 
Jan 9 14:34:34.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:34:34.636: INFO: namespace kubectl-7187 deletion completed in 6.158551382s • [SLOW TEST:17.888 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:34:34.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-48c629d2-ad66-49f1-8665-57095da4eba0 STEP: Creating a pod to test consume secrets Jan 9 14:34:34.797: INFO: Waiting up to 5m0s for pod "pod-secrets-a9c6d527-4c0d-45e1-9164-903c5224dc11" in namespace "secrets-6398" to be "success or failure" Jan 9 14:34:34.802: INFO: Pod "pod-secrets-a9c6d527-4c0d-45e1-9164-903c5224dc11": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.185847ms Jan 9 14:34:36.808: INFO: Pod "pod-secrets-a9c6d527-4c0d-45e1-9164-903c5224dc11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010959075s Jan 9 14:34:38.891: INFO: Pod "pod-secrets-a9c6d527-4c0d-45e1-9164-903c5224dc11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09371329s Jan 9 14:34:40.900: INFO: Pod "pod-secrets-a9c6d527-4c0d-45e1-9164-903c5224dc11": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10335905s Jan 9 14:34:42.910: INFO: Pod "pod-secrets-a9c6d527-4c0d-45e1-9164-903c5224dc11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.113149735s STEP: Saw pod success Jan 9 14:34:42.910: INFO: Pod "pod-secrets-a9c6d527-4c0d-45e1-9164-903c5224dc11" satisfied condition "success or failure" Jan 9 14:34:42.914: INFO: Trying to get logs from node iruya-node pod pod-secrets-a9c6d527-4c0d-45e1-9164-903c5224dc11 container secret-volume-test: STEP: delete the pod Jan 9 14:34:42.990: INFO: Waiting for pod pod-secrets-a9c6d527-4c0d-45e1-9164-903c5224dc11 to disappear Jan 9 14:34:43.001: INFO: Pod pod-secrets-a9c6d527-4c0d-45e1-9164-903c5224dc11 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:34:43.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6398" for this suite. 
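Editor's note: the secrets test above mounts a secret into a pod volume with defaultMode set and checks the resulting file permissions. A sketch with the names taken from the log; the mode value, image, command and mount path are assumptions:

```yaml
# Hypothetical pod mounting the test secret with an explicit defaultMode.
# Only the pod, secret and container names come from the log.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-a9c6d527-4c0d-45e1-9164-903c5224dc11
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-48c629d2-ad66-49f1-8665-57095da4eba0
      defaultMode: 0400        # assumed mode; the test asserts on file permissions
  containers:
  - name: secret-volume-test
    image: busybox:1.29        # assumed image
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
```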
Jan 9 14:34:49.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:34:49.419: INFO: namespace secrets-6398 deletion completed in 6.405891564s • [SLOW TEST:14.782 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:34:49.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-a1a434a9-66fe-492c-8782-1c2c6423522c STEP: Creating a pod to test consume configMaps Jan 9 14:34:49.572: INFO: Waiting up to 5m0s for pod "pod-configmaps-ab7a41d0-3f3f-48a6-99c1-70b416b36baa" in namespace "configmap-5134" to be "success or failure" Jan 9 14:34:49.580: INFO: Pod "pod-configmaps-ab7a41d0-3f3f-48a6-99c1-70b416b36baa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.511053ms Jan 9 14:34:51.598: INFO: Pod "pod-configmaps-ab7a41d0-3f3f-48a6-99c1-70b416b36baa": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.025863916s Jan 9 14:34:53.610: INFO: Pod "pod-configmaps-ab7a41d0-3f3f-48a6-99c1-70b416b36baa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037841104s Jan 9 14:34:55.627: INFO: Pod "pod-configmaps-ab7a41d0-3f3f-48a6-99c1-70b416b36baa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055272896s Jan 9 14:34:57.643: INFO: Pod "pod-configmaps-ab7a41d0-3f3f-48a6-99c1-70b416b36baa": Phase="Running", Reason="", readiness=true. Elapsed: 8.071310047s Jan 9 14:34:59.651: INFO: Pod "pod-configmaps-ab7a41d0-3f3f-48a6-99c1-70b416b36baa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079521769s STEP: Saw pod success Jan 9 14:34:59.651: INFO: Pod "pod-configmaps-ab7a41d0-3f3f-48a6-99c1-70b416b36baa" satisfied condition "success or failure" Jan 9 14:34:59.656: INFO: Trying to get logs from node iruya-node pod pod-configmaps-ab7a41d0-3f3f-48a6-99c1-70b416b36baa container configmap-volume-test: STEP: delete the pod Jan 9 14:34:59.734: INFO: Waiting for pod pod-configmaps-ab7a41d0-3f3f-48a6-99c1-70b416b36baa to disappear Jan 9 14:34:59.746: INFO: Pod pod-configmaps-ab7a41d0-3f3f-48a6-99c1-70b416b36baa no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:34:59.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5134" for this suite. 
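Editor's note: the ConfigMap test above creates a ConfigMap, mounts it as a volume, and reads a key back from the filesystem. A sketch with the object names from the log; the data key/value, image and mount path are assumptions:

```yaml
# Hypothetical ConfigMap-volume pod matching the names logged above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-a1a434a9-66fe-492c-8782-1c2c6423522c
data:
  data-1: value-1              # assumed key/value pair
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-ab7a41d0-3f3f-48a6-99c1-70b416b36baa
spec:
  restartPolicy: Never
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-a1a434a9-66fe-492c-8782-1c2c6423522c
  containers:
  - name: configmap-volume-test
    image: busybox:1.29        # assumed image
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
```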
Jan 9 14:35:05.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:35:05.972: INFO: namespace configmap-5134 deletion completed in 6.219140818s • [SLOW TEST:16.553 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:35:05.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jan 9 14:35:16.712: INFO: Successfully updated pod "annotationupdate02ce5361-0ffe-40b4-8225-c7bdbfdc912c" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:35:18.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-880" for this suite. 
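Editor's note: the downward API test above works by projecting metadata.annotations into a volume file; when the pod's annotations are updated ("Successfully updated pod annotationupdate..."), the kubelet rewrites the file in place. A sketch of that setup; the annotation, image, command and mount path are assumptions, only the pod name comes from the log:

```yaml
# Hypothetical pod projecting its own annotations via a downwardAPI volume.
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate02ce5361-0ffe-40b4-8225-c7bdbfdc912c
  annotations:
    builder: bar               # assumed annotation the test later updates
spec:
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
  containers:
  - name: client-container
    image: busybox:1.29        # assumed image
    # Re-reads the projected file so the update becomes observable in logs.
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
```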
Jan 9 14:35:40.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:35:41.000: INFO: namespace downward-api-880 deletion completed in 22.184503573s • [SLOW TEST:35.028 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:35:41.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jan 9 14:35:41.119: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7711,SelfLink:/api/v1/namespaces/watch-7711/configmaps/e2e-watch-test-watch-closed,UID:bcd59ec0-1a90-49c5-93c2-9208a03152dc,ResourceVersion:19913061,Generation:0,CreationTimestamp:2020-01-09 14:35:41 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 9 14:35:41.119: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7711,SelfLink:/api/v1/namespaces/watch-7711/configmaps/e2e-watch-test-watch-closed,UID:bcd59ec0-1a90-49c5-93c2-9208a03152dc,ResourceVersion:19913062,Generation:0,CreationTimestamp:2020-01-09 14:35:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jan 9 14:35:41.140: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7711,SelfLink:/api/v1/namespaces/watch-7711/configmaps/e2e-watch-test-watch-closed,UID:bcd59ec0-1a90-49c5-93c2-9208a03152dc,ResourceVersion:19913063,Generation:0,CreationTimestamp:2020-01-09 14:35:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 9 
14:35:41.140: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7711,SelfLink:/api/v1/namespaces/watch-7711/configmaps/e2e-watch-test-watch-closed,UID:bcd59ec0-1a90-49c5-93c2-9208a03152dc,ResourceVersion:19913064,Generation:0,CreationTimestamp:2020-01-09 14:35:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:35:41.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7711" for this suite. Jan 9 14:35:47.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:35:47.252: INFO: namespace watch-7711 deletion completed in 6.108376918s • [SLOW TEST:6.252 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Jan 9 14:35:47.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-97cc2e66-79e0-47b3-a1ad-f930f599b1df STEP: Creating a pod to test consume configMaps Jan 9 14:35:47.377: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-48c15a01-5e09-4524-b9a4-85fd14eb7396" in namespace "projected-2478" to be "success or failure" Jan 9 14:35:47.399: INFO: Pod "pod-projected-configmaps-48c15a01-5e09-4524-b9a4-85fd14eb7396": Phase="Pending", Reason="", readiness=false. Elapsed: 21.904667ms Jan 9 14:35:49.407: INFO: Pod "pod-projected-configmaps-48c15a01-5e09-4524-b9a4-85fd14eb7396": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030050345s Jan 9 14:35:51.415: INFO: Pod "pod-projected-configmaps-48c15a01-5e09-4524-b9a4-85fd14eb7396": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037948182s Jan 9 14:35:53.422: INFO: Pod "pod-projected-configmaps-48c15a01-5e09-4524-b9a4-85fd14eb7396": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044832396s Jan 9 14:35:55.435: INFO: Pod "pod-projected-configmaps-48c15a01-5e09-4524-b9a4-85fd14eb7396": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057931486s Jan 9 14:35:57.444: INFO: Pod "pod-projected-configmaps-48c15a01-5e09-4524-b9a4-85fd14eb7396": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.067409792s STEP: Saw pod success Jan 9 14:35:57.444: INFO: Pod "pod-projected-configmaps-48c15a01-5e09-4524-b9a4-85fd14eb7396" satisfied condition "success or failure" Jan 9 14:35:57.449: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-48c15a01-5e09-4524-b9a4-85fd14eb7396 container projected-configmap-volume-test: STEP: delete the pod Jan 9 14:35:57.505: INFO: Waiting for pod pod-projected-configmaps-48c15a01-5e09-4524-b9a4-85fd14eb7396 to disappear Jan 9 14:35:57.578: INFO: Pod pod-projected-configmaps-48c15a01-5e09-4524-b9a4-85fd14eb7396 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:35:57.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2478" for this suite. Jan 9 14:36:03.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:36:03.765: INFO: namespace projected-2478 deletion completed in 6.179601583s • [SLOW TEST:16.513 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:36:03.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:36:13.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1965" for this suite. Jan 9 14:36:20.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:36:20.274: INFO: namespace emptydir-wrapper-1965 deletion completed in 6.26732487s • [SLOW TEST:16.509 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:36:20.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 9 14:36:20.413: INFO: Create a RollingUpdate DaemonSet Jan 9 14:36:20.417: INFO: Check that daemon pods launch on every node of the cluster Jan 9 14:36:20.451: INFO: Number of nodes with available pods: 0 Jan 9 14:36:20.451: INFO: Node iruya-node is running more than one daemon pod Jan 9 14:36:21.468: INFO: Number of nodes with available pods: 0 Jan 9 14:36:21.468: INFO: Node iruya-node is running more than one daemon pod Jan 9 14:36:22.657: INFO: Number of nodes with available pods: 0 Jan 9 14:36:22.657: INFO: Node iruya-node is running more than one daemon pod Jan 9 14:36:23.464: INFO: Number of nodes with available pods: 0 Jan 9 14:36:23.464: INFO: Node iruya-node is running more than one daemon pod Jan 9 14:36:24.467: INFO: Number of nodes with available pods: 0 Jan 9 14:36:24.467: INFO: Node iruya-node is running more than one daemon pod Jan 9 14:36:25.461: INFO: Number of nodes with available pods: 0 Jan 9 14:36:25.461: INFO: Node iruya-node is running more than one daemon pod Jan 9 14:36:27.607: INFO: Number of nodes with available pods: 0 Jan 9 14:36:27.608: INFO: Node iruya-node is running more than one daemon pod Jan 9 14:36:28.556: INFO: Number of nodes with available pods: 0 Jan 9 14:36:28.556: INFO: Node iruya-node is running more than one daemon pod Jan 9 14:36:29.460: INFO: Number of nodes with available pods: 0 Jan 9 14:36:29.460: INFO: Node iruya-node is running more than one daemon pod Jan 9 14:36:30.503: INFO: Number of nodes with available pods: 1 Jan 9 14:36:30.503: INFO: Node iruya-node is running more than one daemon pod Jan 9 14:36:31.475: INFO: Number of nodes with available pods: 2 Jan 9 14:36:31.475: INFO: Number of running nodes: 2, number of available pods: 2 Jan 9 14:36:31.475: INFO: Update the DaemonSet to trigger a rollout Jan 9 14:36:31.486: INFO: Updating DaemonSet daemon-set Jan 9 14:36:37.526: INFO: Roll back the DaemonSet before 
rollout is complete Jan 9 14:36:37.539: INFO: Updating DaemonSet daemon-set Jan 9 14:36:37.539: INFO: Make sure DaemonSet rollback is complete Jan 9 14:36:37.593: INFO: Wrong image for pod: daemon-set-j96lk. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jan 9 14:36:37.593: INFO: Pod daemon-set-j96lk is not available Jan 9 14:36:38.641: INFO: Wrong image for pod: daemon-set-j96lk. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jan 9 14:36:38.641: INFO: Pod daemon-set-j96lk is not available Jan 9 14:36:39.640: INFO: Wrong image for pod: daemon-set-j96lk. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jan 9 14:36:39.641: INFO: Pod daemon-set-j96lk is not available Jan 9 14:36:40.637: INFO: Wrong image for pod: daemon-set-j96lk. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jan 9 14:36:40.637: INFO: Pod daemon-set-j96lk is not available Jan 9 14:36:41.634: INFO: Wrong image for pod: daemon-set-j96lk. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jan 9 14:36:41.634: INFO: Pod daemon-set-j96lk is not available Jan 9 14:36:42.642: INFO: Wrong image for pod: daemon-set-j96lk. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jan 9 14:36:42.643: INFO: Pod daemon-set-j96lk is not available Jan 9 14:36:44.060: INFO: Wrong image for pod: daemon-set-j96lk. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jan 9 14:36:44.060: INFO: Pod daemon-set-j96lk is not available Jan 9 14:36:44.636: INFO: Wrong image for pod: daemon-set-j96lk. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
Jan 9 14:36:44.636: INFO: Pod daemon-set-j96lk is not available Jan 9 14:36:45.639: INFO: Pod daemon-set-r8bps is not available [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8149, will wait for the garbage collector to delete the pods Jan 9 14:36:45.874: INFO: Deleting DaemonSet.extensions daemon-set took: 156.777209ms Jan 9 14:36:46.275: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.441021ms Jan 9 14:36:53.302: INFO: Number of nodes with available pods: 0 Jan 9 14:36:53.302: INFO: Number of running nodes: 0, number of available pods: 0 Jan 9 14:36:53.307: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8149/daemonsets","resourceVersion":"19913292"},"items":null} Jan 9 14:36:53.312: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8149/pods","resourceVersion":"19913292"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:36:53.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8149" for this suite. 
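The rollback check logged above repeatedly compares each daemon pod's image against the expected one (`Wrong image for pod: daemon-set-j96lk. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.`) and its availability, until no pod reports a mismatch. A simplified stand-in for that per-iteration check (an assumption about the shape of the logic, not `daemon_set.go` itself):

```python
def rollback_complete(pods, expected_image):
    """Return (done, problems): done is True once every daemon pod runs
    the expected image and is available, as in the e2e rollback check."""
    problems = []
    for name, image, available in pods:
        if image != expected_image:
            problems.append(f"Wrong image for pod: {name}. "
                            f"Expected: {expected_image}, got: {image}.")
        if not available:
            problems.append(f"Pod {name} is not available")
    return (not problems), problems

# Mid-rollback: the old pod still carries the bad image.
mid = [("daemon-set-j96lk", "foo:non-existent", False)]
done, msgs = rollback_complete(mid, "docker.io/library/nginx:1.14-alpine")
# After rollback: a fresh pod with the right image is available.
after = [("daemon-set-r8bps", "docker.io/library/nginx:1.14-alpine", True)]
done2, _ = rollback_complete(after, "docker.io/library/nginx:1.14-alpine")
```

In the real test this check runs inside a poll loop like the one above, once per second, until it returns done.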
Jan 9 14:36:59.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:36:59.500: INFO: namespace daemonsets-8149 deletion completed in 6.172001514s • [SLOW TEST:39.226 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:36:59.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 9 14:36:59.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jan 9 14:36:59.760: INFO: stderr: "" Jan 9 14:36:59.760: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", 
GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:36:59.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3572" for this suite. Jan 9 14:37:05.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:37:06.045: INFO: namespace kubectl-3572 deletion completed in 6.275476881s • [SLOW TEST:6.543 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:37:06.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name 
configmap-test-volume-map-b1bab5ec-aab6-467d-8558-d9df699a4dde STEP: Creating a pod to test consume configMaps Jan 9 14:37:06.223: INFO: Waiting up to 5m0s for pod "pod-configmaps-8c24ed9c-061b-4e99-a8e3-f23c8be64596" in namespace "configmap-1210" to be "success or failure" Jan 9 14:37:06.244: INFO: Pod "pod-configmaps-8c24ed9c-061b-4e99-a8e3-f23c8be64596": Phase="Pending", Reason="", readiness=false. Elapsed: 21.103926ms Jan 9 14:37:08.256: INFO: Pod "pod-configmaps-8c24ed9c-061b-4e99-a8e3-f23c8be64596": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033564567s Jan 9 14:37:10.273: INFO: Pod "pod-configmaps-8c24ed9c-061b-4e99-a8e3-f23c8be64596": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050285428s Jan 9 14:37:12.279: INFO: Pod "pod-configmaps-8c24ed9c-061b-4e99-a8e3-f23c8be64596": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056304153s Jan 9 14:37:14.291: INFO: Pod "pod-configmaps-8c24ed9c-061b-4e99-a8e3-f23c8be64596": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.067896559s STEP: Saw pod success Jan 9 14:37:14.291: INFO: Pod "pod-configmaps-8c24ed9c-061b-4e99-a8e3-f23c8be64596" satisfied condition "success or failure" Jan 9 14:37:14.295: INFO: Trying to get logs from node iruya-node pod pod-configmaps-8c24ed9c-061b-4e99-a8e3-f23c8be64596 container configmap-volume-test: STEP: delete the pod Jan 9 14:37:14.342: INFO: Waiting for pod pod-configmaps-8c24ed9c-061b-4e99-a8e3-f23c8be64596 to disappear Jan 9 14:37:14.346: INFO: Pod pod-configmaps-8c24ed9c-061b-4e99-a8e3-f23c8be64596 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:37:14.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1210" for this suite. 
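The "with mappings" variant above differs from the plain configMap volume test in how keys become files: without item mappings every key is written to a file named after the key, while a mapping redirects a key to a caller-chosen path. A sketch of that projection rule (the key and path names below are invented for illustration; the log does not show the actual configMap contents):

```python
def project_configmap(data, items=None):
    """Return {file_path: contents} the way a configMap volume lays keys
    out: with no items, each key becomes a file named after it; with
    items, each listed key is written to its mapped path instead."""
    if items is None:
        return dict(data)
    return {path: data[key] for key, path in items}

data = {"data-2": "value-2"}          # hypothetical configMap contents
plain = project_configmap(data)       # file name == key name
mapped = project_configmap(data, items=[("data-2", "path/to/data-2")])
```

With a mapping present, only the listed keys are projected, which is why the test mounts the same configMap both ways to compare the resulting file trees.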
Jan 9 14:37:20.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:37:20.473: INFO: namespace configmap-1210 deletion completed in 6.123137583s • [SLOW TEST:14.428 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:37:20.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-c8d44803-7165-42a1-aea6-019f490e04f0 STEP: Creating a pod to test consume configMaps Jan 9 14:37:20.632: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b63ca1c6-1d54-4766-a3c7-a2a60deac5f0" in namespace "projected-4923" to be "success or failure" Jan 9 14:37:20.652: INFO: Pod "pod-projected-configmaps-b63ca1c6-1d54-4766-a3c7-a2a60deac5f0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.625696ms Jan 9 14:37:22.663: INFO: Pod "pod-projected-configmaps-b63ca1c6-1d54-4766-a3c7-a2a60deac5f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030579744s Jan 9 14:37:24.671: INFO: Pod "pod-projected-configmaps-b63ca1c6-1d54-4766-a3c7-a2a60deac5f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038390287s Jan 9 14:37:26.680: INFO: Pod "pod-projected-configmaps-b63ca1c6-1d54-4766-a3c7-a2a60deac5f0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048144898s Jan 9 14:37:28.695: INFO: Pod "pod-projected-configmaps-b63ca1c6-1d54-4766-a3c7-a2a60deac5f0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062834713s Jan 9 14:37:30.705: INFO: Pod "pod-projected-configmaps-b63ca1c6-1d54-4766-a3c7-a2a60deac5f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072440516s STEP: Saw pod success Jan 9 14:37:30.705: INFO: Pod "pod-projected-configmaps-b63ca1c6-1d54-4766-a3c7-a2a60deac5f0" satisfied condition "success or failure" Jan 9 14:37:30.710: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-b63ca1c6-1d54-4766-a3c7-a2a60deac5f0 container projected-configmap-volume-test: STEP: delete the pod Jan 9 14:37:30.846: INFO: Waiting for pod pod-projected-configmaps-b63ca1c6-1d54-4766-a3c7-a2a60deac5f0 to disappear Jan 9 14:37:30.860: INFO: Pod pod-projected-configmaps-b63ca1c6-1d54-4766-a3c7-a2a60deac5f0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:37:30.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4923" for this suite. 
Jan 9 14:37:36.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:37:36.984: INFO: namespace projected-4923 deletion completed in 6.118791489s • [SLOW TEST:16.510 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:37:36.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 9 14:37:37.069: INFO: Waiting up to 5m0s for pod "downwardapi-volume-887950d4-7303-4aa3-9730-125e6212f186" in namespace "projected-7066" to be "success or failure" Jan 9 14:37:37.082: INFO: Pod "downwardapi-volume-887950d4-7303-4aa3-9730-125e6212f186": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.953561ms Jan 9 14:37:39.089: INFO: Pod "downwardapi-volume-887950d4-7303-4aa3-9730-125e6212f186": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01915239s Jan 9 14:37:41.098: INFO: Pod "downwardapi-volume-887950d4-7303-4aa3-9730-125e6212f186": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028164965s Jan 9 14:37:43.104: INFO: Pod "downwardapi-volume-887950d4-7303-4aa3-9730-125e6212f186": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034375097s Jan 9 14:37:45.110: INFO: Pod "downwardapi-volume-887950d4-7303-4aa3-9730-125e6212f186": Phase="Pending", Reason="", readiness=false. Elapsed: 8.040908188s Jan 9 14:37:47.119: INFO: Pod "downwardapi-volume-887950d4-7303-4aa3-9730-125e6212f186": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.049792688s STEP: Saw pod success Jan 9 14:37:47.119: INFO: Pod "downwardapi-volume-887950d4-7303-4aa3-9730-125e6212f186" satisfied condition "success or failure" Jan 9 14:37:47.123: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-887950d4-7303-4aa3-9730-125e6212f186 container client-container: STEP: delete the pod Jan 9 14:37:47.174: INFO: Waiting for pod downwardapi-volume-887950d4-7303-4aa3-9730-125e6212f186 to disappear Jan 9 14:37:47.203: INFO: Pod downwardapi-volume-887950d4-7303-4aa3-9730-125e6212f186 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:37:47.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7066" for this suite. 
Jan 9 14:37:53.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:37:53.409: INFO: namespace projected-7066 deletion completed in 6.200828485s • [SLOW TEST:16.425 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:37:53.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5095.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5095.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 9 14:38:07.599: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-5095/dns-test-9c93b0e4-9a6f-4f37-ac04-f2b34db079c8: the server could not find the requested resource (get pods dns-test-9c93b0e4-9a6f-4f37-ac04-f2b34db079c8) Jan 9 14:38:07.613: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-5095/dns-test-9c93b0e4-9a6f-4f37-ac04-f2b34db079c8: the server could not find the requested resource (get pods dns-test-9c93b0e4-9a6f-4f37-ac04-f2b34db079c8) Jan 9 14:38:07.628: INFO: Unable to read wheezy_udp@PodARecord from pod dns-5095/dns-test-9c93b0e4-9a6f-4f37-ac04-f2b34db079c8: the server could not find the requested resource (get pods 
dns-test-9c93b0e4-9a6f-4f37-ac04-f2b34db079c8) Jan 9 14:38:07.648: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-5095/dns-test-9c93b0e4-9a6f-4f37-ac04-f2b34db079c8: the server could not find the requested resource (get pods dns-test-9c93b0e4-9a6f-4f37-ac04-f2b34db079c8) Jan 9 14:38:07.656: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-5095/dns-test-9c93b0e4-9a6f-4f37-ac04-f2b34db079c8: the server could not find the requested resource (get pods dns-test-9c93b0e4-9a6f-4f37-ac04-f2b34db079c8) Jan 9 14:38:07.662: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-5095/dns-test-9c93b0e4-9a6f-4f37-ac04-f2b34db079c8: the server could not find the requested resource (get pods dns-test-9c93b0e4-9a6f-4f37-ac04-f2b34db079c8) Jan 9 14:38:07.668: INFO: Unable to read jessie_udp@PodARecord from pod dns-5095/dns-test-9c93b0e4-9a6f-4f37-ac04-f2b34db079c8: the server could not find the requested resource (get pods dns-test-9c93b0e4-9a6f-4f37-ac04-f2b34db079c8) Jan 9 14:38:07.674: INFO: Unable to read jessie_tcp@PodARecord from pod dns-5095/dns-test-9c93b0e4-9a6f-4f37-ac04-f2b34db079c8: the server could not find the requested resource (get pods dns-test-9c93b0e4-9a6f-4f37-ac04-f2b34db079c8) Jan 9 14:38:07.674: INFO: Lookups using dns-5095/dns-test-9c93b0e4-9a6f-4f37-ac04-f2b34db079c8 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord] Jan 9 14:38:12.755: INFO: DNS probes using dns-5095/dns-test-9c93b0e4-9a6f-4f37-ac04-f2b34db079c8 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:38:12.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
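[Editor's note] The probe commands above derive the pod A record name from the pod IP with an awk one-liner: dots become dashes, and the namespace-scoped suffix `.pod.cluster.local` is appended. A minimal Python sketch of that transformation (the function name `pod_a_record` is ours, not part of the e2e framework):

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Build the pod A record name queried by the DNS probe:
    dots in the pod IP become dashes, suffixed with
    <namespace>.pod.cluster.local (mirrors the logged awk one-liner)."""
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

# e.g. pod_a_record("10.44.0.5", "dns-5095")
# -> "10-44-0-5.dns-5095.pod.cluster.local"
```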
STEP: Destroying namespace "dns-5095" for this suite.
Jan 9 14:38:19.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 14:38:19.161: INFO: namespace dns-5095 deletion completed in 6.229803598s
• [SLOW TEST:25.752 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 14:38:19.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 9 14:38:51.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9882" for this suite.
Jan 9 14:38:57.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 14:38:57.766: INFO: namespace namespaces-9882 deletion completed in 6.188578629s
STEP: Destroying namespace "nsdeletetest-6459" for this suite.
Jan 9 14:38:57.770: INFO: Namespace nsdeletetest-6459 was already deleted
STEP: Destroying namespace "nsdeletetest-2226" for this suite.
Jan 9 14:39:03.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 14:39:03.943: INFO: namespace nsdeletetest-2226 deletion completed in 6.17288774s
• [SLOW TEST:44.782 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 9 14:39:03.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-088c027b-cbe7-476b-8b79-834cf987721f
STEP: Creating a pod to test consume secrets
Jan 9 14:39:04.166: INFO: Waiting up to 5m0s for pod
"pod-secrets-8f3513a4-e8e7-4d54-96c2-c67a4f0f36bf" in namespace "secrets-4248" to be "success or failure" Jan 9 14:39:04.171: INFO: Pod "pod-secrets-8f3513a4-e8e7-4d54-96c2-c67a4f0f36bf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.355309ms Jan 9 14:39:06.182: INFO: Pod "pod-secrets-8f3513a4-e8e7-4d54-96c2-c67a4f0f36bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016287871s Jan 9 14:39:08.189: INFO: Pod "pod-secrets-8f3513a4-e8e7-4d54-96c2-c67a4f0f36bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023508187s Jan 9 14:39:10.198: INFO: Pod "pod-secrets-8f3513a4-e8e7-4d54-96c2-c67a4f0f36bf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032639451s Jan 9 14:39:12.208: INFO: Pod "pod-secrets-8f3513a4-e8e7-4d54-96c2-c67a4f0f36bf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042336282s Jan 9 14:39:14.217: INFO: Pod "pod-secrets-8f3513a4-e8e7-4d54-96c2-c67a4f0f36bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.051279245s STEP: Saw pod success Jan 9 14:39:14.217: INFO: Pod "pod-secrets-8f3513a4-e8e7-4d54-96c2-c67a4f0f36bf" satisfied condition "success or failure" Jan 9 14:39:14.221: INFO: Trying to get logs from node iruya-node pod pod-secrets-8f3513a4-e8e7-4d54-96c2-c67a4f0f36bf container secret-volume-test: STEP: delete the pod Jan 9 14:39:14.272: INFO: Waiting for pod pod-secrets-8f3513a4-e8e7-4d54-96c2-c67a4f0f36bf to disappear Jan 9 14:39:14.276: INFO: Pod pod-secrets-8f3513a4-e8e7-4d54-96c2-c67a4f0f36bf no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:39:14.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4248" for this suite. 
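[Editor's note] The repeated "Waiting up to 5m0s for pod ... to be \"success or failure\"" entries above come from the framework polling the pod phase (Pending -> Succeeded) until a terminal phase or a timeout. A minimal Python sketch of that polling pattern, under our own names (`wait_for_pod_phase` is illustrative, not the framework's Go API):

```python
import time

def wait_for_pod_phase(get_phase, targets=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase or the
    timeout elapses; mirrors the logged 'Waiting up to 5m0s for pod'
    loop that reports Phase/Elapsed on every attempt."""
    phase = None
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()
        if phase in targets:
            return phase
        sleep(interval)  # the framework polls roughly every 2s
    raise TimeoutError(f"pod still in phase {phase!r} after {timeout}s")
```

With a stubbed phase source, `wait_for_pod_phase` returns as soon as it observes "Succeeded", just as the log shows after several Pending attempts.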
Jan 9 14:39:20.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:39:20.494: INFO: namespace secrets-4248 deletion completed in 6.212647333s • [SLOW TEST:16.550 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:39:20.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-43e0112e-b84f-460c-a4d2-72bff773d877 Jan 9 14:39:20.651: INFO: Pod name my-hostname-basic-43e0112e-b84f-460c-a4d2-72bff773d877: Found 0 pods out of 1 Jan 9 14:39:25.664: INFO: Pod name my-hostname-basic-43e0112e-b84f-460c-a4d2-72bff773d877: Found 1 pods out of 1 Jan 9 14:39:25.664: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-43e0112e-b84f-460c-a4d2-72bff773d877" are running Jan 9 14:39:29.678: INFO: Pod "my-hostname-basic-43e0112e-b84f-460c-a4d2-72bff773d877-fbfl2" is running (conditions: [{Type:Initialized Status:True 
LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-09 14:39:20 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-09 14:39:20 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-43e0112e-b84f-460c-a4d2-72bff773d877]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-09 14:39:20 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-43e0112e-b84f-460c-a4d2-72bff773d877]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-09 14:39:20 +0000 UTC Reason: Message:}]) Jan 9 14:39:29.678: INFO: Trying to dial the pod Jan 9 14:39:34.773: INFO: Controller my-hostname-basic-43e0112e-b84f-460c-a4d2-72bff773d877: Got expected result from replica 1 [my-hostname-basic-43e0112e-b84f-460c-a4d2-72bff773d877-fbfl2]: "my-hostname-basic-43e0112e-b84f-460c-a4d2-72bff773d877-fbfl2", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:39:34.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-586" for this suite. 
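[Editor's note] The ReplicationController test above dials each replica and requires the response body to equal the serving pod's name ("Got expected result from replica 1 [...]: ..., 1 of 1 required successes"). A minimal Python sketch of that success criterion (`replicas_ok` is our illustrative name):

```python
def replicas_ok(responses: dict, expected_pods: list) -> bool:
    """True once every expected pod has answered with its own name,
    i.e. the per-replica check the RC conformance test performs."""
    return all(responses.get(pod) == pod for pod in expected_pods)
```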
Jan 9 14:39:40.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:39:41.144: INFO: namespace replication-controller-586 deletion completed in 6.362765411s • [SLOW TEST:20.650 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:39:41.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 9 14:39:41.245: INFO: Waiting up to 5m0s for pod "pod-e40290c2-5777-4d14-9fd8-20b93d7c64a9" in namespace "emptydir-4540" to be "success or failure" Jan 9 14:39:41.264: INFO: Pod "pod-e40290c2-5777-4d14-9fd8-20b93d7c64a9": Phase="Pending", Reason="", readiness=false. Elapsed: 19.39742ms Jan 9 14:39:43.275: INFO: Pod "pod-e40290c2-5777-4d14-9fd8-20b93d7c64a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030663533s Jan 9 14:39:45.282: INFO: Pod "pod-e40290c2-5777-4d14-9fd8-20b93d7c64a9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.036997586s Jan 9 14:39:47.290: INFO: Pod "pod-e40290c2-5777-4d14-9fd8-20b93d7c64a9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045576349s Jan 9 14:39:49.356: INFO: Pod "pod-e40290c2-5777-4d14-9fd8-20b93d7c64a9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111708308s Jan 9 14:39:51.371: INFO: Pod "pod-e40290c2-5777-4d14-9fd8-20b93d7c64a9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.126477532s Jan 9 14:39:53.383: INFO: Pod "pod-e40290c2-5777-4d14-9fd8-20b93d7c64a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.138749215s STEP: Saw pod success Jan 9 14:39:53.384: INFO: Pod "pod-e40290c2-5777-4d14-9fd8-20b93d7c64a9" satisfied condition "success or failure" Jan 9 14:39:53.390: INFO: Trying to get logs from node iruya-node pod pod-e40290c2-5777-4d14-9fd8-20b93d7c64a9 container test-container: STEP: delete the pod Jan 9 14:39:53.560: INFO: Waiting for pod pod-e40290c2-5777-4d14-9fd8-20b93d7c64a9 to disappear Jan 9 14:39:53.572: INFO: Pod pod-e40290c2-5777-4d14-9fd8-20b93d7c64a9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:39:53.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4540" for this suite. 
Jan 9 14:39:59.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:39:59.743: INFO: namespace emptydir-4540 deletion completed in 6.164141156s • [SLOW TEST:18.598 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:39:59.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-8nn4g in namespace proxy-111 I0109 14:39:59.937030 8 runners.go:180] Created replication controller with name: proxy-service-8nn4g, namespace: proxy-111, replica count: 1 I0109 14:40:00.988274 8 runners.go:180] proxy-service-8nn4g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0109 14:40:01.988749 8 runners.go:180] proxy-service-8nn4g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0109 14:40:02.989168 8 runners.go:180] 
proxy-service-8nn4g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0109 14:40:03.989776 8 runners.go:180] proxy-service-8nn4g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0109 14:40:04.990129 8 runners.go:180] proxy-service-8nn4g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0109 14:40:05.990417 8 runners.go:180] proxy-service-8nn4g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0109 14:40:06.991009 8 runners.go:180] proxy-service-8nn4g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0109 14:40:07.991280 8 runners.go:180] proxy-service-8nn4g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0109 14:40:08.991546 8 runners.go:180] proxy-service-8nn4g Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0109 14:40:09.991820 8 runners.go:180] proxy-service-8nn4g Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0109 14:40:10.992134 8 runners.go:180] proxy-service-8nn4g Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0109 14:40:11.992460 8 runners.go:180] proxy-service-8nn4g Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 9 14:40:11.999: INFO: setup took 12.190350961s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jan 9 14:40:12.052: INFO: (0) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2/proxy/: test (200; 52.628858ms) 
Jan 9 14:40:12.052: INFO: (0) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname1/proxy/: foo (200; 52.759688ms) Jan 9 14:40:12.052: INFO: (0) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname2/proxy/: bar (200; 52.678365ms) Jan 9 14:40:12.052: INFO: (0) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:162/proxy/: bar (200; 52.925521ms) Jan 9 14:40:12.053: INFO: (0) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:160/proxy/: foo (200; 52.831569ms) Jan 9 14:40:12.053: INFO: (0) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:1080/proxy/: testt... (200; 53.31188ms) Jan 9 14:40:12.055: INFO: (0) /api/v1/namespaces/proxy-111/services/http:proxy-service-8nn4g:portname1/proxy/: foo (200; 55.422081ms) Jan 9 14:40:12.061: INFO: (0) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:162/proxy/: bar (200; 61.22195ms) Jan 9 14:40:12.061: INFO: (0) /api/v1/namespaces/proxy-111/services/http:proxy-service-8nn4g:portname2/proxy/: bar (200; 61.727042ms) Jan 9 14:40:12.071: INFO: (0) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:160/proxy/: foo (200; 71.48507ms) Jan 9 14:40:12.076: INFO: (0) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:460/proxy/: tls baz (200; 76.063465ms) Jan 9 14:40:12.076: INFO: (0) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname2/proxy/: tls qux (200; 76.281203ms) Jan 9 14:40:12.077: INFO: (0) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:443/proxy/: t... 
(200; 23.013707ms) Jan 9 14:40:12.104: INFO: (1) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:160/proxy/: foo (200; 23.772315ms) Jan 9 14:40:12.105: INFO: (1) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:1080/proxy/: testtest (200; 25.052405ms) Jan 9 14:40:12.106: INFO: (1) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:162/proxy/: bar (200; 25.250374ms) Jan 9 14:40:12.106: INFO: (1) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:162/proxy/: bar (200; 25.792022ms) Jan 9 14:40:12.108: INFO: (1) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:462/proxy/: tls qux (200; 27.416742ms) Jan 9 14:40:12.109: INFO: (1) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:443/proxy/: testt... (200; 20.002521ms) Jan 9 14:40:12.139: INFO: (2) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:160/proxy/: foo (200; 22.24736ms) Jan 9 14:40:12.139: INFO: (2) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:160/proxy/: foo (200; 22.289294ms) Jan 9 14:40:12.142: INFO: (2) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname1/proxy/: tls baz (200; 25.635313ms) Jan 9 14:40:12.143: INFO: (2) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2/proxy/: test (200; 26.379395ms) Jan 9 14:40:12.143: INFO: (2) /api/v1/namespaces/proxy-111/services/http:proxy-service-8nn4g:portname2/proxy/: bar (200; 26.352873ms) Jan 9 14:40:12.143: INFO: (2) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname1/proxy/: foo (200; 26.613175ms) Jan 9 14:40:12.144: INFO: (2) /api/v1/namespaces/proxy-111/services/http:proxy-service-8nn4g:portname1/proxy/: foo (200; 27.145761ms) Jan 9 14:40:12.144: INFO: (2) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname2/proxy/: bar (200; 27.482127ms) Jan 9 14:40:12.145: INFO: (2) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname2/proxy/: tls qux (200; 28.549593ms) Jan 9 
14:40:12.156: INFO: (3) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:1080/proxy/: testtest (200; 10.876349ms) Jan 9 14:40:12.157: INFO: (3) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:162/proxy/: bar (200; 10.998248ms) Jan 9 14:40:12.157: INFO: (3) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:460/proxy/: tls baz (200; 10.943109ms) Jan 9 14:40:12.158: INFO: (3) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:443/proxy/: t... (200; 11.550751ms) Jan 9 14:40:12.158: INFO: (3) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:462/proxy/: tls qux (200; 11.30625ms) Jan 9 14:40:12.159: INFO: (3) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname1/proxy/: tls baz (200; 13.046104ms) Jan 9 14:40:12.159: INFO: (3) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname2/proxy/: bar (200; 12.815475ms) Jan 9 14:40:12.160: INFO: (3) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname2/proxy/: tls qux (200; 14.039045ms) Jan 9 14:40:12.160: INFO: (3) /api/v1/namespaces/proxy-111/services/http:proxy-service-8nn4g:portname1/proxy/: foo (200; 13.258561ms) Jan 9 14:40:12.162: INFO: (3) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname1/proxy/: foo (200; 16.428358ms) Jan 9 14:40:12.169: INFO: (4) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:460/proxy/: tls baz (200; 7.264361ms) Jan 9 14:40:12.172: INFO: (4) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2/proxy/: test (200; 10.090249ms) Jan 9 14:40:12.174: INFO: (4) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:1080/proxy/: testt... 
(200; 12.982129ms) Jan 9 14:40:12.175: INFO: (4) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:160/proxy/: foo (200; 12.924192ms) Jan 9 14:40:12.175: INFO: (4) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:462/proxy/: tls qux (200; 12.881492ms) Jan 9 14:40:12.176: INFO: (4) /api/v1/namespaces/proxy-111/services/http:proxy-service-8nn4g:portname2/proxy/: bar (200; 13.408751ms) Jan 9 14:40:12.179: INFO: (4) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname2/proxy/: tls qux (200; 16.3703ms) Jan 9 14:40:12.179: INFO: (4) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname2/proxy/: bar (200; 17.058624ms) Jan 9 14:40:12.179: INFO: (4) /api/v1/namespaces/proxy-111/services/http:proxy-service-8nn4g:portname1/proxy/: foo (200; 17.037641ms) Jan 9 14:40:12.179: INFO: (4) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname1/proxy/: tls baz (200; 16.954092ms) Jan 9 14:40:12.179: INFO: (4) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname1/proxy/: foo (200; 17.113956ms) Jan 9 14:40:12.191: INFO: (5) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:160/proxy/: foo (200; 10.655833ms) Jan 9 14:40:12.191: INFO: (5) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:160/proxy/: foo (200; 10.416549ms) Jan 9 14:40:12.191: INFO: (5) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:460/proxy/: tls baz (200; 11.288291ms) Jan 9 14:40:12.192: INFO: (5) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:1080/proxy/: t... 
(200; 11.176821ms) Jan 9 14:40:12.192: INFO: (5) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:443/proxy/: testtest (200; 11.625017ms) Jan 9 14:40:12.192: INFO: (5) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:162/proxy/: bar (200; 11.230575ms) Jan 9 14:40:12.195: INFO: (5) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname1/proxy/: foo (200; 15.545959ms) Jan 9 14:40:12.196: INFO: (5) /api/v1/namespaces/proxy-111/services/http:proxy-service-8nn4g:portname1/proxy/: foo (200; 14.787385ms) Jan 9 14:40:12.196: INFO: (5) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname2/proxy/: bar (200; 14.9935ms) Jan 9 14:40:12.196: INFO: (5) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname2/proxy/: tls qux (200; 14.832633ms) Jan 9 14:40:12.196: INFO: (5) /api/v1/namespaces/proxy-111/services/http:proxy-service-8nn4g:portname2/proxy/: bar (200; 16.262549ms) Jan 9 14:40:12.196: INFO: (5) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname1/proxy/: tls baz (200; 16.628848ms) Jan 9 14:40:12.203: INFO: (6) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:460/proxy/: tls baz (200; 6.992138ms) Jan 9 14:40:12.205: INFO: (6) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:162/proxy/: bar (200; 8.655269ms) Jan 9 14:40:12.205: INFO: (6) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:162/proxy/: bar (200; 8.811892ms) Jan 9 14:40:12.205: INFO: (6) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:1080/proxy/: testtest (200; 8.962198ms) Jan 9 14:40:12.206: INFO: (6) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:462/proxy/: tls qux (200; 9.453633ms) Jan 9 14:40:12.206: INFO: (6) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:160/proxy/: foo (200; 9.657139ms) Jan 9 14:40:12.206: INFO: (6) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:1080/proxy/: t... 
(200; 9.656625ms) Jan 9 14:40:12.206: INFO: (6) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:160/proxy/: foo (200; 9.788107ms) Jan 9 14:40:12.207: INFO: (6) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:443/proxy/: t... (200; 7.426425ms) Jan 9 14:40:12.217: INFO: (7) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2/proxy/: test (200; 7.548086ms) Jan 9 14:40:12.218: INFO: (7) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:162/proxy/: bar (200; 8.754638ms) Jan 9 14:40:12.219: INFO: (7) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:460/proxy/: tls baz (200; 9.144166ms) Jan 9 14:40:12.219: INFO: (7) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:443/proxy/: testtest (200; 41.039128ms) Jan 9 14:40:12.265: INFO: (8) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:1080/proxy/: testt... (200; 45.815927ms) Jan 9 14:40:12.269: INFO: (8) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:160/proxy/: foo (200; 46.747522ms) Jan 9 14:40:12.270: INFO: (8) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:462/proxy/: tls qux (200; 47.199556ms) Jan 9 14:40:12.271: INFO: (8) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:162/proxy/: bar (200; 48.181555ms) Jan 9 14:40:12.271: INFO: (8) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname2/proxy/: bar (200; 48.216111ms) Jan 9 14:40:12.271: INFO: (8) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname1/proxy/: tls baz (200; 48.148303ms) Jan 9 14:40:12.275: INFO: (8) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname1/proxy/: foo (200; 51.97404ms) Jan 9 14:40:12.275: INFO: (8) /api/v1/namespaces/proxy-111/services/http:proxy-service-8nn4g:portname2/proxy/: bar (200; 52.009874ms) Jan 9 14:40:12.275: INFO: (8) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname2/proxy/: tls qux (200; 52.047888ms) Jan 9 
14:40:12.276: INFO: (8) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:443/proxy/: testt... (200; 15.769794ms) Jan 9 14:40:12.292: INFO: (9) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2/proxy/: test (200; 15.791402ms) Jan 9 14:40:12.292: INFO: (9) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:160/proxy/: foo (200; 15.808745ms) Jan 9 14:40:12.293: INFO: (9) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:462/proxy/: tls qux (200; 16.398574ms) Jan 9 14:40:12.293: INFO: (9) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:162/proxy/: bar (200; 16.69836ms) Jan 9 14:40:12.293: INFO: (9) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:460/proxy/: tls baz (200; 16.596339ms) Jan 9 14:40:12.295: INFO: (9) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname2/proxy/: bar (200; 18.307278ms) Jan 9 14:40:12.295: INFO: (9) /api/v1/namespaces/proxy-111/services/http:proxy-service-8nn4g:portname1/proxy/: foo (200; 19.022554ms) Jan 9 14:40:12.297: INFO: (9) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname1/proxy/: tls baz (200; 20.748877ms) Jan 9 14:40:12.297: INFO: (9) /api/v1/namespaces/proxy-111/services/http:proxy-service-8nn4g:portname2/proxy/: bar (200; 20.792957ms) Jan 9 14:40:12.297: INFO: (9) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname1/proxy/: foo (200; 21.440998ms) Jan 9 14:40:12.298: INFO: (9) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname2/proxy/: tls qux (200; 21.50022ms) Jan 9 14:40:12.314: INFO: (10) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:460/proxy/: tls baz (200; 16.506803ms) Jan 9 14:40:12.314: INFO: (10) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:160/proxy/: foo (200; 16.454733ms) Jan 9 14:40:12.314: INFO: (10) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:162/proxy/: bar (200; 16.322693ms) Jan 9 14:40:12.314: INFO: (10) 
/api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:1080/proxy/: testtest (200; 16.703652ms) Jan 9 14:40:12.315: INFO: (10) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:1080/proxy/: t... (200; 16.74104ms) Jan 9 14:40:12.322: INFO: (10) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:443/proxy/: testt... (200; 25.030973ms) Jan 9 14:40:12.349: INFO: (11) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:162/proxy/: bar (200; 24.986492ms) Jan 9 14:40:12.350: INFO: (11) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname2/proxy/: bar (200; 25.432945ms) Jan 9 14:40:12.350: INFO: (11) /api/v1/namespaces/proxy-111/services/http:proxy-service-8nn4g:portname2/proxy/: bar (200; 25.720844ms) Jan 9 14:40:12.351: INFO: (11) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:460/proxy/: tls baz (200; 26.470926ms) Jan 9 14:40:12.351: INFO: (11) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:443/proxy/: test (200; 26.714157ms) Jan 9 14:40:12.351: INFO: (11) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname1/proxy/: foo (200; 27.052367ms) Jan 9 14:40:12.352: INFO: (11) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname2/proxy/: tls qux (200; 27.506701ms) Jan 9 14:40:12.352: INFO: (11) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname1/proxy/: tls baz (200; 27.361015ms) Jan 9 14:40:12.352: INFO: (11) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:160/proxy/: foo (200; 27.31385ms) Jan 9 14:40:12.352: INFO: (11) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:162/proxy/: bar (200; 27.400548ms) Jan 9 14:40:12.352: INFO: (11) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:160/proxy/: foo (200; 27.653777ms) Jan 9 14:40:12.352: INFO: (11) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:462/proxy/: tls qux (200; 27.546549ms) Jan 9 14:40:12.352: INFO: (11) 
/api/v1/namespaces/proxy-111/services/http:proxy-service-8nn4g:portname1/proxy/: foo (200; 27.834123ms) Jan 9 14:40:12.364: INFO: (12) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2/proxy/: test (200; 12.040694ms) Jan 9 14:40:12.365: INFO: (12) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:462/proxy/: tls qux (200; 12.676358ms) Jan 9 14:40:12.365: INFO: (12) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:162/proxy/: bar (200; 12.572522ms) Jan 9 14:40:12.365: INFO: (12) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:160/proxy/: foo (200; 12.620665ms) Jan 9 14:40:12.365: INFO: (12) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:160/proxy/: foo (200; 13.108062ms) Jan 9 14:40:12.366: INFO: (12) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:460/proxy/: tls baz (200; 13.179857ms) Jan 9 14:40:12.366: INFO: (12) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:1080/proxy/: testt... (200; 14.255147ms) Jan 9 14:40:12.367: INFO: (12) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname1/proxy/: foo (200; 14.938872ms) Jan 9 14:40:12.368: INFO: (12) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname2/proxy/: bar (200; 15.131804ms) Jan 9 14:40:12.368: INFO: (12) /api/v1/namespaces/proxy-111/services/http:proxy-service-8nn4g:portname1/proxy/: foo (200; 15.424243ms) Jan 9 14:40:12.369: INFO: (12) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname2/proxy/: tls qux (200; 16.082594ms) Jan 9 14:40:12.369: INFO: (12) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname1/proxy/: tls baz (200; 16.392192ms) Jan 9 14:40:12.370: INFO: (12) /api/v1/namespaces/proxy-111/services/http:proxy-service-8nn4g:portname2/proxy/: bar (200; 17.985652ms) Jan 9 14:40:12.375: INFO: (13) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:160/proxy/: foo (200; 4.652863ms) Jan 9 14:40:12.380: INFO: (13) 
/api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:460/proxy/: tls baz (200; 9.369405ms) Jan 9 14:40:12.380: INFO: (13) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:462/proxy/: tls qux (200; 9.86244ms) Jan 9 14:40:12.392: INFO: (13) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname2/proxy/: tls qux (200; 21.355832ms) Jan 9 14:40:12.392: INFO: (13) /api/v1/namespaces/proxy-111/services/http:proxy-service-8nn4g:portname1/proxy/: foo (200; 21.474841ms) Jan 9 14:40:12.392: INFO: (13) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname1/proxy/: tls baz (200; 21.443835ms) Jan 9 14:40:12.392: INFO: (13) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:162/proxy/: bar (200; 21.576849ms) Jan 9 14:40:12.392: INFO: (13) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname2/proxy/: bar (200; 21.541559ms) Jan 9 14:40:12.392: INFO: (13) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:160/proxy/: foo (200; 21.543412ms) Jan 9 14:40:12.392: INFO: (13) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:1080/proxy/: t... (200; 21.877977ms) Jan 9 14:40:12.392: INFO: (13) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:443/proxy/: test (200; 22.10024ms) Jan 9 14:40:12.393: INFO: (13) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:1080/proxy/: testtestt... 
(200; 12.482151ms) Jan 9 14:40:12.409: INFO: (14) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:160/proxy/: foo (200; 11.950086ms) Jan 9 14:40:12.409: INFO: (14) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2/proxy/: test (200; 12.310094ms) Jan 9 14:40:12.409: INFO: (14) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:162/proxy/: bar (200; 12.247825ms) Jan 9 14:40:12.410: INFO: (14) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:162/proxy/: bar (200; 12.466543ms) Jan 9 14:40:12.410: INFO: (14) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:460/proxy/: tls baz (200; 12.799257ms) Jan 9 14:40:12.410: INFO: (14) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:160/proxy/: foo (200; 12.548938ms) Jan 9 14:40:12.410: INFO: (14) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:443/proxy/: t... (200; 17.896704ms) Jan 9 14:40:12.431: INFO: (15) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:162/proxy/: bar (200; 17.73791ms) Jan 9 14:40:12.431: INFO: (15) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:160/proxy/: foo (200; 17.782371ms) Jan 9 14:40:12.431: INFO: (15) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname1/proxy/: tls baz (200; 17.643163ms) Jan 9 14:40:12.431: INFO: (15) /api/v1/namespaces/proxy-111/services/http:proxy-service-8nn4g:portname1/proxy/: foo (200; 17.939986ms) Jan 9 14:40:12.431: INFO: (15) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname2/proxy/: bar (200; 18.052058ms) Jan 9 14:40:12.431: INFO: (15) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:1080/proxy/: testtest (200; 24.258732ms) Jan 9 14:40:12.446: INFO: (16) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:443/proxy/: test (200; 9.717004ms) Jan 9 14:40:12.448: INFO: (16) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:160/proxy/: foo (200; 9.673628ms) Jan 9 
14:40:12.448: INFO: (16) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:462/proxy/: tls qux (200; 10.294606ms) Jan 9 14:40:12.448: INFO: (16) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:160/proxy/: foo (200; 10.320855ms) Jan 9 14:40:12.448: INFO: (16) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:1080/proxy/: testt... (200; 10.555101ms) Jan 9 14:40:12.450: INFO: (16) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:460/proxy/: tls baz (200; 12.135979ms) Jan 9 14:40:12.450: INFO: (16) /api/v1/namespaces/proxy-111/services/http:proxy-service-8nn4g:portname2/proxy/: bar (200; 11.962845ms) Jan 9 14:40:12.451: INFO: (16) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname2/proxy/: bar (200; 13.126422ms) Jan 9 14:40:12.451: INFO: (16) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname2/proxy/: tls qux (200; 13.406822ms) Jan 9 14:40:12.451: INFO: (16) /api/v1/namespaces/proxy-111/services/http:proxy-service-8nn4g:portname1/proxy/: foo (200; 13.530639ms) Jan 9 14:40:12.452: INFO: (16) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname1/proxy/: tls baz (200; 13.626866ms) Jan 9 14:40:12.457: INFO: (17) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:460/proxy/: tls baz (200; 5.323956ms) Jan 9 14:40:12.459: INFO: (17) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:462/proxy/: tls qux (200; 7.414318ms) Jan 9 14:40:12.460: INFO: (17) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:1080/proxy/: t... 
(200; 7.619708ms) Jan 9 14:40:12.460: INFO: (17) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:1080/proxy/: testtest (200; 12.250679ms) Jan 9 14:40:12.471: INFO: (17) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:160/proxy/: foo (200; 19.191939ms) Jan 9 14:40:12.472: INFO: (17) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:162/proxy/: bar (200; 19.661296ms) Jan 9 14:40:12.472: INFO: (17) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:160/proxy/: foo (200; 19.903018ms) Jan 9 14:40:12.473: INFO: (17) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:162/proxy/: bar (200; 20.893683ms) Jan 9 14:40:12.474: INFO: (17) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:443/proxy/: testt... (200; 14.630219ms) Jan 9 14:40:12.493: INFO: (18) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:162/proxy/: bar (200; 15.97418ms) Jan 9 14:40:12.493: INFO: (18) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:443/proxy/: test (200; 16.092802ms) Jan 9 14:40:12.493: INFO: (18) /api/v1/namespaces/proxy-111/services/http:proxy-service-8nn4g:portname1/proxy/: foo (200; 16.416015ms) Jan 9 14:40:12.493: INFO: (18) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname1/proxy/: tls baz (200; 15.988992ms) Jan 9 14:40:12.493: INFO: (18) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:462/proxy/: tls qux (200; 15.966202ms) Jan 9 14:40:12.494: INFO: (18) /api/v1/namespaces/proxy-111/services/http:proxy-service-8nn4g:portname2/proxy/: bar (200; 16.315962ms) Jan 9 14:40:12.495: INFO: (18) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname1/proxy/: foo (200; 17.388484ms) Jan 9 14:40:12.495: INFO: (18) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname2/proxy/: bar (200; 17.299872ms) Jan 9 14:40:12.495: INFO: (18) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname2/proxy/: tls qux (200; 
17.537067ms) Jan 9 14:40:12.506: INFO: (19) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:160/proxy/: foo (200; 10.840381ms) Jan 9 14:40:12.506: INFO: (19) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:162/proxy/: bar (200; 11.379968ms) Jan 9 14:40:12.508: INFO: (19) /api/v1/namespaces/proxy-111/services/http:proxy-service-8nn4g:portname1/proxy/: foo (200; 13.317457ms) Jan 9 14:40:12.509: INFO: (19) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname2/proxy/: bar (200; 14.393579ms) Jan 9 14:40:12.509: INFO: (19) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:1080/proxy/: testtest (200; 14.310009ms) Jan 9 14:40:12.510: INFO: (19) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname1/proxy/: tls baz (200; 14.767175ms) Jan 9 14:40:12.510: INFO: (19) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:460/proxy/: tls baz (200; 15.066923ms) Jan 9 14:40:12.510: INFO: (19) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:1080/proxy/: t... 
(200; 15.094612ms) Jan 9 14:40:12.511: INFO: (19) /api/v1/namespaces/proxy-111/pods/http:proxy-service-8nn4g-567g2:162/proxy/: bar (200; 15.724333ms) Jan 9 14:40:12.511: INFO: (19) /api/v1/namespaces/proxy-111/pods/proxy-service-8nn4g-567g2:160/proxy/: foo (200; 15.929516ms) Jan 9 14:40:12.512: INFO: (19) /api/v1/namespaces/proxy-111/services/http:proxy-service-8nn4g:portname2/proxy/: bar (200; 16.961763ms) Jan 9 14:40:12.513: INFO: (19) /api/v1/namespaces/proxy-111/services/https:proxy-service-8nn4g:tlsportname2/proxy/: tls qux (200; 17.52331ms) Jan 9 14:40:12.513: INFO: (19) /api/v1/namespaces/proxy-111/services/proxy-service-8nn4g:portname1/proxy/: foo (200; 17.72387ms) Jan 9 14:40:12.513: INFO: (19) /api/v1/namespaces/proxy-111/pods/https:proxy-service-8nn4g-567g2:443/proxy/: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 9 14:40:32.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4271' Jan 9 14:40:35.204: INFO: stderr: "" Jan 9 14:40:35.204: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Jan 9 14:40:35.260: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4271' Jan 9 14:40:40.438: INFO: stderr: "" Jan 9 14:40:40.439: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:40:40.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4271" for this suite. Jan 9 14:40:46.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:40:46.672: INFO: namespace kubectl-4271 deletion completed in 6.216912588s • [SLOW TEST:13.932 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:40:46.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a 
pod to test emptydir 0777 on node default medium Jan 9 14:40:46.815: INFO: Waiting up to 5m0s for pod "pod-a2a9d37d-97d6-4177-8b5f-80f304f1d7cd" in namespace "emptydir-9165" to be "success or failure" Jan 9 14:40:46.827: INFO: Pod "pod-a2a9d37d-97d6-4177-8b5f-80f304f1d7cd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.047449ms Jan 9 14:40:48.837: INFO: Pod "pod-a2a9d37d-97d6-4177-8b5f-80f304f1d7cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021819717s Jan 9 14:40:50.845: INFO: Pod "pod-a2a9d37d-97d6-4177-8b5f-80f304f1d7cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030104249s Jan 9 14:40:52.863: INFO: Pod "pod-a2a9d37d-97d6-4177-8b5f-80f304f1d7cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047682287s Jan 9 14:40:54.873: INFO: Pod "pod-a2a9d37d-97d6-4177-8b5f-80f304f1d7cd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057991275s Jan 9 14:40:56.883: INFO: Pod "pod-a2a9d37d-97d6-4177-8b5f-80f304f1d7cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068377186s STEP: Saw pod success Jan 9 14:40:56.884: INFO: Pod "pod-a2a9d37d-97d6-4177-8b5f-80f304f1d7cd" satisfied condition "success or failure" Jan 9 14:40:56.895: INFO: Trying to get logs from node iruya-node pod pod-a2a9d37d-97d6-4177-8b5f-80f304f1d7cd container test-container: STEP: delete the pod Jan 9 14:40:57.072: INFO: Waiting for pod pod-a2a9d37d-97d6-4177-8b5f-80f304f1d7cd to disappear Jan 9 14:40:57.085: INFO: Pod pod-a2a9d37d-97d6-4177-8b5f-80f304f1d7cd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:40:57.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9165" for this suite. 
Jan 9 14:41:03.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:41:03.234: INFO: namespace emptydir-9165 deletion completed in 6.137191898s • [SLOW TEST:16.561 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:41:03.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-0ff5493f-f36e-4b20-862a-14c2175aa304 STEP: Creating a pod to test consume secrets Jan 9 14:41:03.335: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-038ca847-6f13-4ed5-8b73-8cf2b07f49c0" in namespace "projected-1536" to be "success or failure" Jan 9 14:41:03.341: INFO: Pod "pod-projected-secrets-038ca847-6f13-4ed5-8b73-8cf2b07f49c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.826134ms Jan 9 14:41:05.348: INFO: Pod "pod-projected-secrets-038ca847-6f13-4ed5-8b73-8cf2b07f49c0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.013445809s Jan 9 14:41:07.355: INFO: Pod "pod-projected-secrets-038ca847-6f13-4ed5-8b73-8cf2b07f49c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02064289s Jan 9 14:41:09.367: INFO: Pod "pod-projected-secrets-038ca847-6f13-4ed5-8b73-8cf2b07f49c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032424468s Jan 9 14:41:11.734: INFO: Pod "pod-projected-secrets-038ca847-6f13-4ed5-8b73-8cf2b07f49c0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.398876178s Jan 9 14:41:13.826: INFO: Pod "pod-projected-secrets-038ca847-6f13-4ed5-8b73-8cf2b07f49c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.491786136s STEP: Saw pod success Jan 9 14:41:13.827: INFO: Pod "pod-projected-secrets-038ca847-6f13-4ed5-8b73-8cf2b07f49c0" satisfied condition "success or failure" Jan 9 14:41:13.841: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-038ca847-6f13-4ed5-8b73-8cf2b07f49c0 container projected-secret-volume-test: STEP: delete the pod Jan 9 14:41:13.949: INFO: Waiting for pod pod-projected-secrets-038ca847-6f13-4ed5-8b73-8cf2b07f49c0 to disappear Jan 9 14:41:13.956: INFO: Pod pod-projected-secrets-038ca847-6f13-4ed5-8b73-8cf2b07f49c0 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:41:13.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1536" for this suite. 
Jan 9 14:41:19.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:41:20.091: INFO: namespace projected-1536 deletion completed in 6.13005261s • [SLOW TEST:16.857 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:41:20.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-aad1c99c-2010-4d6e-90af-5e1881da692f STEP: Creating a pod to test consume secrets Jan 9 14:41:20.173: INFO: Waiting up to 5m0s for pod "pod-secrets-f7f624cc-4263-4a82-8efb-d91dd8d7e19d" in namespace "secrets-8163" to be "success or failure" Jan 9 14:41:20.201: INFO: Pod "pod-secrets-f7f624cc-4263-4a82-8efb-d91dd8d7e19d": Phase="Pending", Reason="", readiness=false. Elapsed: 28.041042ms Jan 9 14:41:22.209: INFO: Pod "pod-secrets-f7f624cc-4263-4a82-8efb-d91dd8d7e19d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.035845236s Jan 9 14:41:24.219: INFO: Pod "pod-secrets-f7f624cc-4263-4a82-8efb-d91dd8d7e19d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045255446s Jan 9 14:41:26.226: INFO: Pod "pod-secrets-f7f624cc-4263-4a82-8efb-d91dd8d7e19d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052969754s Jan 9 14:41:28.235: INFO: Pod "pod-secrets-f7f624cc-4263-4a82-8efb-d91dd8d7e19d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061662609s Jan 9 14:41:30.246: INFO: Pod "pod-secrets-f7f624cc-4263-4a82-8efb-d91dd8d7e19d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072897796s STEP: Saw pod success Jan 9 14:41:30.246: INFO: Pod "pod-secrets-f7f624cc-4263-4a82-8efb-d91dd8d7e19d" satisfied condition "success or failure" Jan 9 14:41:30.251: INFO: Trying to get logs from node iruya-node pod pod-secrets-f7f624cc-4263-4a82-8efb-d91dd8d7e19d container secret-volume-test: STEP: delete the pod Jan 9 14:41:30.360: INFO: Waiting for pod pod-secrets-f7f624cc-4263-4a82-8efb-d91dd8d7e19d to disappear Jan 9 14:41:30.412: INFO: Pod pod-secrets-f7f624cc-4263-4a82-8efb-d91dd8d7e19d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:41:30.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8163" for this suite. 
Jan 9 14:41:36.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:41:36.619: INFO: namespace secrets-8163 deletion completed in 6.202847304s • [SLOW TEST:16.529 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:41:36.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jan 9 14:41:45.808: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:41:45.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6659" for this suite. 
Jan 9 14:42:09.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:42:10.147: INFO: namespace replicaset-6659 deletion completed in 24.196849205s • [SLOW TEST:33.527 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:42:10.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:42:20.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3531" for this suite. 
Jan 9 14:43:12.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:43:12.528: INFO: namespace kubelet-test-3531 deletion completed in 52.111718709s • [SLOW TEST:62.380 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:43:12.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-efdd47a3-e17d-4636-afd0-7234debac7c7 in namespace container-probe-485 Jan 9 14:43:20.628: INFO: Started pod liveness-efdd47a3-e17d-4636-afd0-7234debac7c7 in namespace container-probe-485 STEP: checking the pod's current state and verifying that restartCount is present Jan 9 14:43:20.663: INFO: Initial 
restart count of pod liveness-efdd47a3-e17d-4636-afd0-7234debac7c7 is 0 Jan 9 14:43:41.007: INFO: Restart count of pod container-probe-485/liveness-efdd47a3-e17d-4636-afd0-7234debac7c7 is now 1 (20.343437739s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:43:41.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-485" for this suite. Jan 9 14:43:47.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:43:47.305: INFO: namespace container-probe-485 deletion completed in 6.182278523s • [SLOW TEST:34.777 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:43:47.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 9 14:43:55.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2689" for this suite. Jan 9 14:44:57.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 14:44:57.693: INFO: namespace kubelet-test-2689 deletion completed in 1m2.144731773s • [SLOW TEST:70.388 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 9 14:44:57.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:44:57.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7791" for this suite.
Jan  9 14:45:19.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:45:20.169: INFO: namespace pods-7791 deletion completed in 22.250921143s

• [SLOW TEST:22.475 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:45:20.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0109 14:46:01.139518 8 metrics_grabber.go:79] Master node is not registered.
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  9 14:46:01.139: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:46:01.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1583" for this suite.
Jan  9 14:46:21.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:46:21.319: INFO: namespace gc-1583 deletion completed in 20.173491678s

• [SLOW TEST:61.149 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:46:21.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Jan  9 14:46:21.490: INFO: Waiting up to 5m0s for pod "client-containers-379453e7-319f-4dba-91fe-5ac5001fd71f" in namespace "containers-3789" to be "success or failure"
Jan  9 14:46:21.496: INFO: Pod "client-containers-379453e7-319f-4dba-91fe-5ac5001fd71f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.339389ms
Jan  9 14:46:23.506: INFO: Pod "client-containers-379453e7-319f-4dba-91fe-5ac5001fd71f": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.016422746s
Jan  9 14:46:25.513: INFO: Pod "client-containers-379453e7-319f-4dba-91fe-5ac5001fd71f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023853309s
Jan  9 14:46:27.536: INFO: Pod "client-containers-379453e7-319f-4dba-91fe-5ac5001fd71f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046739731s
Jan  9 14:46:29.586: INFO: Pod "client-containers-379453e7-319f-4dba-91fe-5ac5001fd71f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.096810106s
STEP: Saw pod success
Jan  9 14:46:29.587: INFO: Pod "client-containers-379453e7-319f-4dba-91fe-5ac5001fd71f" satisfied condition "success or failure"
Jan  9 14:46:29.592: INFO: Trying to get logs from node iruya-node pod client-containers-379453e7-319f-4dba-91fe-5ac5001fd71f container test-container: 
STEP: delete the pod
Jan  9 14:46:29.672: INFO: Waiting for pod client-containers-379453e7-319f-4dba-91fe-5ac5001fd71f to disappear
Jan  9 14:46:29.682: INFO: Pod client-containers-379453e7-319f-4dba-91fe-5ac5001fd71f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:46:29.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3789" for this suite.
Jan  9 14:46:35.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:46:35.912: INFO: namespace containers-3789 deletion completed in 6.177247512s

• [SLOW TEST:14.593 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:46:35.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  9 14:46:36.116: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b01ed74-eb54-4762-b86e-13b6dca4085b" in namespace "projected-4959" to be "success or failure"
Jan  9 14:46:36.140: INFO: Pod "downwardapi-volume-4b01ed74-eb54-4762-b86e-13b6dca4085b": Phase="Pending", Reason="", readiness=false.
Elapsed: 23.420583ms
Jan  9 14:46:38.151: INFO: Pod "downwardapi-volume-4b01ed74-eb54-4762-b86e-13b6dca4085b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034876287s
Jan  9 14:46:40.160: INFO: Pod "downwardapi-volume-4b01ed74-eb54-4762-b86e-13b6dca4085b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043707998s
Jan  9 14:46:42.166: INFO: Pod "downwardapi-volume-4b01ed74-eb54-4762-b86e-13b6dca4085b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049499884s
Jan  9 14:46:44.180: INFO: Pod "downwardapi-volume-4b01ed74-eb54-4762-b86e-13b6dca4085b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063763727s
Jan  9 14:46:46.191: INFO: Pod "downwardapi-volume-4b01ed74-eb54-4762-b86e-13b6dca4085b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074715564s
STEP: Saw pod success
Jan  9 14:46:46.191: INFO: Pod "downwardapi-volume-4b01ed74-eb54-4762-b86e-13b6dca4085b" satisfied condition "success or failure"
Jan  9 14:46:46.197: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4b01ed74-eb54-4762-b86e-13b6dca4085b container client-container: 
STEP: delete the pod
Jan  9 14:46:46.298: INFO: Waiting for pod downwardapi-volume-4b01ed74-eb54-4762-b86e-13b6dca4085b to disappear
Jan  9 14:46:46.306: INFO: Pod downwardapi-volume-4b01ed74-eb54-4762-b86e-13b6dca4085b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:46:46.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4959" for this suite.
Jan  9 14:46:52.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:46:52.504: INFO: namespace projected-4959 deletion completed in 6.187783147s

• [SLOW TEST:16.592 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:46:52.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-4d35ac4c-5e1f-455c-a932-8cf014af31ca
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-4d35ac4c-5e1f-455c-a932-8cf014af31ca
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:48:24.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5362" for this suite.
Jan  9 14:48:46.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:48:46.599: INFO: namespace projected-5362 deletion completed in 22.203591209s

• [SLOW TEST:114.095 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:48:46.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:48:58.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3674" for this suite.
Jan  9 14:49:04.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:49:05.138: INFO: namespace kubelet-test-3674 deletion completed in 6.217081425s

• [SLOW TEST:18.539 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:49:05.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-4315/configmap-test-802a77fa-bc22-4791-8a57-1f3465459d75
STEP: Creating a pod to test consume configMaps
Jan  9 14:49:05.338: INFO: Waiting up to 5m0s for pod "pod-configmaps-af52db75-7bc2-4f73-ac60-8c6b5bd0796c" in namespace "configmap-4315" to be "success or failure"
Jan  9 14:49:05.372: INFO: Pod "pod-configmaps-af52db75-7bc2-4f73-ac60-8c6b5bd0796c": Phase="Pending", Reason="", readiness=false.
Elapsed: 33.939321ms
Jan  9 14:49:07.382: INFO: Pod "pod-configmaps-af52db75-7bc2-4f73-ac60-8c6b5bd0796c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043732559s
Jan  9 14:49:09.388: INFO: Pod "pod-configmaps-af52db75-7bc2-4f73-ac60-8c6b5bd0796c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049922392s
Jan  9 14:49:11.460: INFO: Pod "pod-configmaps-af52db75-7bc2-4f73-ac60-8c6b5bd0796c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121135816s
Jan  9 14:49:13.472: INFO: Pod "pod-configmaps-af52db75-7bc2-4f73-ac60-8c6b5bd0796c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.133938553s
Jan  9 14:49:15.492: INFO: Pod "pod-configmaps-af52db75-7bc2-4f73-ac60-8c6b5bd0796c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.153402955s
STEP: Saw pod success
Jan  9 14:49:15.492: INFO: Pod "pod-configmaps-af52db75-7bc2-4f73-ac60-8c6b5bd0796c" satisfied condition "success or failure"
Jan  9 14:49:15.497: INFO: Trying to get logs from node iruya-node pod pod-configmaps-af52db75-7bc2-4f73-ac60-8c6b5bd0796c container env-test: 
STEP: delete the pod
Jan  9 14:49:15.605: INFO: Waiting for pod pod-configmaps-af52db75-7bc2-4f73-ac60-8c6b5bd0796c to disappear
Jan  9 14:49:15.631: INFO: Pod pod-configmaps-af52db75-7bc2-4f73-ac60-8c6b5bd0796c no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:49:15.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4315" for this suite.
Jan  9 14:49:21.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:49:21.892: INFO: namespace configmap-4315 deletion completed in 6.249801107s

• [SLOW TEST:16.753 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:49:21.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-4122/configmap-test-df946a5f-e832-4b53-b3de-5a26a3cae200
STEP: Creating a pod to test consume configMaps
Jan  9 14:49:22.116: INFO: Waiting up to 5m0s for pod "pod-configmaps-324c8cc5-59b4-4615-a498-c36e10fcca73" in namespace "configmap-4122" to be "success or failure"
Jan  9 14:49:22.123: INFO: Pod "pod-configmaps-324c8cc5-59b4-4615-a498-c36e10fcca73": Phase="Pending", Reason="", readiness=false. Elapsed: 6.63524ms
Jan  9 14:49:24.132: INFO: Pod "pod-configmaps-324c8cc5-59b4-4615-a498-c36e10fcca73": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.015819153s
Jan  9 14:49:26.139: INFO: Pod "pod-configmaps-324c8cc5-59b4-4615-a498-c36e10fcca73": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022867536s
Jan  9 14:49:28.150: INFO: Pod "pod-configmaps-324c8cc5-59b4-4615-a498-c36e10fcca73": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034035162s
Jan  9 14:49:30.159: INFO: Pod "pod-configmaps-324c8cc5-59b4-4615-a498-c36e10fcca73": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043280917s
Jan  9 14:49:32.184: INFO: Pod "pod-configmaps-324c8cc5-59b4-4615-a498-c36e10fcca73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068198445s
STEP: Saw pod success
Jan  9 14:49:32.184: INFO: Pod "pod-configmaps-324c8cc5-59b4-4615-a498-c36e10fcca73" satisfied condition "success or failure"
Jan  9 14:49:32.197: INFO: Trying to get logs from node iruya-node pod pod-configmaps-324c8cc5-59b4-4615-a498-c36e10fcca73 container env-test: 
STEP: delete the pod
Jan  9 14:49:32.499: INFO: Waiting for pod pod-configmaps-324c8cc5-59b4-4615-a498-c36e10fcca73 to disappear
Jan  9 14:49:32.502: INFO: Pod pod-configmaps-324c8cc5-59b4-4615-a498-c36e10fcca73 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:49:32.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4122" for this suite.
Jan  9 14:49:38.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:49:38.679: INFO: namespace configmap-4122 deletion completed in 6.172606491s

• [SLOW TEST:16.787 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:49:38.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:49:44.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8252" for this suite.
Jan  9 14:49:50.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:49:50.534: INFO: namespace watch-8252 deletion completed in 6.292677022s

• [SLOW TEST:11.854 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:49:50.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-1cba59da-eb2c-4855-a621-5aaddeced86f
STEP: Creating a pod to test consume secrets
Jan  9 14:49:50.688: INFO: Waiting up to 5m0s for pod "pod-secrets-d00443c5-7f4d-442e-b3e6-72231f68704d" in namespace "secrets-5841" to be "success or failure"
Jan  9 14:49:50.696: INFO: Pod "pod-secrets-d00443c5-7f4d-442e-b3e6-72231f68704d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044635ms
Jan  9 14:49:52.707: INFO: Pod "pod-secrets-d00443c5-7f4d-442e-b3e6-72231f68704d": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.019096599s
Jan  9 14:49:54.721: INFO: Pod "pod-secrets-d00443c5-7f4d-442e-b3e6-72231f68704d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032952208s
Jan  9 14:49:56.731: INFO: Pod "pod-secrets-d00443c5-7f4d-442e-b3e6-72231f68704d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042359703s
Jan  9 14:49:58.737: INFO: Pod "pod-secrets-d00443c5-7f4d-442e-b3e6-72231f68704d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049052945s
Jan  9 14:50:00.743: INFO: Pod "pod-secrets-d00443c5-7f4d-442e-b3e6-72231f68704d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.055015685s
STEP: Saw pod success
Jan  9 14:50:00.743: INFO: Pod "pod-secrets-d00443c5-7f4d-442e-b3e6-72231f68704d" satisfied condition "success or failure"
Jan  9 14:50:00.746: INFO: Trying to get logs from node iruya-node pod pod-secrets-d00443c5-7f4d-442e-b3e6-72231f68704d container secret-volume-test: 
STEP: delete the pod
Jan  9 14:50:00.809: INFO: Waiting for pod pod-secrets-d00443c5-7f4d-442e-b3e6-72231f68704d to disappear
Jan  9 14:50:00.817: INFO: Pod pod-secrets-d00443c5-7f4d-442e-b3e6-72231f68704d no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:50:00.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5841" for this suite.
Jan  9 14:50:06.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:50:07.026: INFO: namespace secrets-5841 deletion completed in 6.203460015s

• [SLOW TEST:16.492 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:50:07.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  9 14:50:07.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:50:17.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2456" for this suite.
Jan  9 14:51:09.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:51:09.890: INFO: namespace pods-2456 deletion completed in 52.212761411s

• [SLOW TEST:62.864 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:51:09.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  9 14:51:10.018: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/:
alternatives.log
alternatives.l... (200; 12.406203ms)
Jan  9 14:51:10.023: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.930667ms)
Jan  9 14:51:10.028: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.133156ms)
Jan  9 14:51:10.035: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.210453ms)
Jan  9 14:51:10.041: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.496318ms)
Jan  9 14:51:10.048: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.402369ms)
Jan  9 14:51:10.055: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.617198ms)
Jan  9 14:51:10.063: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.773977ms)
Jan  9 14:51:10.078: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.560936ms)
Jan  9 14:51:10.089: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.547733ms)
Jan  9 14:51:10.096: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.160049ms)
Jan  9 14:51:10.103: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.021558ms)
Jan  9 14:51:10.147: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 44.467423ms)
Jan  9 14:51:10.153: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.467263ms)
Jan  9 14:51:10.159: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.646135ms)
Jan  9 14:51:10.165: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.251701ms)
Jan  9 14:51:10.170: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.968128ms)
Jan  9 14:51:10.174: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.945927ms)
Jan  9 14:51:10.177: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.355523ms)
Jan  9 14:51:10.182: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.174656ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:51:10.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1397" for this suite.
Jan  9 14:51:16.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:51:16.449: INFO: namespace proxy-1397 deletion completed in 6.263454309s

• [SLOW TEST:6.559 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
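Editor's note: the proxy spec above issues 20 GETs against the node's `proxy/logs/` subresource and records the latency of each. A minimal sketch (not from the suite) of how that API path is formed; the node name `iruya-node` comes from the log, everything else is illustrative:

```python
def node_log_proxy_path(node_name, port=None):
    """Build the apiserver path that proxies a GET through to the kubelet's
    /logs/ endpoint. With no port, the apiserver uses the node's default
    kubelet port; with an explicit port, it is embedded in the node name."""
    node = node_name if port is None else f"{node_name}:{port}"
    return f"/api/v1/nodes/{node}/proxy/logs/"

# Default-port form, as in the spec above:
print(node_log_proxy_path("iruya-node"))         # /api/v1/nodes/iruya-node/proxy/logs/
# Explicit kubelet port form, as in the later proxy spec in this run:
print(node_log_proxy_path("iruya-node", 10250))  # /api/v1/nodes/iruya-node:10250/proxy/logs/
```

The `alternatives.log` fragments in the log are the truncated HTML directory listing returned by the kubelet for `/logs/`.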
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:51:16.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  9 14:51:16.748: INFO: Waiting up to 5m0s for pod "pod-7c4a3c95-5303-4754-a278-acda2e8bbfee" in namespace "emptydir-8577" to be "success or failure"
Jan  9 14:51:16.761: INFO: Pod "pod-7c4a3c95-5303-4754-a278-acda2e8bbfee": Phase="Pending", Reason="", readiness=false. Elapsed: 12.747433ms
Jan  9 14:51:18.775: INFO: Pod "pod-7c4a3c95-5303-4754-a278-acda2e8bbfee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027184481s
Jan  9 14:51:20.791: INFO: Pod "pod-7c4a3c95-5303-4754-a278-acda2e8bbfee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04306194s
Jan  9 14:51:22.797: INFO: Pod "pod-7c4a3c95-5303-4754-a278-acda2e8bbfee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049221076s
Jan  9 14:51:24.858: INFO: Pod "pod-7c4a3c95-5303-4754-a278-acda2e8bbfee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.109891201s
STEP: Saw pod success
Jan  9 14:51:24.858: INFO: Pod "pod-7c4a3c95-5303-4754-a278-acda2e8bbfee" satisfied condition "success or failure"
Jan  9 14:51:24.874: INFO: Trying to get logs from node iruya-node pod pod-7c4a3c95-5303-4754-a278-acda2e8bbfee container test-container: 
STEP: delete the pod
Jan  9 14:51:25.007: INFO: Waiting for pod pod-7c4a3c95-5303-4754-a278-acda2e8bbfee to disappear
Jan  9 14:51:25.017: INFO: Pod pod-7c4a3c95-5303-4754-a278-acda2e8bbfee no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:51:25.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8577" for this suite.
Jan  9 14:51:31.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:51:31.176: INFO: namespace emptydir-8577 deletion completed in 6.150700586s

• [SLOW TEST:14.726 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
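Editor's note: the EmptyDir spec above boils down to a pod with a tmpfs-backed `emptyDir` volume, run as root, whose mount permissions should be 0777. A hedged sketch of that shape as a manifest dict — container name, image, and command are hypothetical stand-ins (the real spec lives in `test/e2e/common/empty_dir.go`):

```python
# Illustrative pod manifest: "medium: Memory" is what makes an emptyDir
# tmpfs-backed; the e2e image checks the mount's permission bits (0777 here).
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "emptydir-root-0777-tmpfs"},  # hypothetical name
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "test-container",
            "image": "busybox",
            # Stand-in for the suite's mount-test image: show the mount's mode.
            "command": ["sh", "-c", "ls -ld /test-volume"],
            "volumeMounts": [{"name": "test-volume", "mountPath": "/test-volume"}],
        }],
        "volumes": [{
            "name": "test-volume",
            "emptyDir": {"medium": "Memory"},  # tmpfs-backed emptyDir
        }],
    },
}
```

The pod runs to `Succeeded` and the suite reads its logs, which is why the log shows the usual "success or failure" wait followed by "delete the pod".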
SS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:51:31.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  9 14:51:40.449: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:51:40.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8924" for this suite.
Jan  9 14:51:46.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:51:46.682: INFO: namespace container-runtime-8924 deletion completed in 6.18922253s

• [SLOW TEST:15.506 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
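Editor's note: the termination-message spec above verifies that a container running as a non-root user, with a non-default `terminationMessagePath`, still gets its message (`DONE`, per the `Expected: &{DONE}` line) surfaced in the container status. A hedged sketch of such a container spec — the path, UID, and names are illustrative, not the suite's exact values:

```python
# Illustrative container spec: write the termination message to a custom path
# while running as a non-root user.
container = {
    "name": "termination-message-container",   # hypothetical name
    "image": "busybox",
    # Write exactly "DONE" (no newline) to the custom termination-message file.
    "command": ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"],
    "terminationMessagePath": "/dev/termination-custom-log",  # non-default path
    "securityContext": {"runAsUser": 1000},                   # non-root user
}
```

The kubelet reads the file at `terminationMessagePath` after the container exits and copies it into `status.containerStatuses[].state.terminated.message`, which is what the spec asserts on.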
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:51:46.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  9 14:51:55.934: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:51:55.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3377" for this suite.
Jan  9 14:52:02.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:52:02.147: INFO: namespace container-runtime-3377 deletion completed in 6.163597985s

• [SLOW TEST:15.464 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
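Editor's note: this variant exercises `terminationMessagePolicy: FallbackToLogsOnError`. The fallback to container logs only happens when the container *fails*; since the pod succeeds here, the message stays empty — which is the `Expected: &{}` line in the log. A hedged sketch (names illustrative):

```python
# Illustrative container spec: FallbackToLogsOnError only populates the
# termination message from logs on failure. A succeeding container that
# writes nothing to the termination-message file yields an empty message.
container = {
    "name": "termination-message-container",  # hypothetical name
    "image": "busybox",
    "command": ["sh", "-c", "exit 0"],        # succeed without writing a message
    "terminationMessagePolicy": "FallbackToLogsOnError",
}

def expected_message(policy, exit_code, file_contents, logs):
    """Sketch of the policy's observable behavior, under the assumptions above."""
    if file_contents:
        return file_contents
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return logs
    return ""

print(expected_message("FallbackToLogsOnError", 0, "", "some logs"))  # prints ""
```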
SSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:52:02.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:52:02.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6480" for this suite.
Jan  9 14:52:08.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:52:08.497: INFO: namespace services-6480 deletion completed in 6.175490737s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.350 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
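Editor's note: the "secure master service" spec leaves almost no trace in the log because it makes no pods — it inspects the built-in `kubernetes` Service in the `default` namespace and checks it exposes a secure (`https`, 443) port. A hedged sketch of that check; the `targetPort` value is an assumption (it varies by cluster):

```python
# Illustrative shape of the built-in apiserver Service; only the https/443
# port check reflects what the spec verifies.
svc = {
    "metadata": {"name": "kubernetes", "namespace": "default"},
    "spec": {
        "ports": [
            {"name": "https", "port": 443, "protocol": "TCP", "targetPort": 6443},  # targetPort assumed
        ],
    },
}

has_secure_port = any(
    p.get("name") == "https" and p.get("port") == 443
    for p in svc["spec"]["ports"]
)
print(has_secure_port)  # True
```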
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:52:08.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  9 14:52:08.622: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 32.308847ms)
Jan  9 14:52:08.627: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.096684ms)
Jan  9 14:52:08.632: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.23258ms)
Jan  9 14:52:08.636: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.628593ms)
Jan  9 14:52:08.640: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.749101ms)
Jan  9 14:52:08.644: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.185682ms)
Jan  9 14:52:08.649: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.981673ms)
Jan  9 14:52:08.655: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.536041ms)
Jan  9 14:52:08.663: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.712182ms)
Jan  9 14:52:08.672: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.023838ms)
Jan  9 14:52:08.677: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.581612ms)
Jan  9 14:52:08.683: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.316343ms)
Jan  9 14:52:08.689: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.104113ms)
Jan  9 14:52:08.695: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.416382ms)
Jan  9 14:52:08.700: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.257317ms)
Jan  9 14:52:08.707: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.797024ms)
Jan  9 14:52:08.713: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.835603ms)
Jan  9 14:52:08.720: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.924747ms)
Jan  9 14:52:08.725: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.989955ms)
Jan  9 14:52:08.730: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.742836ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:52:08.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4753" for this suite.
Jan  9 14:52:14.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:52:14.900: INFO: namespace proxy-4753 deletion completed in 6.16523216s

• [SLOW TEST:6.402 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:52:14.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0109 14:52:18.150751       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  9 14:52:18.150: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:52:18.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2947" for this suite.
Jan  9 14:52:24.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:52:24.307: INFO: namespace gc-2947 deletion completed in 6.149441303s

• [SLOW TEST:9.407 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
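Editor's note: the garbage-collector spec deletes a Deployment "when not orphaning", i.e. with a cascading deletion policy, then polls until the owned ReplicaSet and Pods are gone — the `expected 0 rs, got 1 rs` lines above are that poll loop observing the cascade in progress. A hedged sketch of the delete options involved; `Background` is shown as one cascading choice, the suite may use a different one:

```python
# Illustrative DeleteOptions: propagationPolicy controls whether dependents
# (the Deployment's ReplicaSet, and its Pods) are garbage-collected.
cascading_delete = {
    "apiVersion": "v1",
    "kind": "DeleteOptions",
    "propagationPolicy": "Background",  # dependents deleted after the owner
}

orphaning_delete = {
    "apiVersion": "v1",
    "kind": "DeleteOptions",
    "propagationPolicy": "Orphan",      # dependents kept; ownerReferences removed
}
```

With `Orphan`, the ReplicaSet would survive the Deployment's deletion, which is exactly the outcome this spec asserts does *not* happen.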
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:52:24.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  9 14:52:24.464: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d374586c-c631-486c-a475-99411e000ee2" in namespace "downward-api-9691" to be "success or failure"
Jan  9 14:52:24.478: INFO: Pod "downwardapi-volume-d374586c-c631-486c-a475-99411e000ee2": Phase="Pending", Reason="", readiness=false. Elapsed: 13.117423ms
Jan  9 14:52:26.487: INFO: Pod "downwardapi-volume-d374586c-c631-486c-a475-99411e000ee2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022449408s
Jan  9 14:52:28.502: INFO: Pod "downwardapi-volume-d374586c-c631-486c-a475-99411e000ee2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037364237s
Jan  9 14:52:30.524: INFO: Pod "downwardapi-volume-d374586c-c631-486c-a475-99411e000ee2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059294309s
Jan  9 14:52:32.538: INFO: Pod "downwardapi-volume-d374586c-c631-486c-a475-99411e000ee2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073616204s
STEP: Saw pod success
Jan  9 14:52:32.538: INFO: Pod "downwardapi-volume-d374586c-c631-486c-a475-99411e000ee2" satisfied condition "success or failure"
Jan  9 14:52:32.542: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d374586c-c631-486c-a475-99411e000ee2 container client-container: 
STEP: delete the pod
Jan  9 14:52:32.661: INFO: Waiting for pod downwardapi-volume-d374586c-c631-486c-a475-99411e000ee2 to disappear
Jan  9 14:52:32.666: INFO: Pod downwardapi-volume-d374586c-c631-486c-a475-99411e000ee2 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:52:32.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9691" for this suite.
Jan  9 14:52:38.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:52:38.915: INFO: namespace downward-api-9691 deletion completed in 6.24152514s

• [SLOW TEST:14.606 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
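Editor's note: the Downward API spec above sets `defaultMode` on a downwardAPI volume and has the test container report the permission bits of the projected files. A hedged sketch of such a volume — the mode value and item paths are illustrative, not the suite's exact ones:

```python
# Illustrative downwardAPI volume: defaultMode applies to every projected
# file that does not set its own per-item mode.
volume = {
    "name": "podinfo",  # hypothetical name
    "downwardAPI": {
        "defaultMode": 0o400,  # e.g. read-only for the owner
        "items": [
            {"path": "podname", "fieldRef": {"fieldPath": "metadata.name"}},
        ],
    },
}
print(oct(volume["downwardAPI"]["defaultMode"]))  # 0o400
```

Note the API expects the mode as a decimal integer in JSON (0o400 is 256); client libraries usually let you write it in octal as above.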
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:52:38.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-777s
STEP: Creating a pod to test atomic-volume-subpath
Jan  9 14:52:39.018: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-777s" in namespace "subpath-2845" to be "success or failure"
Jan  9 14:52:39.022: INFO: Pod "pod-subpath-test-projected-777s": Phase="Pending", Reason="", readiness=false. Elapsed: 3.611712ms
Jan  9 14:52:41.030: INFO: Pod "pod-subpath-test-projected-777s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011661987s
Jan  9 14:52:43.035: INFO: Pod "pod-subpath-test-projected-777s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017444058s
Jan  9 14:52:45.043: INFO: Pod "pod-subpath-test-projected-777s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024796499s
Jan  9 14:52:47.058: INFO: Pod "pod-subpath-test-projected-777s": Phase="Running", Reason="", readiness=true. Elapsed: 8.039644736s
Jan  9 14:52:49.069: INFO: Pod "pod-subpath-test-projected-777s": Phase="Running", Reason="", readiness=true. Elapsed: 10.050624615s
Jan  9 14:52:51.077: INFO: Pod "pod-subpath-test-projected-777s": Phase="Running", Reason="", readiness=true. Elapsed: 12.059023944s
Jan  9 14:52:53.085: INFO: Pod "pod-subpath-test-projected-777s": Phase="Running", Reason="", readiness=true. Elapsed: 14.067371279s
Jan  9 14:52:55.097: INFO: Pod "pod-subpath-test-projected-777s": Phase="Running", Reason="", readiness=true. Elapsed: 16.078751823s
Jan  9 14:52:57.106: INFO: Pod "pod-subpath-test-projected-777s": Phase="Running", Reason="", readiness=true. Elapsed: 18.087608478s
Jan  9 14:52:59.117: INFO: Pod "pod-subpath-test-projected-777s": Phase="Running", Reason="", readiness=true. Elapsed: 20.09882008s
Jan  9 14:53:01.125: INFO: Pod "pod-subpath-test-projected-777s": Phase="Running", Reason="", readiness=true. Elapsed: 22.107284745s
Jan  9 14:53:03.137: INFO: Pod "pod-subpath-test-projected-777s": Phase="Running", Reason="", readiness=true. Elapsed: 24.119041129s
Jan  9 14:53:05.144: INFO: Pod "pod-subpath-test-projected-777s": Phase="Running", Reason="", readiness=true. Elapsed: 26.126301354s
Jan  9 14:53:07.155: INFO: Pod "pod-subpath-test-projected-777s": Phase="Running", Reason="", readiness=true. Elapsed: 28.136755358s
Jan  9 14:53:09.163: INFO: Pod "pod-subpath-test-projected-777s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.145008534s
STEP: Saw pod success
Jan  9 14:53:09.163: INFO: Pod "pod-subpath-test-projected-777s" satisfied condition "success or failure"
Jan  9 14:53:09.168: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-777s container test-container-subpath-projected-777s: 
STEP: delete the pod
Jan  9 14:53:09.223: INFO: Waiting for pod pod-subpath-test-projected-777s to disappear
Jan  9 14:53:09.227: INFO: Pod pod-subpath-test-projected-777s no longer exists
STEP: Deleting pod pod-subpath-test-projected-777s
Jan  9 14:53:09.227: INFO: Deleting pod "pod-subpath-test-projected-777s" in namespace "subpath-2845"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:53:09.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2845" for this suite.
Jan  9 14:53:15.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:53:15.544: INFO: namespace subpath-2845 deletion completed in 6.311080425s

• [SLOW TEST:36.629 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
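Editor's note: the Subpath "atomic writer" spec mounts a projected volume with `subPath` and keeps the pod `Running` for ~30s (the long poll above) while the volume contents are atomically updated underneath it. A hedged sketch of the mount shape — the configMap source and paths are hypothetical:

```python
# Illustrative container + volume pair: subPath mounts a single path from
# inside the projected volume rather than the whole volume root.
container = {
    "name": "test-container-subpath",  # hypothetical name
    "image": "busybox",
    "volumeMounts": [{
        "name": "projected-vol",
        "mountPath": "/test",
        "subPath": "path/to/file",     # the sub-path under the volume root
    }],
}

volume = {
    "name": "projected-vol",
    "projected": {
        "sources": [
            {"configMap": {"name": "my-configmap"}},  # hypothetical source
        ],
    },
}
```

Projected (configMap/secret/downwardAPI) volumes are updated via an atomic symlink swap; the spec checks that a `subPath` mount still sees consistent content across those swaps.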
SSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:53:15.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-b8a3b2c2-d9e6-4fd6-b935-c37bc94738d6 in namespace container-probe-3012
Jan  9 14:53:25.754: INFO: Started pod busybox-b8a3b2c2-d9e6-4fd6-b935-c37bc94738d6 in namespace container-probe-3012
STEP: checking the pod's current state and verifying that restartCount is present
Jan  9 14:53:25.760: INFO: Initial restart count of pod busybox-b8a3b2c2-d9e6-4fd6-b935-c37bc94738d6 is 0
Jan  9 14:54:22.330: INFO: Restart count of pod container-probe-3012/busybox-b8a3b2c2-d9e6-4fd6-b935-c37bc94738d6 is now 1 (56.570119352s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:54:22.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3012" for this suite.
Jan  9 14:54:28.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:54:28.563: INFO: namespace container-probe-3012 deletion completed in 6.179186226s

• [SLOW TEST:73.019 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
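Editor's note: the probe spec above is the classic `cat /tmp/health` liveness check: the container creates the file, later removes it, the probe starts failing, and the kubelet restarts the container — hence `Restart count ... is now 1` about a minute in. A hedged sketch; the timings and thresholds are illustrative, not the suite's exact values:

```python
# Illustrative container spec: the command removes the health file partway
# through, so the exec probe flips from passing to failing and triggers a
# restart under restartPolicy OnFailure/Always.
container = {
    "name": "busybox",
    "image": "busybox",
    "command": [
        "sh", "-c",
        "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600",
    ],
    "livenessProbe": {
        "exec": {"command": ["cat", "/tmp/health"]},  # succeeds iff the file exists
        "initialDelaySeconds": 15,  # illustrative timings
        "periodSeconds": 5,
        "failureThreshold": 1,
    },
}
```

The spec itself only watches `status.containerStatuses[].restartCount` go from 0 to 1, then deletes the pod.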
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:54:28.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan  9 14:54:28.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8657'
Jan  9 14:54:30.913: INFO: stderr: ""
Jan  9 14:54:30.913: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  9 14:54:30.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8657'
Jan  9 14:54:31.194: INFO: stderr: ""
Jan  9 14:54:31.194: INFO: stdout: "update-demo-nautilus-d47kc update-demo-nautilus-vrg79 "
Jan  9 14:54:31.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d47kc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8657'
Jan  9 14:54:31.328: INFO: stderr: ""
Jan  9 14:54:31.328: INFO: stdout: ""
Jan  9 14:54:31.328: INFO: update-demo-nautilus-d47kc is created but not running
Jan  9 14:54:36.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8657'
Jan  9 14:54:37.650: INFO: stderr: ""
Jan  9 14:54:37.650: INFO: stdout: "update-demo-nautilus-d47kc update-demo-nautilus-vrg79 "
Jan  9 14:54:37.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d47kc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8657'
Jan  9 14:54:37.975: INFO: stderr: ""
Jan  9 14:54:37.975: INFO: stdout: ""
Jan  9 14:54:37.975: INFO: update-demo-nautilus-d47kc is created but not running
Jan  9 14:54:42.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8657'
Jan  9 14:54:43.128: INFO: stderr: ""
Jan  9 14:54:43.129: INFO: stdout: "update-demo-nautilus-d47kc update-demo-nautilus-vrg79 "
Jan  9 14:54:43.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d47kc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8657'
Jan  9 14:54:43.280: INFO: stderr: ""
Jan  9 14:54:43.280: INFO: stdout: "true"
Jan  9 14:54:43.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d47kc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8657'
Jan  9 14:54:43.427: INFO: stderr: ""
Jan  9 14:54:43.427: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  9 14:54:43.427: INFO: validating pod update-demo-nautilus-d47kc
Jan  9 14:54:43.451: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  9 14:54:43.451: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan  9 14:54:43.451: INFO: update-demo-nautilus-d47kc is verified up and running
Jan  9 14:54:43.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vrg79 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8657'
Jan  9 14:54:43.549: INFO: stderr: ""
Jan  9 14:54:43.549: INFO: stdout: "true"
Jan  9 14:54:43.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vrg79 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8657'
Jan  9 14:54:43.641: INFO: stderr: ""
Jan  9 14:54:43.641: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  9 14:54:43.641: INFO: validating pod update-demo-nautilus-vrg79
Jan  9 14:54:43.656: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  9 14:54:43.656: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan  9 14:54:43.656: INFO: update-demo-nautilus-vrg79 is verified up and running
STEP: using delete to clean up resources
Jan  9 14:54:43.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8657'
Jan  9 14:54:43.783: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  9 14:54:43.783: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  9 14:54:43.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8657'
Jan  9 14:54:44.202: INFO: stderr: "No resources found.\n"
Jan  9 14:54:44.202: INFO: stdout: ""
Jan  9 14:54:44.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8657 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  9 14:54:44.332: INFO: stderr: ""
Jan  9 14:54:44.332: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:54:44.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8657" for this suite.
Jan  9 14:55:06.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:55:06.522: INFO: namespace kubectl-8657 deletion completed in 22.182115555s

• [SLOW TEST:37.959 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
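For reference, the update-demo replication controller exercised above looks roughly like the sketch below. The label selector (`name=update-demo`), container name, image, and replica count are taken from the log output; all other fields are assumed.

```yaml
# Sketch of the update-demo RC implied by the log above. Labels, container
# name, image, and replicas come from the log; everything else is assumed.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2                      # the log shows two pods: -d47kc and -vrg79
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
```

The readiness polling visible in the log works by running `kubectl get pods -o template` with a go-template that prints `true` only when a container named `update-demo` has a `state.running` entry; the test retries every few seconds until stdout is `"true"`.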
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:55:06.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-2dd15cdf-00ab-493f-85b8-9fec4211f16c
STEP: Creating a pod to test consume secrets
Jan  9 14:55:06.836: INFO: Waiting up to 5m0s for pod "pod-secrets-9b545168-0685-4922-aca7-3b5da2780ada" in namespace "secrets-2514" to be "success or failure"
Jan  9 14:55:06.864: INFO: Pod "pod-secrets-9b545168-0685-4922-aca7-3b5da2780ada": Phase="Pending", Reason="", readiness=false. Elapsed: 27.701941ms
Jan  9 14:55:08.878: INFO: Pod "pod-secrets-9b545168-0685-4922-aca7-3b5da2780ada": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041490742s
Jan  9 14:55:10.895: INFO: Pod "pod-secrets-9b545168-0685-4922-aca7-3b5da2780ada": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057999312s
Jan  9 14:55:12.908: INFO: Pod "pod-secrets-9b545168-0685-4922-aca7-3b5da2780ada": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071430961s
Jan  9 14:55:14.924: INFO: Pod "pod-secrets-9b545168-0685-4922-aca7-3b5da2780ada": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.087482139s
STEP: Saw pod success
Jan  9 14:55:14.924: INFO: Pod "pod-secrets-9b545168-0685-4922-aca7-3b5da2780ada" satisfied condition "success or failure"
Jan  9 14:55:14.933: INFO: Trying to get logs from node iruya-node pod pod-secrets-9b545168-0685-4922-aca7-3b5da2780ada container secret-volume-test: 
STEP: delete the pod
Jan  9 14:55:15.023: INFO: Waiting for pod pod-secrets-9b545168-0685-4922-aca7-3b5da2780ada to disappear
Jan  9 14:55:15.030: INFO: Pod pod-secrets-9b545168-0685-4922-aca7-3b5da2780ada no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:55:15.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2514" for this suite.
Jan  9 14:55:21.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:55:21.180: INFO: namespace secrets-2514 deletion completed in 6.14410476s
STEP: Destroying namespace "secret-namespace-399" for this suite.
Jan  9 14:55:27.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:55:27.336: INFO: namespace secret-namespace-399 deletion completed in 6.156421615s

• [SLOW TEST:20.813 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
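The scenario above creates two secrets with the same name in different namespaces and verifies the pod only ever sees the secret from its own namespace. A minimal sketch of the setup (secret name shortened; keys, values, and the image are assumed):

```yaml
# Sketch, not the exact e2e fixture: the same Secret name also exists in
# the second namespace (secret-namespace-399); the pod's volume resolves
# secretName strictly within the pod's own namespace.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
  namespace: secrets-2514
data:
  data-1: dmFsdWUtMQ==            # base64 "value-1" (assumed key/value)
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
  namespace: secrets-2514
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test     # resolved in the pod's namespace only
  containers:
  - name: secret-volume-test
    image: busybox                # assumed; the e2e test uses a mounttest image
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
```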
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:55:27.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-5b6fe094-86c2-4251-a2ce-d66f865402c6
STEP: Creating secret with name s-test-opt-upd-01756c0e-43cf-488e-82de-f3446ed87134
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-5b6fe094-86c2-4251-a2ce-d66f865402c6
STEP: Updating secret s-test-opt-upd-01756c0e-43cf-488e-82de-f3446ed87134
STEP: Creating secret with name s-test-opt-create-ceb63f9e-61fb-4045-b3a5-087fb9a2bd84
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:55:41.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5656" for this suite.
Jan  9 14:56:06.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:56:06.146: INFO: namespace projected-5656 deletion completed in 24.163136484s

• [SLOW TEST:38.809 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
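The "optional updates" test above relies on projected secret sources marked `optional: true`: deleting one source secret, updating another, and creating a third are all reflected in the mounted volume without breaking the pod. A sketch under assumed names, paths, and image:

```yaml
# Sketch of a projected volume with optional secret sources, as exercised
# above: the -del secret can be deleted and the -create secret added later
# because optional: true. Mount path and image are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  volumes:
  - name: projected-volume
    projected:
      sources:
      - secret:
          name: s-test-opt-del
          optional: true          # pod stays healthy if this secret is deleted
      - secret:
          name: s-test-opt-upd
          optional: true          # updates propagate into the mounted files
  containers:
  - name: projected-secret-volume-test
    image: busybox                # assumed
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
```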
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:56:06.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  9 14:56:06.213: INFO: Creating deployment "test-recreate-deployment"
Jan  9 14:56:06.222: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan  9 14:56:06.241: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan  9 14:56:08.265: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan  9 14:56:08.268: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714178566, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714178566, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714178566, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714178566, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  9 14:56:10.276: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714178566, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714178566, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714178566, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714178566, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  9 14:56:12.277: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714178566, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714178566, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714178566, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714178566, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  9 14:56:14.275: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan  9 14:56:14.288: INFO: Updating deployment test-recreate-deployment
Jan  9 14:56:14.288: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  9 14:56:14.619: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-7068,SelfLink:/apis/apps/v1/namespaces/deployment-7068/deployments/test-recreate-deployment,UID:e35f735a-b170-4a3e-a364-70e988748f07,ResourceVersion:19916245,Generation:2,CreationTimestamp:2020-01-09 14:56:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-09 14:56:14 +0000 UTC 2020-01-09 14:56:14 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-09 14:56:14 +0000 UTC 2020-01-09 14:56:06 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan  9 14:56:14.632: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-7068,SelfLink:/apis/apps/v1/namespaces/deployment-7068/replicasets/test-recreate-deployment-5c8c9cc69d,UID:12fc1659-a8c9-481b-9594-f6e9e12cf71d,ResourceVersion:19916243,Generation:1,CreationTimestamp:2020-01-09 14:56:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e35f735a-b170-4a3e-a364-70e988748f07 0xc002ee35e7 0xc002ee35e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  9 14:56:14.632: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan  9 14:56:14.633: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-7068,SelfLink:/apis/apps/v1/namespaces/deployment-7068/replicasets/test-recreate-deployment-6df85df6b9,UID:7fea3227-729b-44a3-877b-b8a06c17eca5,ResourceVersion:19916233,Generation:2,CreationTimestamp:2020-01-09 14:56:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e35f735a-b170-4a3e-a364-70e988748f07 0xc002ee36b7 0xc002ee36b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  9 14:56:14.641: INFO: Pod "test-recreate-deployment-5c8c9cc69d-psdw7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-psdw7,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-7068,SelfLink:/api/v1/namespaces/deployment-7068/pods/test-recreate-deployment-5c8c9cc69d-psdw7,UID:a7b7677c-73df-424c-84ed-cfe157d0ca37,ResourceVersion:19916241,Generation:0,CreationTimestamp:2020-01-09 14:56:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 12fc1659-a8c9-481b-9594-f6e9e12cf71d 0xc002ee3fa7 0xc002ee3fa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxfhh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxfhh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dxfhh true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003280020} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003280040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 14:56:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:56:14.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7068" for this suite.
Jan  9 14:56:20.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:56:20.883: INFO: namespace deployment-7068 deletion completed in 6.231747943s

• [SLOW TEST:14.737 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
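The Deployment dump above shows `Strategy{Type:Recreate}`: the old (redis) ReplicaSet is scaled to zero before any pod of the new (nginx) ReplicaSet is created, which is exactly what the test verifies. Reconstructed from the fields visible in the dump (labels, images, replicas); fields not shown in the log are assumed:

```yaml
# Sketch of test-recreate-deployment after the rollout to revision 2.
# strategy.type: Recreate guarantees no overlap between old and new pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate                # no RollingUpdate overlap of old and new pods
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine  # revision 2; revision 1 ran redis
```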
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:56:20.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0109 14:56:31.086175       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  9 14:56:31.086: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:56:31.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7820" for this suite.
Jan  9 14:56:37.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:56:37.247: INFO: namespace gc-7820 deletion completed in 6.15726735s

• [SLOW TEST:16.362 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
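"Not orphaning" in the garbage collector test above corresponds to a deletion `propagationPolicy` of `Background` (or `Foreground`); with `Orphan`, the RC's pods would survive its deletion. A sketch of the DeleteOptions body sent with the RC delete (field names are the stable API ones; the exact options used by the e2e framework are not shown in the log):

```yaml
# Background: the RC is deleted immediately and the garbage collector
# removes its pods afterwards; Foreground would block the RC's deletion
# until the pods are gone; Orphan would leave the pods running.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Background
```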
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:56:37.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Jan  9 14:56:37.423: INFO: Waiting up to 5m0s for pod "client-containers-0a799bb2-1c68-45d9-8dc7-4bfa51424bfe" in namespace "containers-7142" to be "success or failure"
Jan  9 14:56:37.549: INFO: Pod "client-containers-0a799bb2-1c68-45d9-8dc7-4bfa51424bfe": Phase="Pending", Reason="", readiness=false. Elapsed: 126.04618ms
Jan  9 14:56:39.558: INFO: Pod "client-containers-0a799bb2-1c68-45d9-8dc7-4bfa51424bfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13514645s
Jan  9 14:56:41.566: INFO: Pod "client-containers-0a799bb2-1c68-45d9-8dc7-4bfa51424bfe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143214712s
Jan  9 14:56:43.574: INFO: Pod "client-containers-0a799bb2-1c68-45d9-8dc7-4bfa51424bfe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15119784s
Jan  9 14:56:45.580: INFO: Pod "client-containers-0a799bb2-1c68-45d9-8dc7-4bfa51424bfe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.156825733s
STEP: Saw pod success
Jan  9 14:56:45.580: INFO: Pod "client-containers-0a799bb2-1c68-45d9-8dc7-4bfa51424bfe" satisfied condition "success or failure"
Jan  9 14:56:45.584: INFO: Trying to get logs from node iruya-node pod client-containers-0a799bb2-1c68-45d9-8dc7-4bfa51424bfe container test-container: 
STEP: delete the pod
Jan  9 14:56:45.657: INFO: Waiting for pod client-containers-0a799bb2-1c68-45d9-8dc7-4bfa51424bfe to disappear
Jan  9 14:56:45.691: INFO: Pod client-containers-0a799bb2-1c68-45d9-8dc7-4bfa51424bfe no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:56:45.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7142" for this suite.
Jan  9 14:56:51.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:56:52.335: INFO: namespace containers-7142 deletion completed in 6.63833717s

• [SLOW TEST:15.088 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
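The repeated Phase="Pending" lines in the test above come from the framework's pod-wait loop, which polls the pod's phase every couple of seconds until it reaches Succeeded or Failed, or the 5m0s budget runs out. A minimal Python sketch of that polling pattern (function and parameter names are illustrative, not the e2e framework's actual API):

```python
import time

def wait_for_pod_condition(get_phase, timeout_s=300.0, poll_s=2.0,
                           clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it reports a terminal phase or the timeout expires.

    Mirrors the log's 'Waiting up to 5m0s for pod ... to be "success or
    failure"' loop: each iteration checks the phase and the elapsed time.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed >= timeout_s:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        sleep(poll_s)
```

Injecting `clock` and `sleep` keeps the sketch testable without real waiting; the real framework uses wall-clock time the same way the elapsed figures in the log show.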
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:56:52.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  9 14:56:52.488: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:56:53.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5053" for this suite.
Jan  9 14:56:59.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:56:59.807: INFO: namespace custom-resource-definition-5053 deletion completed in 6.172947106s

• [SLOW TEST:7.471 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:56:59.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  9 14:56:59.947: INFO: Waiting up to 5m0s for pod "downwardapi-volume-42ce9c9e-0156-4037-a285-24dbed358b84" in namespace "projected-1796" to be "success or failure"
Jan  9 14:56:59.973: INFO: Pod "downwardapi-volume-42ce9c9e-0156-4037-a285-24dbed358b84": Phase="Pending", Reason="", readiness=false. Elapsed: 26.395014ms
Jan  9 14:57:01.988: INFO: Pod "downwardapi-volume-42ce9c9e-0156-4037-a285-24dbed358b84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041294725s
Jan  9 14:57:04.004: INFO: Pod "downwardapi-volume-42ce9c9e-0156-4037-a285-24dbed358b84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057292312s
Jan  9 14:57:06.009: INFO: Pod "downwardapi-volume-42ce9c9e-0156-4037-a285-24dbed358b84": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062440871s
Jan  9 14:57:08.020: INFO: Pod "downwardapi-volume-42ce9c9e-0156-4037-a285-24dbed358b84": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072980844s
Jan  9 14:57:10.026: INFO: Pod "downwardapi-volume-42ce9c9e-0156-4037-a285-24dbed358b84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079079239s
STEP: Saw pod success
Jan  9 14:57:10.026: INFO: Pod "downwardapi-volume-42ce9c9e-0156-4037-a285-24dbed358b84" satisfied condition "success or failure"
Jan  9 14:57:10.030: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-42ce9c9e-0156-4037-a285-24dbed358b84 container client-container: 
STEP: delete the pod
Jan  9 14:57:10.150: INFO: Waiting for pod downwardapi-volume-42ce9c9e-0156-4037-a285-24dbed358b84 to disappear
Jan  9 14:57:10.159: INFO: Pod downwardapi-volume-42ce9c9e-0156-4037-a285-24dbed358b84 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:57:10.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1796" for this suite.
Jan  9 14:57:16.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:57:16.362: INFO: namespace projected-1796 deletion completed in 6.196573636s

• [SLOW TEST:16.554 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:57:16.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-278903ab-e82a-4930-8037-67503553c428
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-278903ab-e82a-4930-8037-67503553c428
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:58:38.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8288" for this suite.
Jan  9 14:59:00.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:59:00.520: INFO: namespace configmap-8288 deletion completed in 22.196527314s

• [SLOW TEST:104.157 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
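The "waiting to observe update in volume" step above works because the kubelet resyncs ConfigMap-backed volumes periodically rather than instantly, so the test keeps re-reading the projected file until the updated value appears. A rough sketch of that observation loop (names are illustrative, not the test's actual helpers):

```python
import time

def wait_for_volume_update(read_file, expected, timeout_s=90.0, poll_s=1.0,
                           clock=time.monotonic, sleep=time.sleep):
    """Re-read a projected file until it reflects the updated ConfigMap value.

    Updates propagate on the kubelet's periodic sync, so they appear
    eventually rather than immediately; returns True once observed.
    """
    start = clock()
    while clock() - start < timeout_s:
        if read_file() == expected:
            return True
        sleep(poll_s)
    return False
```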
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:59:00.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan  9 14:59:00.652: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  9 14:59:00.667: INFO: Waiting for terminating namespaces to be deleted...
Jan  9 14:59:00.670: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan  9 14:59:00.689: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Jan  9 14:59:00.689: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  9 14:59:00.689: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan  9 14:59:00.689: INFO: 	Container weave ready: true, restart count 0
Jan  9 14:59:00.689: INFO: 	Container weave-npc ready: true, restart count 0
Jan  9 14:59:00.689: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan  9 14:59:00.710: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Jan  9 14:59:00.710: INFO: 	Container kube-controller-manager ready: true, restart count 18
Jan  9 14:59:00.710: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Jan  9 14:59:00.710: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  9 14:59:00.710: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Jan  9 14:59:00.710: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan  9 14:59:00.710: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Jan  9 14:59:00.710: INFO: 	Container kube-scheduler ready: true, restart count 12
Jan  9 14:59:00.710: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan  9 14:59:00.710: INFO: 	Container coredns ready: true, restart count 0
Jan  9 14:59:00.710: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Jan  9 14:59:00.710: INFO: 	Container etcd ready: true, restart count 0
Jan  9 14:59:00.710: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan  9 14:59:00.710: INFO: 	Container weave ready: true, restart count 0
Jan  9 14:59:00.710: INFO: 	Container weave-npc ready: true, restart count 0
Jan  9 14:59:00.710: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan  9 14:59:00.710: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e83fe952225f5c], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:59:01.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8287" for this suite.
Jan  9 14:59:07.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:59:07.934: INFO: namespace sched-pred-8287 deletion completed in 6.185658344s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.413 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
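The FailedScheduling event above ("0/2 nodes are available: 2 node(s) didn't match node selector.") is the scheduler's NodeSelector predicate rejecting every node. The matching rule is exact key/value containment; a minimal sketch (the data shapes here are illustrative, not the scheduler's actual types):

```python
def matches_node_selector(node_labels, node_selector):
    """A pod's nodeSelector matches a node only if every key/value
    pair appears verbatim among the node's labels."""
    return all(node_labels.get(key) == value
               for key, value in node_selector.items())

def feasible_nodes(nodes, node_selector):
    """Return the names of nodes the pod could be scheduled onto."""
    return [name for name, labels in nodes.items()
            if matches_node_selector(labels, node_selector)]
```

With a nonempty selector that no node carries, the feasible set is empty, which is exactly the situation the test provokes.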
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:59:07.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-dhjv
STEP: Creating a pod to test atomic-volume-subpath
Jan  9 14:59:08.110: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-dhjv" in namespace "subpath-2443" to be "success or failure"
Jan  9 14:59:08.117: INFO: Pod "pod-subpath-test-secret-dhjv": Phase="Pending", Reason="", readiness=false. Elapsed: 7.834331ms
Jan  9 14:59:10.130: INFO: Pod "pod-subpath-test-secret-dhjv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020685166s
Jan  9 14:59:12.137: INFO: Pod "pod-subpath-test-secret-dhjv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027737845s
Jan  9 14:59:14.153: INFO: Pod "pod-subpath-test-secret-dhjv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04299184s
Jan  9 14:59:16.162: INFO: Pod "pod-subpath-test-secret-dhjv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051957153s
Jan  9 14:59:18.170: INFO: Pod "pod-subpath-test-secret-dhjv": Phase="Running", Reason="", readiness=true. Elapsed: 10.0606457s
Jan  9 14:59:20.179: INFO: Pod "pod-subpath-test-secret-dhjv": Phase="Running", Reason="", readiness=true. Elapsed: 12.069219777s
Jan  9 14:59:22.193: INFO: Pod "pod-subpath-test-secret-dhjv": Phase="Running", Reason="", readiness=true. Elapsed: 14.083218265s
Jan  9 14:59:24.203: INFO: Pod "pod-subpath-test-secret-dhjv": Phase="Running", Reason="", readiness=true. Elapsed: 16.093843818s
Jan  9 14:59:26.211: INFO: Pod "pod-subpath-test-secret-dhjv": Phase="Running", Reason="", readiness=true. Elapsed: 18.101783463s
Jan  9 14:59:28.223: INFO: Pod "pod-subpath-test-secret-dhjv": Phase="Running", Reason="", readiness=true. Elapsed: 20.113221415s
Jan  9 14:59:30.231: INFO: Pod "pod-subpath-test-secret-dhjv": Phase="Running", Reason="", readiness=true. Elapsed: 22.121401826s
Jan  9 14:59:32.239: INFO: Pod "pod-subpath-test-secret-dhjv": Phase="Running", Reason="", readiness=true. Elapsed: 24.129014691s
Jan  9 14:59:34.245: INFO: Pod "pod-subpath-test-secret-dhjv": Phase="Running", Reason="", readiness=true. Elapsed: 26.135721015s
Jan  9 14:59:36.256: INFO: Pod "pod-subpath-test-secret-dhjv": Phase="Running", Reason="", readiness=true. Elapsed: 28.146043338s
Jan  9 14:59:38.274: INFO: Pod "pod-subpath-test-secret-dhjv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.164402375s
STEP: Saw pod success
Jan  9 14:59:38.274: INFO: Pod "pod-subpath-test-secret-dhjv" satisfied condition "success or failure"
Jan  9 14:59:38.281: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-dhjv container test-container-subpath-secret-dhjv: 
STEP: delete the pod
Jan  9 14:59:38.350: INFO: Waiting for pod pod-subpath-test-secret-dhjv to disappear
Jan  9 14:59:38.520: INFO: Pod pod-subpath-test-secret-dhjv no longer exists
STEP: Deleting pod pod-subpath-test-secret-dhjv
Jan  9 14:59:38.520: INFO: Deleting pod "pod-subpath-test-secret-dhjv" in namespace "subpath-2443"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 14:59:38.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2443" for this suite.
Jan  9 14:59:44.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 14:59:44.682: INFO: namespace subpath-2443 deletion completed in 6.122882074s

• [SLOW TEST:36.748 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
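The "Atomic writer volumes" label above refers to how the kubelet publishes Secret and ConfigMap payloads into a volume: it writes a fresh payload directory next to the old one, then swaps a `..data` symlink in a single rename, so readers never observe a half-written payload. A loose Python sketch of that swap (the file layout is simplified relative to the kubelet's actual scheme):

```python
import os
import tempfile

def atomic_update(volume_dir, files):
    """Publish a new payload under volume_dir by swapping a '..data' symlink.

    Readers that resolve volume_dir/..data see either the old payload or
    the new one, never a mixture, because the final rename is atomic on
    POSIX filesystems.
    """
    # Write the new payload into a fresh directory alongside the old one.
    new_dir = tempfile.mkdtemp(prefix="..data_", dir=volume_dir)
    for name, content in files.items():
        with open(os.path.join(new_dir, name), "w") as f:
            f.write(content)
    # Create a temporary symlink, then rename it over '..data' atomically.
    tmp_link = os.path.join(volume_dir, "..data_tmp")
    if os.path.lexists(tmp_link):
        os.remove(tmp_link)
    os.symlink(os.path.basename(new_dir), tmp_link)
    os.rename(tmp_link, os.path.join(volume_dir, "..data"))
```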
S
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 14:59:44.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3150.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3150.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3150.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3150.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3150.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3150.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  9 14:59:56.869: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3150/dns-test-ef7db7b5-9df6-4b13-a95d-16d0431680be: the server could not find the requested resource (get pods dns-test-ef7db7b5-9df6-4b13-a95d-16d0431680be)
Jan  9 14:59:56.874: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3150/dns-test-ef7db7b5-9df6-4b13-a95d-16d0431680be: the server could not find the requested resource (get pods dns-test-ef7db7b5-9df6-4b13-a95d-16d0431680be)
Jan  9 14:59:56.879: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-3150.svc.cluster.local from pod dns-3150/dns-test-ef7db7b5-9df6-4b13-a95d-16d0431680be: the server could not find the requested resource (get pods dns-test-ef7db7b5-9df6-4b13-a95d-16d0431680be)
Jan  9 14:59:56.887: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-3150/dns-test-ef7db7b5-9df6-4b13-a95d-16d0431680be: the server could not find the requested resource (get pods dns-test-ef7db7b5-9df6-4b13-a95d-16d0431680be)
Jan  9 14:59:56.892: INFO: Unable to read jessie_udp@PodARecord from pod dns-3150/dns-test-ef7db7b5-9df6-4b13-a95d-16d0431680be: the server could not find the requested resource (get pods dns-test-ef7db7b5-9df6-4b13-a95d-16d0431680be)
Jan  9 14:59:56.896: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3150/dns-test-ef7db7b5-9df6-4b13-a95d-16d0431680be: the server could not find the requested resource (get pods dns-test-ef7db7b5-9df6-4b13-a95d-16d0431680be)
Jan  9 14:59:56.896: INFO: Lookups using dns-3150/dns-test-ef7db7b5-9df6-4b13-a95d-16d0431680be failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-3150.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan  9 15:00:01.968: INFO: DNS probes using dns-3150/dns-test-ef7db7b5-9df6-4b13-a95d-16d0431680be succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 15:00:02.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3150" for this suite.
Jan  9 15:00:08.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 15:00:08.335: INFO: namespace dns-3150 deletion completed in 6.241400194s

• [SLOW TEST:23.653 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
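Each probe command above writes `OK` into `/results/<name>` when a lookup succeeds; the test then fetches those files back through the apiserver and reports any lookup whose result is missing, which is what the "Lookups using ... failed for: [...]" line summarizes, retrying until all succeed. A small sketch of that result-collection step (`read_result` stands in for the real per-file fetch, which can fail transiently as seen in the log):

```python
def failed_lookups(expected, read_result):
    """Return the probe names whose result file is missing or not 'OK'.

    expected:    iterable of probe names, e.g. "wheezy_udp@PodARecord"
    read_result: callable(name) -> file contents; may raise if the
                 result file cannot be fetched yet
    """
    failed = []
    for name in expected:
        try:
            ok = read_result(name).strip() == "OK"
        except Exception:
            ok = False  # treat an unreadable result file as a failed lookup
        if not ok:
            failed.append(name)
    return failed
```

An empty return value corresponds to the log's "DNS probes ... succeeded" line.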
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 15:00:08.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  9 15:00:08.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2954'
Jan  9 15:00:08.967: INFO: stderr: ""
Jan  9 15:00:08.967: INFO: stdout: "replicationcontroller/redis-master created\n"
Jan  9 15:00:08.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2954'
Jan  9 15:00:09.629: INFO: stderr: ""
Jan  9 15:00:09.629: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  9 15:00:10.647: INFO: Selector matched 1 pod for map[app:redis]
Jan  9 15:00:10.647: INFO: Found 0 / 1
Jan  9 15:00:11.655: INFO: Selector matched 1 pod for map[app:redis]
Jan  9 15:00:11.655: INFO: Found 0 / 1
Jan  9 15:00:12.638: INFO: Selector matched 1 pod for map[app:redis]
Jan  9 15:00:12.638: INFO: Found 0 / 1
Jan  9 15:00:13.640: INFO: Selector matched 1 pod for map[app:redis]
Jan  9 15:00:13.640: INFO: Found 0 / 1
Jan  9 15:00:14.637: INFO: Selector matched 1 pod for map[app:redis]
Jan  9 15:00:14.637: INFO: Found 0 / 1
Jan  9 15:00:15.641: INFO: Selector matched 1 pod for map[app:redis]
Jan  9 15:00:15.641: INFO: Found 0 / 1
Jan  9 15:00:16.637: INFO: Selector matched 1 pod for map[app:redis]
Jan  9 15:00:16.637: INFO: Found 1 / 1
Jan  9 15:00:16.637: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  9 15:00:16.640: INFO: Selector matched 1 pod for map[app:redis]
Jan  9 15:00:16.640: INFO: ForEach: Found 1 pod from the filter.  Now looping through them.
Jan  9 15:00:16.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-4z9bj --namespace=kubectl-2954'
Jan  9 15:00:16.819: INFO: stderr: ""
Jan  9 15:00:16.819: INFO: stdout: "Name:           redis-master-4z9bj\nNamespace:      kubectl-2954\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Thu, 09 Jan 2020 15:00:09 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    <none>\nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://c16d00e4c79e623e58d774dbd65129cc47d71fbdb28bd4204a2c877edfdabd1a\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Thu, 09 Jan 2020 15:00:15 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8hthp (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-8hthp:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-8hthp\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  8s    default-scheduler    Successfully assigned kubectl-2954/redis-master-4z9bj to iruya-node\n  Normal  Pulled     4s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    1s    kubelet, iruya-node  Started container redis-master\n"
Jan  9 15:00:16.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-2954'
Jan  9 15:00:16.948: INFO: stderr: ""
Jan  9 15:00:16.948: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-2954\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  8s    replication-controller  Created pod: redis-master-4z9bj\n"
Jan  9 15:00:16.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-2954'
Jan  9 15:00:17.053: INFO: stderr: ""
Jan  9 15:00:17.053: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-2954\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.98.161.127\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jan  9 15:00:17.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Jan  9 15:00:17.172: INFO: stderr: ""
Jan  9 15:00:17.172: INFO: stdout: "Name:               iruya-node\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             <none>\nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Thu, 09 Jan 2020 14:59:34 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Thu, 09 Jan 2020 14:59:34 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Thu, 09 Jan 2020 14:59:34 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Thu, 09 Jan 2020 14:59:34 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         158d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         89d\n  kubectl-2954               redis-master-4z9bj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              <none>\n"
Jan  9 15:00:17.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2954'
Jan  9 15:00:17.292: INFO: stderr: ""
Jan  9 15:00:17.292: INFO: stdout: "Name:         kubectl-2954\nLabels:       e2e-framework=kubectl\n              e2e-run=cd08ceec-a962-4738-b750-0c49299814ab\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 15:00:17.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2954" for this suite.
Jan  9 15:00:39.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 15:00:39.455: INFO: namespace kubectl-2954 deletion completed in 22.158759422s

• [SLOW TEST:31.117 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 15:00:39.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0109 15:01:10.404853       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  9 15:01:10.405: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 15:01:10.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3928" for this suite.
Jan  9 15:01:18.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 15:01:19.342: INFO: namespace gc-3928 deletion completed in 8.931732754s

• [SLOW TEST:39.886 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
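For reference, the orphan behavior exercised by the garbage-collector test above corresponds to deleting the Deployment with `propagationPolicy: Orphan` in the delete request body, which leaves the Deployment's ReplicaSets in place. A minimal sketch of that request body (names and shape are illustrative, not taken from the suite's source):

```yaml
# Hypothetical DeleteOptions body for
#   DELETE /apis/apps/v1/namespaces/<ns>/deployments/<name>
# propagationPolicy: Orphan tells the garbage collector NOT to cascade the
# delete, so dependent ReplicaSets survive -- exactly what the test waits
# 30 seconds to verify above.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan
```

With kubectl on a cluster of this vintage, the equivalent is `kubectl delete deployment <name> --cascade=false`.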
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 15:01:19.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  9 15:01:19.662: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0599b404-c1cd-408c-aa19-671e287f06d6" in namespace "projected-3834" to be "success or failure"
Jan  9 15:01:19.673: INFO: Pod "downwardapi-volume-0599b404-c1cd-408c-aa19-671e287f06d6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.57678ms
Jan  9 15:01:21.682: INFO: Pod "downwardapi-volume-0599b404-c1cd-408c-aa19-671e287f06d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020066385s
Jan  9 15:01:23.708: INFO: Pod "downwardapi-volume-0599b404-c1cd-408c-aa19-671e287f06d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046063489s
Jan  9 15:01:25.716: INFO: Pod "downwardapi-volume-0599b404-c1cd-408c-aa19-671e287f06d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05474709s
Jan  9 15:01:27.725: INFO: Pod "downwardapi-volume-0599b404-c1cd-408c-aa19-671e287f06d6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063381067s
Jan  9 15:01:29.734: INFO: Pod "downwardapi-volume-0599b404-c1cd-408c-aa19-671e287f06d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071904305s
STEP: Saw pod success
Jan  9 15:01:29.734: INFO: Pod "downwardapi-volume-0599b404-c1cd-408c-aa19-671e287f06d6" satisfied condition "success or failure"
Jan  9 15:01:29.738: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0599b404-c1cd-408c-aa19-671e287f06d6 container client-container: 
STEP: delete the pod
Jan  9 15:01:29.994: INFO: Waiting for pod downwardapi-volume-0599b404-c1cd-408c-aa19-671e287f06d6 to disappear
Jan  9 15:01:30.131: INFO: Pod downwardapi-volume-0599b404-c1cd-408c-aa19-671e287f06d6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 15:01:30.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3834" for this suite.
Jan  9 15:01:36.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 15:01:36.341: INFO: namespace projected-3834 deletion completed in 6.201879211s

• [SLOW TEST:16.998 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
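The "downward API volume plugin" pod created in the test above exposes the container's own CPU request through a projected volume and then reads it back from the mounted file. A minimal sketch of such a pod, assuming a stand-in busybox image and hypothetical names (the suite uses its own test image and generated names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # stand-in; the e2e suite uses its own image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                    # the value read back through the volume
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m            # report the request in millicores
```

The pod runs to completion ("Succeeded"), matching the test's "success or failure" wait condition.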
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 15:01:36.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3039
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  9 15:01:36.465: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  9 15:02:12.740: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3039 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  9 15:02:12.740: INFO: >>> kubeConfig: /root/.kube/config
I0109 15:02:12.833583       8 log.go:172] (0xc002a168f0) (0xc0015d4460) Create stream
I0109 15:02:12.833644       8 log.go:172] (0xc002a168f0) (0xc0015d4460) Stream added, broadcasting: 1
I0109 15:02:12.844025       8 log.go:172] (0xc002a168f0) Reply frame received for 1
I0109 15:02:12.844158       8 log.go:172] (0xc002a168f0) (0xc0024eca00) Create stream
I0109 15:02:12.844182       8 log.go:172] (0xc002a168f0) (0xc0024eca00) Stream added, broadcasting: 3
I0109 15:02:12.846796       8 log.go:172] (0xc002a168f0) Reply frame received for 3
I0109 15:02:12.846846       8 log.go:172] (0xc002a168f0) (0xc0015d4500) Create stream
I0109 15:02:12.846859       8 log.go:172] (0xc002a168f0) (0xc0015d4500) Stream added, broadcasting: 5
I0109 15:02:12.850031       8 log.go:172] (0xc002a168f0) Reply frame received for 5
I0109 15:02:12.995832       8 log.go:172] (0xc002a168f0) Data frame received for 3
I0109 15:02:12.995923       8 log.go:172] (0xc0024eca00) (3) Data frame handling
I0109 15:02:12.995957       8 log.go:172] (0xc0024eca00) (3) Data frame sent
I0109 15:02:13.147130       8 log.go:172] (0xc002a168f0) (0xc0024eca00) Stream removed, broadcasting: 3
I0109 15:02:13.147255       8 log.go:172] (0xc002a168f0) Data frame received for 1
I0109 15:02:13.147287       8 log.go:172] (0xc0015d4460) (1) Data frame handling
I0109 15:02:13.147308       8 log.go:172] (0xc0015d4460) (1) Data frame sent
I0109 15:02:13.147327       8 log.go:172] (0xc002a168f0) (0xc0015d4460) Stream removed, broadcasting: 1
I0109 15:02:13.147426       8 log.go:172] (0xc002a168f0) (0xc0015d4500) Stream removed, broadcasting: 5
I0109 15:02:13.147466       8 log.go:172] (0xc002a168f0) Go away received
I0109 15:02:13.147552       8 log.go:172] (0xc002a168f0) (0xc0015d4460) Stream removed, broadcasting: 1
I0109 15:02:13.147568       8 log.go:172] (0xc002a168f0) (0xc0024eca00) Stream removed, broadcasting: 3
I0109 15:02:13.147577       8 log.go:172] (0xc002a168f0) (0xc0015d4500) Stream removed, broadcasting: 5
Jan  9 15:02:13.147: INFO: Found all expected endpoints: [netserver-0]
Jan  9 15:02:13.155: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3039 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  9 15:02:13.155: INFO: >>> kubeConfig: /root/.kube/config
I0109 15:02:13.208973       8 log.go:172] (0xc000d9ac60) (0xc00176bea0) Create stream
I0109 15:02:13.209073       8 log.go:172] (0xc000d9ac60) (0xc00176bea0) Stream added, broadcasting: 1
I0109 15:02:13.216273       8 log.go:172] (0xc000d9ac60) Reply frame received for 1
I0109 15:02:13.216370       8 log.go:172] (0xc000d9ac60) (0xc00247a000) Create stream
I0109 15:02:13.216378       8 log.go:172] (0xc000d9ac60) (0xc00247a000) Stream added, broadcasting: 3
I0109 15:02:13.217710       8 log.go:172] (0xc000d9ac60) Reply frame received for 3
I0109 15:02:13.217731       8 log.go:172] (0xc000d9ac60) (0xc0021fb7c0) Create stream
I0109 15:02:13.217739       8 log.go:172] (0xc000d9ac60) (0xc0021fb7c0) Stream added, broadcasting: 5
I0109 15:02:13.219300       8 log.go:172] (0xc000d9ac60) Reply frame received for 5
I0109 15:02:13.370400       8 log.go:172] (0xc000d9ac60) Data frame received for 3
I0109 15:02:13.370478       8 log.go:172] (0xc00247a000) (3) Data frame handling
I0109 15:02:13.370506       8 log.go:172] (0xc00247a000) (3) Data frame sent
I0109 15:02:13.536582       8 log.go:172] (0xc000d9ac60) Data frame received for 1
I0109 15:02:13.536666       8 log.go:172] (0xc000d9ac60) (0xc0021fb7c0) Stream removed, broadcasting: 5
I0109 15:02:13.536720       8 log.go:172] (0xc00176bea0) (1) Data frame handling
I0109 15:02:13.536750       8 log.go:172] (0xc00176bea0) (1) Data frame sent
I0109 15:02:13.536797       8 log.go:172] (0xc000d9ac60) (0xc00247a000) Stream removed, broadcasting: 3
I0109 15:02:13.536864       8 log.go:172] (0xc000d9ac60) (0xc00176bea0) Stream removed, broadcasting: 1
I0109 15:02:13.536896       8 log.go:172] (0xc000d9ac60) Go away received
I0109 15:02:13.537185       8 log.go:172] (0xc000d9ac60) (0xc00176bea0) Stream removed, broadcasting: 1
I0109 15:02:13.537235       8 log.go:172] (0xc000d9ac60) (0xc00247a000) Stream removed, broadcasting: 3
I0109 15:02:13.537252       8 log.go:172] (0xc000d9ac60) (0xc0021fb7c0) Stream removed, broadcasting: 5
Jan  9 15:02:13.537: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 15:02:13.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3039" for this suite.
Jan  9 15:02:37.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 15:02:37.731: INFO: namespace pod-network-test-3039 deletion completed in 24.179601852s

• [SLOW TEST:61.389 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 15:02:37.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 15:03:37.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8109" for this suite.
Jan  9 15:04:00.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 15:04:00.190: INFO: namespace container-probe-8109 deletion completed in 22.289662819s

• [SLOW TEST:82.459 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
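The probe test above relies on a key asymmetry: a failing *readiness* probe only keeps the pod out of service endpoints, while only a failing *liveness* probe triggers restarts. A minimal sketch of a pod that is never ready and never restarted, with a stand-in image and hypothetical name:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-fail-demo          # hypothetical name
spec:
  containers:
  - name: probe-test
    image: busybox                   # stand-in; the e2e suite uses its own image
    args: ["sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]      # always fails: pod never becomes Ready
      initialDelaySeconds: 5
      periodSeconds: 5
    # no livenessProbe, so the kubelet never restarts the container
```

The test observes the pod for a window (roughly a minute above) and asserts Ready stays false and the restart count stays zero.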
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 15:04:00.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-5145/secret-test-5fb66f4d-a525-4464-86fc-49529dd6bd9d
STEP: Creating a pod to test consume secrets
Jan  9 15:04:00.445: INFO: Waiting up to 5m0s for pod "pod-configmaps-f841535c-6733-499e-88b9-4c902a5969ac" in namespace "secrets-5145" to be "success or failure"
Jan  9 15:04:00.458: INFO: Pod "pod-configmaps-f841535c-6733-499e-88b9-4c902a5969ac": Phase="Pending", Reason="", readiness=false. Elapsed: 13.177585ms
Jan  9 15:04:02.470: INFO: Pod "pod-configmaps-f841535c-6733-499e-88b9-4c902a5969ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025257681s
Jan  9 15:04:04.477: INFO: Pod "pod-configmaps-f841535c-6733-499e-88b9-4c902a5969ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032590438s
Jan  9 15:04:06.597: INFO: Pod "pod-configmaps-f841535c-6733-499e-88b9-4c902a5969ac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152264386s
Jan  9 15:04:08.636: INFO: Pod "pod-configmaps-f841535c-6733-499e-88b9-4c902a5969ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.190819929s
STEP: Saw pod success
Jan  9 15:04:08.636: INFO: Pod "pod-configmaps-f841535c-6733-499e-88b9-4c902a5969ac" satisfied condition "success or failure"
Jan  9 15:04:08.641: INFO: Trying to get logs from node iruya-node pod pod-configmaps-f841535c-6733-499e-88b9-4c902a5969ac container env-test: 
STEP: delete the pod
Jan  9 15:04:08.689: INFO: Waiting for pod pod-configmaps-f841535c-6733-499e-88b9-4c902a5969ac to disappear
Jan  9 15:04:08.693: INFO: Pod pod-configmaps-f841535c-6733-499e-88b9-4c902a5969ac no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 15:04:08.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5145" for this suite.
Jan  9 15:04:14.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 15:04:14.844: INFO: namespace secrets-5145 deletion completed in 6.144214662s

• [SLOW TEST:14.654 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
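The "consumable via the environment" test above creates a Secret and a pod whose container pulls a key from it through `secretKeyRef`. A minimal sketch under the same pattern, with hypothetical names and a stand-in image:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-env-demo              # hypothetical name
type: Opaque
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: env-test-pod                 # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox                   # stand-in; the e2e suite uses its own image
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: data-1
```

As in the log, the pod runs once, prints the injected value, and the test then verifies it from the container logs.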
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 15:04:14.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  9 15:04:14.966: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan  9 15:04:19.974: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  9 15:04:21.985: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan  9 15:04:23.991: INFO: Creating deployment "test-rollover-deployment"
Jan  9 15:04:24.065: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan  9 15:04:26.084: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan  9 15:04:26.095: INFO: Ensure that both replica sets have 1 created replica
Jan  9 15:04:26.104: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan  9 15:04:26.119: INFO: Updating deployment test-rollover-deployment
Jan  9 15:04:26.119: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan  9 15:04:28.140: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan  9 15:04:28.153: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan  9 15:04:28.159: INFO: all replica sets need to contain the pod-template-hash label
Jan  9 15:04:28.159: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179066, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  9 15:04:30.177: INFO: all replica sets need to contain the pod-template-hash label
Jan  9 15:04:30.177: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179066, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  9 15:04:32.185: INFO: all replica sets need to contain the pod-template-hash label
Jan  9 15:04:32.185: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179066, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  9 15:04:34.189: INFO: all replica sets need to contain the pod-template-hash label
Jan  9 15:04:34.189: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179066, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  9 15:04:36.172: INFO: all replica sets need to contain the pod-template-hash label
Jan  9 15:04:36.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179074, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  9 15:04:38.172: INFO: all replica sets need to contain the pod-template-hash label
Jan  9 15:04:38.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179074, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  9 15:04:40.180: INFO: all replica sets need to contain the pod-template-hash label
Jan  9 15:04:40.180: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179074, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  9 15:04:42.182: INFO: all replica sets need to contain the pod-template-hash label
Jan  9 15:04:42.182: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179074, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  9 15:04:44.170: INFO: all replica sets need to contain the pod-template-hash label
Jan  9 15:04:44.170: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179074, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714179064, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  9 15:04:46.172: INFO: 
Jan  9 15:04:46.172: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  9 15:04:46.181: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-928,SelfLink:/apis/apps/v1/namespaces/deployment-928/deployments/test-rollover-deployment,UID:8f4742aa-f2ee-4c55-9d7e-956a1f6de079,ResourceVersion:19917475,Generation:2,CreationTimestamp:2020-01-09 15:04:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-09 15:04:24 +0000 UTC 2020-01-09 15:04:24 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-09 15:04:44 +0000 UTC 2020-01-09 15:04:24 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  9 15:04:46.185: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-928,SelfLink:/apis/apps/v1/namespaces/deployment-928/replicasets/test-rollover-deployment-854595fc44,UID:4d9becb3-3840-42b3-b40f-1fc19aa73d93,ResourceVersion:19917464,Generation:2,CreationTimestamp:2020-01-09 15:04:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 8f4742aa-f2ee-4c55-9d7e-956a1f6de079 0xc002ee3cb7 0xc002ee3cb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  9 15:04:46.185: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan  9 15:04:46.185: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-928,SelfLink:/apis/apps/v1/namespaces/deployment-928/replicasets/test-rollover-controller,UID:93e3a6f6-8b51-4a78-8fa1-bb87b4bde4dc,ResourceVersion:19917474,Generation:2,CreationTimestamp:2020-01-09 15:04:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 8f4742aa-f2ee-4c55-9d7e-956a1f6de079 0xc002ee3bbf 0xc002ee3be0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  9 15:04:46.186: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-928,SelfLink:/apis/apps/v1/namespaces/deployment-928/replicasets/test-rollover-deployment-9b8b997cf,UID:4242aecd-8d62-4784-ae88-0c334c7c07a5,ResourceVersion:19917426,Generation:2,CreationTimestamp:2020-01-09 15:04:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 8f4742aa-f2ee-4c55-9d7e-956a1f6de079 0xc002ee3d90 0xc002ee3d91}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  9 15:04:46.190: INFO: Pod "test-rollover-deployment-854595fc44-8fl4v" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-8fl4v,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-928,SelfLink:/api/v1/namespaces/deployment-928/pods/test-rollover-deployment-854595fc44-8fl4v,UID:0faad115-fa3c-4de8-8efc-d3e41023cbf8,ResourceVersion:19917447,Generation:0,CreationTimestamp:2020-01-09 15:04:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 4d9becb3-3840-42b3-b40f-1fc19aa73d93 0xc00234a6a7 0xc00234a6a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2khnf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2khnf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-2khnf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00234a730} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00234a750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 15:04:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 15:04:34 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 15:04:34 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 15:04:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-09 15:04:26 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-09 15:04:34 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://acd03701e0e7b7c4744f78896eb0bb844f570861d6ca189a8fb63153bbc06590}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 15:04:46.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-928" for this suite.
Jan  9 15:04:52.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 15:04:52.388: INFO: namespace deployment-928 deletion completed in 6.19208549s

• [SLOW TEST:37.543 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 15:04:52.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Jan  9 15:04:52.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4142'
Jan  9 15:04:55.963: INFO: stderr: ""
Jan  9 15:04:55.963: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  9 15:04:55.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4142'
Jan  9 15:04:56.228: INFO: stderr: ""
Jan  9 15:04:56.228: INFO: stdout: "update-demo-nautilus-5jfjk update-demo-nautilus-jk5f7 "
Jan  9 15:04:56.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jfjk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4142'
Jan  9 15:04:56.379: INFO: stderr: ""
Jan  9 15:04:56.380: INFO: stdout: ""
Jan  9 15:04:56.380: INFO: update-demo-nautilus-5jfjk is created but not running
Jan  9 15:05:01.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4142'
Jan  9 15:05:01.556: INFO: stderr: ""
Jan  9 15:05:01.556: INFO: stdout: "update-demo-nautilus-5jfjk update-demo-nautilus-jk5f7 "
Jan  9 15:05:01.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jfjk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4142'
Jan  9 15:05:01.673: INFO: stderr: ""
Jan  9 15:05:01.673: INFO: stdout: ""
Jan  9 15:05:01.673: INFO: update-demo-nautilus-5jfjk is created but not running
Jan  9 15:05:06.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4142'
Jan  9 15:05:06.788: INFO: stderr: ""
Jan  9 15:05:06.788: INFO: stdout: "update-demo-nautilus-5jfjk update-demo-nautilus-jk5f7 "
Jan  9 15:05:06.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jfjk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4142'
Jan  9 15:05:06.945: INFO: stderr: ""
Jan  9 15:05:06.945: INFO: stdout: ""
Jan  9 15:05:06.945: INFO: update-demo-nautilus-5jfjk is created but not running
Jan  9 15:05:11.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4142'
Jan  9 15:05:12.069: INFO: stderr: ""
Jan  9 15:05:12.069: INFO: stdout: "update-demo-nautilus-5jfjk update-demo-nautilus-jk5f7 "
Jan  9 15:05:12.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jfjk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4142'
Jan  9 15:05:12.265: INFO: stderr: ""
Jan  9 15:05:12.265: INFO: stdout: "true"
Jan  9 15:05:12.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jfjk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4142'
Jan  9 15:05:12.393: INFO: stderr: ""
Jan  9 15:05:12.394: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  9 15:05:12.394: INFO: validating pod update-demo-nautilus-5jfjk
Jan  9 15:05:12.409: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  9 15:05:12.409: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  9 15:05:12.409: INFO: update-demo-nautilus-5jfjk is verified up and running
Jan  9 15:05:12.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jk5f7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4142'
Jan  9 15:05:12.550: INFO: stderr: ""
Jan  9 15:05:12.550: INFO: stdout: "true"
Jan  9 15:05:12.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jk5f7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4142'
Jan  9 15:05:12.649: INFO: stderr: ""
Jan  9 15:05:12.649: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  9 15:05:12.650: INFO: validating pod update-demo-nautilus-jk5f7
Jan  9 15:05:12.663: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  9 15:05:12.663: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  9 15:05:12.663: INFO: update-demo-nautilus-jk5f7 is verified up and running
STEP: rolling-update to new replication controller
Jan  9 15:05:12.665: INFO: scanned /root for discovery docs: 
Jan  9 15:05:12.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4142'
Jan  9 15:05:44.066: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  9 15:05:44.066: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  9 15:05:44.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4142'
Jan  9 15:05:44.215: INFO: stderr: ""
Jan  9 15:05:44.216: INFO: stdout: "update-demo-kitten-wswxn update-demo-kitten-xqvts "
Jan  9 15:05:44.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wswxn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4142'
Jan  9 15:05:44.509: INFO: stderr: ""
Jan  9 15:05:44.509: INFO: stdout: "true"
Jan  9 15:05:44.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wswxn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4142'
Jan  9 15:05:44.650: INFO: stderr: ""
Jan  9 15:05:44.650: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  9 15:05:44.650: INFO: validating pod update-demo-kitten-wswxn
Jan  9 15:05:44.693: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  9 15:05:44.693: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan  9 15:05:44.693: INFO: update-demo-kitten-wswxn is verified up and running
Jan  9 15:05:44.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xqvts -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4142'
Jan  9 15:05:44.786: INFO: stderr: ""
Jan  9 15:05:44.786: INFO: stdout: "true"
Jan  9 15:05:44.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xqvts -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4142'
Jan  9 15:05:44.881: INFO: stderr: ""
Jan  9 15:05:44.881: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  9 15:05:44.881: INFO: validating pod update-demo-kitten-xqvts
Jan  9 15:05:44.906: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  9 15:05:44.906: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan  9 15:05:44.906: INFO: update-demo-kitten-xqvts is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 15:05:44.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4142" for this suite.
Jan  9 15:06:24.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 15:06:25.074: INFO: namespace kubectl-4142 deletion completed in 40.162896931s

• [SLOW TEST:92.685 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 15:06:25.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Jan  9 15:06:26.135: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2609" to be "success or failure"
Jan  9 15:06:26.140: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.676366ms
Jan  9 15:06:28.150: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014487878s
Jan  9 15:06:30.223: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087651656s
Jan  9 15:06:32.231: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095718608s
Jan  9 15:06:34.253: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11726539s
Jan  9 15:06:36.358: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.222280744s
Jan  9 15:06:38.371: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.235643748s
STEP: Saw pod success
Jan  9 15:06:38.371: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan  9 15:06:38.376: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan  9 15:06:38.761: INFO: Waiting for pod pod-host-path-test to disappear
Jan  9 15:06:38.774: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 15:06:38.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-2609" for this suite.
Jan  9 15:06:44.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 15:06:45.006: INFO: namespace hostpath-2609 deletion completed in 6.224194483s

• [SLOW TEST:19.931 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 15:06:45.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jan  9 15:06:53.151: INFO: Pod pod-hostip-5a0e3a5f-2dc9-4c05-be40-aeef65957503 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 15:06:53.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3282" for this suite.
Jan  9 15:07:15.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 15:07:15.348: INFO: namespace pods-3282 deletion completed in 22.19151262s

• [SLOW TEST:30.342 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 15:07:15.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Jan  9 15:07:15.444: INFO: Waiting up to 5m0s for pod "var-expansion-afa77df7-bc55-4b78-866a-9dda6820e8ff" in namespace "var-expansion-1074" to be "success or failure"
Jan  9 15:07:15.517: INFO: Pod "var-expansion-afa77df7-bc55-4b78-866a-9dda6820e8ff": Phase="Pending", Reason="", readiness=false. Elapsed: 73.103101ms
Jan  9 15:07:17.527: INFO: Pod "var-expansion-afa77df7-bc55-4b78-866a-9dda6820e8ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082618246s
Jan  9 15:07:19.535: INFO: Pod "var-expansion-afa77df7-bc55-4b78-866a-9dda6820e8ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090859774s
Jan  9 15:07:21.544: INFO: Pod "var-expansion-afa77df7-bc55-4b78-866a-9dda6820e8ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100573552s
Jan  9 15:07:23.558: INFO: Pod "var-expansion-afa77df7-bc55-4b78-866a-9dda6820e8ff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.113936862s
Jan  9 15:07:25.585: INFO: Pod "var-expansion-afa77df7-bc55-4b78-866a-9dda6820e8ff": Phase="Pending", Reason="", readiness=false. Elapsed: 10.14110329s
Jan  9 15:07:28.239: INFO: Pod "var-expansion-afa77df7-bc55-4b78-866a-9dda6820e8ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.794910224s
STEP: Saw pod success
Jan  9 15:07:28.239: INFO: Pod "var-expansion-afa77df7-bc55-4b78-866a-9dda6820e8ff" satisfied condition "success or failure"
Jan  9 15:07:28.244: INFO: Trying to get logs from node iruya-node pod var-expansion-afa77df7-bc55-4b78-866a-9dda6820e8ff container dapi-container: 
STEP: delete the pod
Jan  9 15:07:28.296: INFO: Waiting for pod var-expansion-afa77df7-bc55-4b78-866a-9dda6820e8ff to disappear
Jan  9 15:07:28.312: INFO: Pod var-expansion-afa77df7-bc55-4b78-866a-9dda6820e8ff no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 15:07:28.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1074" for this suite.
Jan  9 15:07:34.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 15:07:34.559: INFO: namespace var-expansion-1074 deletion completed in 6.232576926s

• [SLOW TEST:19.211 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
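The spec above verifies Kubernetes' `$(VAR)` environment-variable composition: later env vars may reference earlier ones, `$$` escapes a literal dollar, and unknown references are left verbatim. A minimal standalone sketch of those semantics (an illustrative reimplementation, not the actual Kubernetes expansion code; the `FOO`/`BAR` values are assumed, since the log does not show the pod spec):

```python
def expand(value, env):
    """Expand $(VAR) references roughly the way Kubernetes composes env vars.

    - "$(VAR)" is replaced by env["VAR"] when the key exists;
      unknown references are left verbatim.
    - "$$" escapes a literal "$", so "$$(VAR)" yields "$(VAR)".
    (Illustrative sketch only, not the upstream implementation.)
    """
    out = []
    i = 0
    while i < len(value):
        c = value[i]
        if c == "$" and i + 1 < len(value):
            nxt = value[i + 1]
            if nxt == "$":                    # "$$" -> literal "$"
                out.append("$")
                i += 2
                continue
            if nxt == "(":
                end = value.find(")", i + 2)
                if end != -1:
                    name = value[i + 2:end]
                    if name in env:
                        out.append(env[name])
                    else:                     # unknown ref stays verbatim
                        out.append(value[i:end + 1])
                    i = end + 1
                    continue
        out.append(c)
        i += 1
    return "".join(out)

# Hypothetical pod env mirroring the test's shape: two plain vars plus
# one composed from them.
env = {"FOO": "foo-value", "BAR": "bar-value"}
print(expand("$(FOO);;$(BAR)", env))   # foo-value;;bar-value
```

The unknown-reference rule is why a typo like `$(FOOO)` surfaces literally in the container rather than as an empty string.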
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 15:07:34.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan  9 15:07:34.669: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2322,SelfLink:/api/v1/namespaces/watch-2322/configmaps/e2e-watch-test-configmap-a,UID:e2791545-d789-4f14-8c48-c085de463bdf,ResourceVersion:19917935,Generation:0,CreationTimestamp:2020-01-09 15:07:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  9 15:07:34.669: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2322,SelfLink:/api/v1/namespaces/watch-2322/configmaps/e2e-watch-test-configmap-a,UID:e2791545-d789-4f14-8c48-c085de463bdf,ResourceVersion:19917935,Generation:0,CreationTimestamp:2020-01-09 15:07:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan  9 15:07:44.681: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2322,SelfLink:/api/v1/namespaces/watch-2322/configmaps/e2e-watch-test-configmap-a,UID:e2791545-d789-4f14-8c48-c085de463bdf,ResourceVersion:19917951,Generation:0,CreationTimestamp:2020-01-09 15:07:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  9 15:07:44.681: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2322,SelfLink:/api/v1/namespaces/watch-2322/configmaps/e2e-watch-test-configmap-a,UID:e2791545-d789-4f14-8c48-c085de463bdf,ResourceVersion:19917951,Generation:0,CreationTimestamp:2020-01-09 15:07:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan  9 15:07:54.697: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2322,SelfLink:/api/v1/namespaces/watch-2322/configmaps/e2e-watch-test-configmap-a,UID:e2791545-d789-4f14-8c48-c085de463bdf,ResourceVersion:19917965,Generation:0,CreationTimestamp:2020-01-09 15:07:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  9 15:07:54.697: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2322,SelfLink:/api/v1/namespaces/watch-2322/configmaps/e2e-watch-test-configmap-a,UID:e2791545-d789-4f14-8c48-c085de463bdf,ResourceVersion:19917965,Generation:0,CreationTimestamp:2020-01-09 15:07:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan  9 15:08:04.724: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2322,SelfLink:/api/v1/namespaces/watch-2322/configmaps/e2e-watch-test-configmap-a,UID:e2791545-d789-4f14-8c48-c085de463bdf,ResourceVersion:19917979,Generation:0,CreationTimestamp:2020-01-09 15:07:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  9 15:08:04.725: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2322,SelfLink:/api/v1/namespaces/watch-2322/configmaps/e2e-watch-test-configmap-a,UID:e2791545-d789-4f14-8c48-c085de463bdf,ResourceVersion:19917979,Generation:0,CreationTimestamp:2020-01-09 15:07:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan  9 15:08:14.744: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2322,SelfLink:/api/v1/namespaces/watch-2322/configmaps/e2e-watch-test-configmap-b,UID:bb7686c4-419b-444f-8cef-537516efafe0,ResourceVersion:19917993,Generation:0,CreationTimestamp:2020-01-09 15:08:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  9 15:08:14.744: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2322,SelfLink:/api/v1/namespaces/watch-2322/configmaps/e2e-watch-test-configmap-b,UID:bb7686c4-419b-444f-8cef-537516efafe0,ResourceVersion:19917993,Generation:0,CreationTimestamp:2020-01-09 15:08:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan  9 15:08:24.760: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2322,SelfLink:/api/v1/namespaces/watch-2322/configmaps/e2e-watch-test-configmap-b,UID:bb7686c4-419b-444f-8cef-537516efafe0,ResourceVersion:19918008,Generation:0,CreationTimestamp:2020-01-09 15:08:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  9 15:08:24.761: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2322,SelfLink:/api/v1/namespaces/watch-2322/configmaps/e2e-watch-test-configmap-b,UID:bb7686c4-419b-444f-8cef-537516efafe0,ResourceVersion:19918008,Generation:0,CreationTimestamp:2020-01-09 15:08:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 15:08:34.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2322" for this suite.
Jan  9 15:08:40.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 15:08:40.917: INFO: namespace watch-2322 deletion completed in 6.144558464s

• [SLOW TEST:66.356 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
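The Watchers spec above opens three watches with different label selectors (label A, label B, A or B) and checks that each watcher sees exactly the ADDED/MODIFIED/DELETED notifications for objects matching its selector. A toy in-memory broadcaster sketching that routing (not the real apiserver watch machinery; class and method names are invented for illustration):

```python
def matches(labels, selector):
    """selector: {key: {allowed values}}; mimics a set-based label
    selector like `watch-this-configmap in (A, B)`."""
    return all(labels.get(k) in vals for k, vals in selector.items())

class FakeWatch:
    def __init__(self, selector):
        self.selector = selector
        self.events = []              # (event_type, object_name) pairs

class Broadcaster:
    """Routes notifications to every watch whose selector matches."""
    def __init__(self):
        self.watches = []

    def watch(self, selector):
        w = FakeWatch(selector)
        self.watches.append(w)
        return w

    def notify(self, event_type, name, labels):
        for w in self.watches:
            if matches(labels, w.selector):
                w.events.append((event_type, name))

b = Broadcaster()
key = "watch-this-configmap"
watch_a  = b.watch({key: {"multiple-watchers-A"}})
watch_b  = b.watch({key: {"multiple-watchers-B"}})
watch_ab = b.watch({key: {"multiple-watchers-A", "multiple-watchers-B"}})

# Replay the sequence the log shows: three events on configmap A,
# then two on configmap B.
b.notify("ADDED",    "e2e-watch-test-configmap-a", {key: "multiple-watchers-A"})
b.notify("MODIFIED", "e2e-watch-test-configmap-a", {key: "multiple-watchers-A"})
b.notify("DELETED",  "e2e-watch-test-configmap-a", {key: "multiple-watchers-A"})
b.notify("ADDED",    "e2e-watch-test-configmap-b", {key: "multiple-watchers-B"})
b.notify("DELETED",  "e2e-watch-test-configmap-b", {key: "multiple-watchers-B"})

print(len(watch_a.events), len(watch_b.events), len(watch_ab.events))  # 3 2 5
```

This is why every `Got : ...` line in the log appears twice: once for the single-label watcher and once for the A-or-B watcher.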
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 15:08:40.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  9 15:08:51.588: INFO: Successfully updated pod "labelsupdate06872a01-8cf0-4f80-924f-ab85676cbb69"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 15:08:53.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5862" for this suite.
Jan  9 15:09:15.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 15:09:15.967: INFO: namespace projected-5862 deletion completed in 22.245866129s

• [SLOW TEST:35.049 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
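The spec above patches a running pod's labels and waits for the projected downward API volume to reflect the change. A downward API `labels` file is rendered as one `key="value"` pair per line; a small sketch of that rendering before and after an update (label names here are illustrative, since the log does not show them, and the real kubelet rewrites the tmpfs file on its periodic sync rather than calling a helper like this):

```python
def format_labels(labels):
    """Render pod labels as a downward API volume file:
    one key="value" pair per line, keys sorted."""
    return "".join(f'{k}="{v}"\n' for k, v in sorted(labels.items()))

# Pod starts with one label; the test then patches in a second one and
# polls the mounted file until the new line appears.
before = format_labels({"key1": "value1"})
after  = format_labels({"key1": "value1", "key2": "value2"})
print(before, after, sep="")
```

Because the kubelet refreshes the projection asynchronously, the test polls rather than asserting immediately after the patch.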
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 15:09:15.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-f5036159-cc70-4a73-bf70-a12a7a88e9e3
STEP: Creating a pod to test consume secrets
Jan  9 15:09:16.107: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-017ba14f-f61e-4873-aaa4-a7f87e240463" in namespace "projected-3368" to be "success or failure"
Jan  9 15:09:16.135: INFO: Pod "pod-projected-secrets-017ba14f-f61e-4873-aaa4-a7f87e240463": Phase="Pending", Reason="", readiness=false. Elapsed: 27.239477ms
Jan  9 15:09:18.183: INFO: Pod "pod-projected-secrets-017ba14f-f61e-4873-aaa4-a7f87e240463": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075578963s
Jan  9 15:09:20.193: INFO: Pod "pod-projected-secrets-017ba14f-f61e-4873-aaa4-a7f87e240463": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085829321s
Jan  9 15:09:22.200: INFO: Pod "pod-projected-secrets-017ba14f-f61e-4873-aaa4-a7f87e240463": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09282742s
Jan  9 15:09:24.213: INFO: Pod "pod-projected-secrets-017ba14f-f61e-4873-aaa4-a7f87e240463": Phase="Pending", Reason="", readiness=false. Elapsed: 8.106017287s
Jan  9 15:09:26.236: INFO: Pod "pod-projected-secrets-017ba14f-f61e-4873-aaa4-a7f87e240463": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.128537393s
STEP: Saw pod success
Jan  9 15:09:26.236: INFO: Pod "pod-projected-secrets-017ba14f-f61e-4873-aaa4-a7f87e240463" satisfied condition "success or failure"
Jan  9 15:09:26.248: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-017ba14f-f61e-4873-aaa4-a7f87e240463 container secret-volume-test: 
STEP: delete the pod
Jan  9 15:09:26.435: INFO: Waiting for pod pod-projected-secrets-017ba14f-f61e-4873-aaa4-a7f87e240463 to disappear
Jan  9 15:09:26.444: INFO: Pod pod-projected-secrets-017ba14f-f61e-4873-aaa4-a7f87e240463 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 15:09:26.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3368" for this suite.
Jan  9 15:09:32.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 15:09:32.781: INFO: namespace projected-3368 deletion completed in 6.292967641s

• [SLOW TEST:16.813 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
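The projected-secret spec above mounts one secret into a pod at two different volume paths and checks both mounts expose the same decoded content. A sketch of that materialization (secret data is stored base64-encoded and decoded into per-key files; file and path names are illustrative, and the real kubelet writes actual tmpfs files rather than returning a dict):

```python
import base64

# One secret, base64-encoded at rest, as the API stores it.
secret_data = {"secret-file": base64.b64encode(b"value-1").decode()}

def mount_secret(secret, mount_path):
    """Materialize a secret as {file_path: bytes}, the way a projected
    secret volume exposes it inside the container (illustrative)."""
    return {f"{mount_path}/{name}": base64.b64decode(payload)
            for name, payload in secret.items()}

# Same secret, two mount points -> identical bytes at both paths.
vol1 = mount_secret(secret_data, "/etc/projected-secret-volume-1")
vol2 = mount_secret(secret_data, "/etc/projected-secret-volume-2")
```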
SSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  9 15:09:32.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-9867, will wait for the garbage collector to delete the pods
Jan  9 15:09:43.031: INFO: Deleting Job.batch foo took: 64.432246ms
Jan  9 15:09:43.332: INFO: Terminating Job.batch foo pods took: 300.444236ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  9 15:10:26.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9867" for this suite.
Jan  9 15:10:32.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 15:10:32.847: INFO: namespace job-9867 deletion completed in 6.188866559s

• [SLOW TEST:60.066 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
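The Job spec above deletes `Job.batch foo` and then waits for the garbage collector to remove its pods, which is why the log shows separate "Deleting" and "Terminating ... pods" timings. A toy model of owner-reference garbage collection (invented class names; the real controller tracks `metadata.ownerReferences` and cascading deletion policies, not a flat owner string):

```python
class ToyCluster:
    """Minimal owner-reference garbage collection: deleting an owner
    leaves its dependents behind until the collector reaps them."""
    def __init__(self):
        self.objects = {}                 # name -> owner name (or None)

    def create(self, name, owner=None):
        self.objects[name] = owner

    def delete(self, name):
        self.objects.pop(name, None)

    def collect_garbage(self):
        # Repeatedly remove objects whose owner no longer exists,
        # so chains of ownership collapse level by level.
        changed = True
        while changed:
            doomed = [n for n, o in self.objects.items()
                      if o is not None and o not in self.objects]
            for n in doomed:
                del self.objects[n]
            changed = bool(doomed)

c = ToyCluster()
c.create("job/foo")
for i in range(2):                        # active pods == parallelism
    c.create(f"pod/foo-{i}", owner="job/foo")

c.delete("job/foo")                       # job gone, pods still present...
print(sorted(c.objects))                  # ['pod/foo-0', 'pod/foo-1']
c.collect_garbage()                       # ...until GC notices the lost owner
print(sorted(c.objects))                  # []
```

The asynchronous gap between owner deletion and dependent cleanup is exactly what the test's "Ensuring job was deleted" wait covers.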
SSSSSSSSSSSSS
Jan  9 15:10:32.847: INFO: Running AfterSuite actions on all nodes
Jan  9 15:10:32.848: INFO: Running AfterSuite actions on node 1
Jan  9 15:10:32.848: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8058.720 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS