I0201 12:56:11.588700 8 e2e.go:243] Starting e2e run "6a02dc8e-b166-467e-9f0f-1642e32af73b" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580561770 - Will randomize all specs
Will run 215 of 4412 specs

Feb 1 12:56:12.091: INFO: >>> kubeConfig: /root/.kube/config
Feb 1 12:56:12.097: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 1 12:56:12.121: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 1 12:56:12.163: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 1 12:56:12.163: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 1 12:56:12.163: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 1 12:56:12.174: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 1 12:56:12.174: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 1 12:56:12.174: INFO: e2e test version: v1.15.7
Feb 1 12:56:12.176: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 12:56:12.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
Feb 1 12:56:12.295: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-bd171a7a-51e0-4cd2-b0af-5c7ab5be737e
Feb 1 12:56:12.308: INFO: Pod name my-hostname-basic-bd171a7a-51e0-4cd2-b0af-5c7ab5be737e: Found 0 pods out of 1
Feb 1 12:56:17.319: INFO: Pod name my-hostname-basic-bd171a7a-51e0-4cd2-b0af-5c7ab5be737e: Found 1 pods out of 1
Feb 1 12:56:17.319: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-bd171a7a-51e0-4cd2-b0af-5c7ab5be737e" are running
Feb 1 12:56:21.333: INFO: Pod "my-hostname-basic-bd171a7a-51e0-4cd2-b0af-5c7ab5be737e-tx5hr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-01 12:56:12 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-01 12:56:12 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-bd171a7a-51e0-4cd2-b0af-5c7ab5be737e]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-01 12:56:12 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-bd171a7a-51e0-4cd2-b0af-5c7ab5be737e]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-01 12:56:12 +0000 UTC Reason: Message:}])
Feb 1 12:56:21.334: INFO: Trying to dial the pod
Feb 1 12:56:26.386: INFO: Controller my-hostname-basic-bd171a7a-51e0-4cd2-b0af-5c7ab5be737e: Got expected result from replica 1 [my-hostname-basic-bd171a7a-51e0-4cd2-b0af-5c7ab5be737e-tx5hr]: "my-hostname-basic-bd171a7a-51e0-4cd2-b0af-5c7ab5be737e-tx5hr", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 12:56:26.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6347" for this suite.
Feb 1 12:56:32.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 12:56:32.638: INFO: namespace replication-controller-6347 deletion completed in 6.228319501s

• [SLOW TEST:20.461 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 12:56:32.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0201 12:56:45.806152 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 1 12:56:45.806: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 12:56:45.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3449" for this suite.
Feb 1 12:57:08.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 12:57:08.145: INFO: namespace gc-3449 deletion completed in 22.331394157s

• [SLOW TEST:35.507 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 12:57:08.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 1 12:57:08.409: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 12:57:38.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1686" for this suite.
Feb 1 12:58:01.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 12:58:01.095: INFO: namespace init-container-1686 deletion completed in 22.138081059s

• [SLOW TEST:52.949 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath
Atomic writer volumes
should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 12:58:01.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-2pzk
STEP: Creating a pod to test atomic-volume-subpath
Feb 1 12:58:01.257: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-2pzk" in namespace "subpath-6726" to be "success or failure"
Feb 1 12:58:01.270: INFO: Pod "pod-subpath-test-projected-2pzk": Phase="Pending", Reason="", readiness=false. Elapsed: 12.66371ms
Feb 1 12:58:03.286: INFO: Pod "pod-subpath-test-projected-2pzk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028710026s
Feb 1 12:58:05.293: INFO: Pod "pod-subpath-test-projected-2pzk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03543179s
Feb 1 12:58:07.317: INFO: Pod "pod-subpath-test-projected-2pzk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060128473s
Feb 1 12:58:09.326: INFO: Pod "pod-subpath-test-projected-2pzk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069336502s
Feb 1 12:58:11.336: INFO: Pod "pod-subpath-test-projected-2pzk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.078592391s
Feb 1 12:58:13.346: INFO: Pod "pod-subpath-test-projected-2pzk": Phase="Running", Reason="", readiness=true. Elapsed: 12.089161108s
Feb 1 12:58:15.358: INFO: Pod "pod-subpath-test-projected-2pzk": Phase="Running", Reason="", readiness=true. Elapsed: 14.100466023s
Feb 1 12:58:17.364: INFO: Pod "pod-subpath-test-projected-2pzk": Phase="Running", Reason="", readiness=true. Elapsed: 16.106786233s
Feb 1 12:58:19.374: INFO: Pod "pod-subpath-test-projected-2pzk": Phase="Running", Reason="", readiness=true. Elapsed: 18.116444341s
Feb 1 12:58:21.383: INFO: Pod "pod-subpath-test-projected-2pzk": Phase="Running", Reason="", readiness=true. Elapsed: 20.125565811s
Feb 1 12:58:23.392: INFO: Pod "pod-subpath-test-projected-2pzk": Phase="Running", Reason="", readiness=true. Elapsed: 22.135295313s
Feb 1 12:58:25.405: INFO: Pod "pod-subpath-test-projected-2pzk": Phase="Running", Reason="", readiness=true. Elapsed: 24.14752196s
Feb 1 12:58:27.413: INFO: Pod "pod-subpath-test-projected-2pzk": Phase="Running", Reason="", readiness=true. Elapsed: 26.155698183s
Feb 1 12:58:29.426: INFO: Pod "pod-subpath-test-projected-2pzk": Phase="Running", Reason="", readiness=true. Elapsed: 28.169160501s
Feb 1 12:58:31.434: INFO: Pod "pod-subpath-test-projected-2pzk": Phase="Running", Reason="", readiness=true. Elapsed: 30.176895393s
Feb 1 12:58:33.440: INFO: Pod "pod-subpath-test-projected-2pzk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.183093244s
STEP: Saw pod success
Feb 1 12:58:33.440: INFO: Pod "pod-subpath-test-projected-2pzk" satisfied condition "success or failure"
Feb 1 12:58:33.443: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-2pzk container test-container-subpath-projected-2pzk:
STEP: delete the pod
Feb 1 12:58:33.525: INFO: Waiting for pod pod-subpath-test-projected-2pzk to disappear
Feb 1 12:58:33.538: INFO: Pod pod-subpath-test-projected-2pzk no longer exists
STEP: Deleting pod pod-subpath-test-projected-2pzk
Feb 1 12:58:33.538: INFO: Deleting pod "pod-subpath-test-projected-2pzk" in namespace "subpath-6726"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 12:58:33.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6726" for this suite.
Feb 1 12:58:39.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 12:58:39.726: INFO: namespace subpath-6726 deletion completed in 6.1841749s • [SLOW TEST:38.630 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 12:58:39.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-2800 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-2800 STEP: Deleting pre-stop pod Feb 1 12:59:05.053: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 12:59:05.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-2800" for this suite. Feb 1 12:59:43.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 12:59:43.219: INFO: namespace prestop-2800 deletion completed in 38.135798827s • [SLOW TEST:63.491 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 12:59:43.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and 
TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 1 12:59:53.421: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 12:59:53.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6918" for this suite. Feb 1 12:59:59.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 12:59:59.753: INFO: namespace container-runtime-6918 deletion completed in 6.252232653s • [SLOW TEST:16.534 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 12:59:59.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1355.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1355.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1355.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1355.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1355.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1355.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 1 13:00:13.951: INFO: Unable to read wheezy_udp@PodARecord from pod dns-1355/dns-test-ca5948b0-2fb6-48e5-9555-25cb30e9dec8: the server could not find the requested resource (get pods dns-test-ca5948b0-2fb6-48e5-9555-25cb30e9dec8) Feb 1 13:00:13.957: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-1355/dns-test-ca5948b0-2fb6-48e5-9555-25cb30e9dec8: the server could not find the requested resource (get pods dns-test-ca5948b0-2fb6-48e5-9555-25cb30e9dec8) Feb 1 13:00:13.966: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-1355.svc.cluster.local from pod dns-1355/dns-test-ca5948b0-2fb6-48e5-9555-25cb30e9dec8: the server could not find the requested resource (get pods dns-test-ca5948b0-2fb6-48e5-9555-25cb30e9dec8) Feb 1 13:00:13.975: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-1355/dns-test-ca5948b0-2fb6-48e5-9555-25cb30e9dec8: the server could not find the requested resource (get pods dns-test-ca5948b0-2fb6-48e5-9555-25cb30e9dec8) Feb 1 13:00:13.984: INFO: Unable to read jessie_udp@PodARecord from pod dns-1355/dns-test-ca5948b0-2fb6-48e5-9555-25cb30e9dec8: the server could not find the requested resource (get pods dns-test-ca5948b0-2fb6-48e5-9555-25cb30e9dec8) Feb 1 13:00:13.992: INFO: Unable to read jessie_tcp@PodARecord from pod dns-1355/dns-test-ca5948b0-2fb6-48e5-9555-25cb30e9dec8: the server could not find the requested resource (get pods dns-test-ca5948b0-2fb6-48e5-9555-25cb30e9dec8) Feb 1 13:00:13.992: INFO: Lookups using 
dns-1355/dns-test-ca5948b0-2fb6-48e5-9555-25cb30e9dec8 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-1355.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord] Feb 1 13:00:19.085: INFO: DNS probes using dns-1355/dns-test-ca5948b0-2fb6-48e5-9555-25cb30e9dec8 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:00:19.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1355" for this suite. Feb 1 13:00:27.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:00:27.588: INFO: namespace dns-1355 deletion completed in 8.357116025s • [SLOW TEST:27.833 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:00:27.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-5516f2cd-1875-4953-8002-1bd11bc81b4d STEP: Creating a pod to test consume configMaps Feb 1 13:00:27.748: INFO: Waiting up to 5m0s for pod "pod-configmaps-9bb0a6ac-19c5-4e46-b080-aaa70729b48d" in namespace "configmap-2159" to be "success or failure" Feb 1 13:00:27.752: INFO: Pod "pod-configmaps-9bb0a6ac-19c5-4e46-b080-aaa70729b48d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.770055ms Feb 1 13:00:29.775: INFO: Pod "pod-configmaps-9bb0a6ac-19c5-4e46-b080-aaa70729b48d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026973433s Feb 1 13:00:31.801: INFO: Pod "pod-configmaps-9bb0a6ac-19c5-4e46-b080-aaa70729b48d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052345217s Feb 1 13:00:33.907: INFO: Pod "pod-configmaps-9bb0a6ac-19c5-4e46-b080-aaa70729b48d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.158484525s Feb 1 13:00:35.916: INFO: Pod "pod-configmaps-9bb0a6ac-19c5-4e46-b080-aaa70729b48d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.167626088s Feb 1 13:00:37.925: INFO: Pod "pod-configmaps-9bb0a6ac-19c5-4e46-b080-aaa70729b48d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.176685295s STEP: Saw pod success Feb 1 13:00:37.925: INFO: Pod "pod-configmaps-9bb0a6ac-19c5-4e46-b080-aaa70729b48d" satisfied condition "success or failure" Feb 1 13:00:37.935: INFO: Trying to get logs from node iruya-node pod pod-configmaps-9bb0a6ac-19c5-4e46-b080-aaa70729b48d container configmap-volume-test: STEP: delete the pod Feb 1 13:00:38.004: INFO: Waiting for pod pod-configmaps-9bb0a6ac-19c5-4e46-b080-aaa70729b48d to disappear Feb 1 13:00:38.012: INFO: Pod pod-configmaps-9bb0a6ac-19c5-4e46-b080-aaa70729b48d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:00:38.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2159" for this suite. Feb 1 13:00:44.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:00:44.133: INFO: namespace configmap-2159 deletion completed in 6.11091348s • [SLOW TEST:16.545 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:00:44.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a 
default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Feb 1 13:00:44.346: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-8114,SelfLink:/api/v1/namespaces/watch-8114/configmaps/e2e-watch-test-resource-version,UID:d75baa63-91a6-4555-8d48-a1ac458bb078,ResourceVersion:22687102,Generation:0,CreationTimestamp:2020-02-01 13:00:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 1 13:00:44.346: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-8114,SelfLink:/api/v1/namespaces/watch-8114/configmaps/e2e-watch-test-resource-version,UID:d75baa63-91a6-4555-8d48-a1ac458bb078,ResourceVersion:22687103,Generation:0,CreationTimestamp:2020-02-01 13:00:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 13:00:44.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8114" for this suite.
Feb 1 13:00:50.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 13:00:50.596: INFO: namespace watch-8114 deletion completed in 6.238507262s

• [SLOW TEST:6.463 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 13:00:50.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-f1f144a8-d869-4c67-b4f3-8cac20552e63
STEP: Creating a pod to test consume configMaps
Feb 1 13:00:50.770: INFO: Waiting up to 5m0s for pod "pod-configmaps-4723c555-a191-4507-8830-3d7c414b11bf" in namespace "configmap-9806" to be "success or failure"
Feb 1 13:00:50.787: INFO: Pod "pod-configmaps-4723c555-a191-4507-8830-3d7c414b11bf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.045605ms
Feb 1 13:00:52.792: INFO: Pod "pod-configmaps-4723c555-a191-4507-8830-3d7c414b11bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021505191s
Feb 1 13:00:54.800: INFO: Pod "pod-configmaps-4723c555-a191-4507-8830-3d7c414b11bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029730082s
Feb 1 13:00:56.807: INFO: Pod "pod-configmaps-4723c555-a191-4507-8830-3d7c414b11bf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036747014s
Feb 1 13:00:58.833: INFO: Pod "pod-configmaps-4723c555-a191-4507-8830-3d7c414b11bf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062173767s
Feb 1 13:01:00.842: INFO: Pod "pod-configmaps-4723c555-a191-4507-8830-3d7c414b11bf": Phase="Running", Reason="", readiness=true. Elapsed: 10.071096467s
Feb 1 13:01:02.857: INFO: Pod "pod-configmaps-4723c555-a191-4507-8830-3d7c414b11bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.08595362s
STEP: Saw pod success
Feb 1 13:01:02.857: INFO: Pod "pod-configmaps-4723c555-a191-4507-8830-3d7c414b11bf" satisfied condition "success or failure"
Feb 1 13:01:02.865: INFO: Trying to get logs from node iruya-node pod pod-configmaps-4723c555-a191-4507-8830-3d7c414b11bf container configmap-volume-test:
STEP: delete the pod
Feb 1 13:01:03.086: INFO: Waiting for pod pod-configmaps-4723c555-a191-4507-8830-3d7c414b11bf to disappear
Feb 1 13:01:03.100: INFO: Pod pod-configmaps-4723c555-a191-4507-8830-3d7c414b11bf no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 13:01:03.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9806" for this suite.
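The test above mounts a ConfigMap into a pod as a volume with key-to-path mappings and runs the container as a non-root user. A minimal sketch of such a manifest, assuming illustrative names: the ConfigMap key, path, image, and UID below are hypothetical, not taken from the log (the suite generates UUID-suffixed names and uses its own test images):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map      # the test appends a generated UUID suffix
data:
  data-2: value-2                      # hypothetical key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                    # non-root, per the [LinuxOnly] non-root variant
  containers:
  - name: configmap-volume-test        # container name matches the log
    image: busybox                     # hypothetical; the e2e suite uses its own test image
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:                           # the "mappings": remap a key to a custom path
      - key: data-2
        path: path/to/data-2
```

The framework then waits for the pod to reach Succeeded (the "success or failure" condition in the log) and reads the container log to verify the projected file contents.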
Feb 1 13:01:11.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 13:01:11.226: INFO: namespace configmap-9806 deletion completed in 8.116868192s

• [SLOW TEST:20.629 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 13:01:11.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 1 13:01:11.336: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 1 13:01:11.374: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 1 13:01:16.391: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 1 13:01:20.405: INFO: Creating deployment "test-rolling-update-deployment"
Feb 1 13:01:20.416: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from
the one the adopted replica set "test-rolling-update-controller" has Feb 1 13:01:20.432: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Feb 1 13:01:22.445: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Feb 1 13:01:22.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716158880, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716158880, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716158880, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716158880, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 1 13:01:24.467: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716158880, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716158880, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716158880, loc:(*time.Location)(0x7ea48a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716158880, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 1 13:01:26.461: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716158880, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716158880, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716158880, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716158880, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 1 13:01:28.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716158880, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716158880, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716158880, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716158880, 
loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 1 13:01:30.462: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 1 13:01:30.478: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-2973,SelfLink:/apis/apps/v1/namespaces/deployment-2973/deployments/test-rolling-update-deployment,UID:38bde95c-8189-4697-b0f3-142e757fc8e0,ResourceVersion:22687241,Generation:1,CreationTimestamp:2020-02-01 13:01:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-01 13:01:20 +0000 UTC 2020-02-01 13:01:20 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-01 13:01:29 +0000 UTC 2020-02-01 13:01:20 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 1 13:01:30.484: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-2973,SelfLink:/apis/apps/v1/namespaces/deployment-2973/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:40624654-1761-418b-9af3-4baad38b96e4,ResourceVersion:22687230,Generation:1,CreationTimestamp:2020-02-01 13:01:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 38bde95c-8189-4697-b0f3-142e757fc8e0 0xc002782337 0xc002782338}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 1 13:01:30.484: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Feb 1 13:01:30.484: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-2973,SelfLink:/apis/apps/v1/namespaces/deployment-2973/replicasets/test-rolling-update-controller,UID:83f54da3-3c92-425c-95d0-e1620051fa4c,ResourceVersion:22687239,Generation:2,CreationTimestamp:2020-02-01 13:01:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 38bde95c-8189-4697-b0f3-142e757fc8e0 0xc002782267 0xc002782268}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 1 13:01:30.490: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-9dchp" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-9dchp,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-2973,SelfLink:/api/v1/namespaces/deployment-2973/pods/test-rolling-update-deployment-79f6b9d75c-9dchp,UID:75722585-5354-4287-be35-3691aa28d635,ResourceVersion:22687229,Generation:0,CreationTimestamp:2020-02-01 13:01:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 40624654-1761-418b-9af3-4baad38b96e4 0xc002782c37 0xc002782c38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-p5zp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p5zp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-p5zp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002782cb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002782cd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:01:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:01:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:01:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:01:20 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-01 13:01:20 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-01 13:01:28 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://4ea09d4c17b60f06fc5d83c9f315381b3112c45d14882eb793954358632e26b3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:01:30.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "deployment-2973" for this suite.
Feb 1 13:01:36.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 13:01:36.674: INFO: namespace deployment-2973 deletion completed in 6.176498232s

• [SLOW TEST:25.448 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 13:01:36.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-ec81b0df-7fc8-464c-b33c-dc1a2385930e
STEP: Creating a pod to test consume secrets
Feb 1 13:01:37.082: INFO: Waiting up to 5m0s for pod "pod-secrets-79979ce4-f97b-4f00-ae0a-1df1dd852fa5" in namespace "secrets-9280" to be "success or failure"
Feb 1 13:01:37.118: INFO: Pod "pod-secrets-79979ce4-f97b-4f00-ae0a-1df1dd852fa5": Phase="Pending", Reason="", readiness=false. Elapsed: 35.783547ms
Feb 1 13:01:39.127: INFO: Pod "pod-secrets-79979ce4-f97b-4f00-ae0a-1df1dd852fa5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045373797s
Feb 1 13:01:41.146: INFO: Pod "pod-secrets-79979ce4-f97b-4f00-ae0a-1df1dd852fa5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064136161s
Feb 1 13:01:43.160: INFO: Pod "pod-secrets-79979ce4-f97b-4f00-ae0a-1df1dd852fa5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07805031s
Feb 1 13:01:45.356: INFO: Pod "pod-secrets-79979ce4-f97b-4f00-ae0a-1df1dd852fa5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.273692654s
Feb 1 13:01:47.370: INFO: Pod "pod-secrets-79979ce4-f97b-4f00-ae0a-1df1dd852fa5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.28809464s
Feb 1 13:01:49.380: INFO: Pod "pod-secrets-79979ce4-f97b-4f00-ae0a-1df1dd852fa5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.297944392s
Feb 1 13:01:51.387: INFO: Pod "pod-secrets-79979ce4-f97b-4f00-ae0a-1df1dd852fa5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.305418934s
STEP: Saw pod success
Feb 1 13:01:51.387: INFO: Pod "pod-secrets-79979ce4-f97b-4f00-ae0a-1df1dd852fa5" satisfied condition "success or failure"
Feb 1 13:01:51.393: INFO: Trying to get logs from node iruya-node pod pod-secrets-79979ce4-f97b-4f00-ae0a-1df1dd852fa5 container secret-volume-test:
STEP: delete the pod
Feb 1 13:01:51.620: INFO: Waiting for pod pod-secrets-79979ce4-f97b-4f00-ae0a-1df1dd852fa5 to disappear
Feb 1 13:01:51.628: INFO: Pod pod-secrets-79979ce4-f97b-4f00-ae0a-1df1dd852fa5 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 13:01:51.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9280" for this suite.
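This test creates secrets with the same name in two different namespaces (the log later destroys both "secrets-9280" and "secret-namespace-2109") and verifies the pod mounts the copy from its own namespace. A sketch under that assumption; the secret name, key, and image below are illustrative, not the generated names from the log:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
  namespace: secrets-9280              # the pod's namespace (from the log)
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Secret
metadata:
  name: secret-test                    # same name, different namespace
  namespace: secret-namespace-2109     # the second namespace destroyed in the log
stringData:
  data-1: other-value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
  namespace: secrets-9280
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test           # container name matches the log
    image: busybox                     # hypothetical; the e2e suite uses its own test image
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test          # resolved within the pod's own namespace only
```

Because a `secretName` reference is always resolved in the pod's namespace, the identically named secret in the other namespace cannot interfere, which is exactly what the test asserts.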
Feb 1 13:01:57.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 13:01:57.836: INFO: namespace secrets-9280 deletion completed in 6.201394662s
STEP: Destroying namespace "secret-namespace-2109" for this suite.
Feb 1 13:02:03.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 13:02:03.958: INFO: namespace secret-namespace-2109 deletion completed in 6.121568711s

• [SLOW TEST:27.283 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 13:02:03.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Feb 1 13:02:04.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6823'
Feb 1 13:02:06.714: INFO: stderr: ""
Feb 1 13:02:06.714: INFO: stdout: "pod/pause created\n"
Feb 1 13:02:06.714: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb 1 13:02:06.715: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6823" to be "running and ready"
Feb 1 13:02:06.774: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 58.797314ms
Feb 1 13:02:08.788: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071963599s
Feb 1 13:02:10.807: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091593029s
Feb 1 13:02:12.821: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105063926s
Feb 1 13:02:14.837: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121141454s
Feb 1 13:02:16.856: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.140724621s
Feb 1 13:02:16.857: INFO: Pod "pause" satisfied condition "running and ready"
Feb 1 13:02:16.857: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Feb 1 13:02:16.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6823'
Feb 1 13:02:17.021: INFO: stderr: ""
Feb 1 13:02:17.021: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb 1 13:02:17.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6823'
Feb 1 13:02:17.178: INFO: stderr: ""
Feb 1 13:02:17.178: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb 1 13:02:17.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6823'
Feb 1 13:02:17.387: INFO: stderr: ""
Feb 1 13:02:17.387: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb 1 13:02:17.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6823'
Feb 1 13:02:17.541: INFO: stderr: ""
Feb 1 13:02:17.541: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s \n"
[AfterEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Feb 1 13:02:17.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6823'
Feb 1 13:02:17.679: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 1 13:02:17.679: INFO: stdout: "pod \"pause\" force deleted\n"
Feb 1 13:02:17.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6823'
Feb 1 13:02:17.880: INFO: stderr: "No resources found.\n"
Feb 1 13:02:17.880: INFO: stdout: ""
Feb 1 13:02:17.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6823 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 1 13:02:17.987: INFO: stderr: ""
Feb 1 13:02:17.987: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 13:02:17.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6823" for this suite.
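The `kubectl label pods pause testing-label=testing-label-value` and `kubectl label pods pause testing-label-` invocations in the log mutate the pod's `metadata.labels` in place (a trailing `-` on the key removes it). The labeled state the test verifies is equivalent to declaring the label on the pod manifest; the image below is an assumption, since the test pipes its own manifest to `create -f -`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pause
  namespace: kubectl-6823
  labels:
    testing-label: testing-label-value   # what `kubectl label ... testing-label=testing-label-value` adds;
                                         # `kubectl label ... testing-label-` deletes this key again
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1          # hypothetical image for the pause pod
```

The `-L testing-label` flag used for verification adds a TESTING-LABEL column to `kubectl get pod` output, which is why the second check shows the column empty after removal.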
Feb 1 13:02:24.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 13:02:24.129: INFO: namespace kubectl-6823 deletion completed in 6.132821189s

• [SLOW TEST:20.169 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 13:02:24.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-4f6ac1e8-bf43-42ff-bb14-848e81a20832
STEP: Creating secret with name s-test-opt-upd-ab645b1c-2e12-4305-ac43-876aa3508028
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-4f6ac1e8-bf43-42ff-bb14-848e81a20832
STEP: Updating secret s-test-opt-upd-ab645b1c-2e12-4305-ac43-876aa3508028
STEP: Creating secret with name s-test-opt-create-e7d79514-e43b-4acf-82e1-48aa5d5275c1
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 13:02:44.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4303" for this suite.
Feb 1 13:03:06.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 13:03:06.960: INFO: namespace secrets-4303 deletion completed in 22.131464798s

• [SLOW TEST:42.830 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 13:03:06.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 1 13:03:07.077: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f466b241-f341-4468-94a9-bdbaf2416a81" in namespace "projected-4073" to be "success or failure"
Feb 1 13:03:07.082: INFO: Pod "downwardapi-volume-f466b241-f341-4468-94a9-bdbaf2416a81": Phase="Pending", Reason="", readiness=false. Elapsed: 4.976593ms
Feb 1 13:03:09.089: INFO: Pod "downwardapi-volume-f466b241-f341-4468-94a9-bdbaf2416a81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012279702s
Feb 1 13:03:11.099: INFO: Pod "downwardapi-volume-f466b241-f341-4468-94a9-bdbaf2416a81": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022190408s
Feb 1 13:03:13.107: INFO: Pod "downwardapi-volume-f466b241-f341-4468-94a9-bdbaf2416a81": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030315121s
Feb 1 13:03:15.114: INFO: Pod "downwardapi-volume-f466b241-f341-4468-94a9-bdbaf2416a81": Phase="Pending", Reason="", readiness=false. Elapsed: 8.037508371s
Feb 1 13:03:17.125: INFO: Pod "downwardapi-volume-f466b241-f341-4468-94a9-bdbaf2416a81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.047786571s
STEP: Saw pod success
Feb 1 13:03:17.125: INFO: Pod "downwardapi-volume-f466b241-f341-4468-94a9-bdbaf2416a81" satisfied condition "success or failure"
Feb 1 13:03:17.129: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f466b241-f341-4468-94a9-bdbaf2416a81 container client-container:
STEP: delete the pod
Feb 1 13:03:17.254: INFO: Waiting for pod downwardapi-volume-f466b241-f341-4468-94a9-bdbaf2416a81 to disappear
Feb 1 13:03:17.264: INFO: Pod downwardapi-volume-f466b241-f341-4468-94a9-bdbaf2416a81 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 13:03:17.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4073" for this suite.
Feb 1 13:03:23.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:03:23.748: INFO: namespace projected-4073 deletion completed in 6.47679823s • [SLOW TEST:16.788 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:03:23.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 1 13:03:23.884: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Feb 1 13:03:26.179: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:03:26.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6305" for this suite. Feb 1 13:03:37.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:03:37.638: INFO: namespace replication-controller-6305 deletion completed in 11.004381456s • [SLOW TEST:13.889 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:03:37.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Feb 1 13:03:37.885: INFO: Waiting up to 5m0s for pod "var-expansion-766960c8-2efb-4ead-a220-4cfa919ba656" in namespace "var-expansion-9288" to be "success or failure" Feb 1 13:03:37.903: INFO: Pod "var-expansion-766960c8-2efb-4ead-a220-4cfa919ba656": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.14103ms Feb 1 13:03:39.918: INFO: Pod "var-expansion-766960c8-2efb-4ead-a220-4cfa919ba656": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032829383s Feb 1 13:03:41.926: INFO: Pod "var-expansion-766960c8-2efb-4ead-a220-4cfa919ba656": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040385446s Feb 1 13:03:43.938: INFO: Pod "var-expansion-766960c8-2efb-4ead-a220-4cfa919ba656": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052817847s Feb 1 13:03:45.951: INFO: Pod "var-expansion-766960c8-2efb-4ead-a220-4cfa919ba656": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065179602s Feb 1 13:03:48.011: INFO: Pod "var-expansion-766960c8-2efb-4ead-a220-4cfa919ba656": Phase="Pending", Reason="", readiness=false. Elapsed: 10.125031021s Feb 1 13:03:50.018: INFO: Pod "var-expansion-766960c8-2efb-4ead-a220-4cfa919ba656": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.132015667s STEP: Saw pod success Feb 1 13:03:50.018: INFO: Pod "var-expansion-766960c8-2efb-4ead-a220-4cfa919ba656" satisfied condition "success or failure" Feb 1 13:03:50.021: INFO: Trying to get logs from node iruya-node pod var-expansion-766960c8-2efb-4ead-a220-4cfa919ba656 container dapi-container: STEP: delete the pod Feb 1 13:03:50.099: INFO: Waiting for pod var-expansion-766960c8-2efb-4ead-a220-4cfa919ba656 to disappear Feb 1 13:03:50.108: INFO: Pod var-expansion-766960c8-2efb-4ead-a220-4cfa919ba656 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:03:50.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9288" for this suite. 
Feb 1 13:03:56.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:03:56.302: INFO: namespace var-expansion-9288 deletion completed in 6.164884268s • [SLOW TEST:18.663 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:03:56.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-232c4a65-23c8-43c1-9836-543132f2ae11 STEP: Creating a pod to test consume configMaps Feb 1 13:03:56.409: INFO: Waiting up to 5m0s for pod "pod-configmaps-c22a0b42-1150-48aa-979a-3f7a5e64e22f" in namespace "configmap-279" to be "success or failure" Feb 1 13:03:56.421: INFO: Pod "pod-configmaps-c22a0b42-1150-48aa-979a-3f7a5e64e22f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.784257ms Feb 1 13:03:58.435: INFO: Pod "pod-configmaps-c22a0b42-1150-48aa-979a-3f7a5e64e22f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025970435s Feb 1 13:04:00.450: INFO: Pod "pod-configmaps-c22a0b42-1150-48aa-979a-3f7a5e64e22f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041350953s Feb 1 13:04:02.467: INFO: Pod "pod-configmaps-c22a0b42-1150-48aa-979a-3f7a5e64e22f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05814453s Feb 1 13:04:04.477: INFO: Pod "pod-configmaps-c22a0b42-1150-48aa-979a-3f7a5e64e22f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068372825s Feb 1 13:04:06.498: INFO: Pod "pod-configmaps-c22a0b42-1150-48aa-979a-3f7a5e64e22f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.089123031s STEP: Saw pod success Feb 1 13:04:06.498: INFO: Pod "pod-configmaps-c22a0b42-1150-48aa-979a-3f7a5e64e22f" satisfied condition "success or failure" Feb 1 13:04:06.504: INFO: Trying to get logs from node iruya-node pod pod-configmaps-c22a0b42-1150-48aa-979a-3f7a5e64e22f container configmap-volume-test: STEP: delete the pod Feb 1 13:04:06.780: INFO: Waiting for pod pod-configmaps-c22a0b42-1150-48aa-979a-3f7a5e64e22f to disappear Feb 1 13:04:06.791: INFO: Pod pod-configmaps-c22a0b42-1150-48aa-979a-3f7a5e64e22f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:04:06.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-279" for this suite. 
Feb 1 13:04:12.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:04:12.981: INFO: namespace configmap-279 deletion completed in 6.177087963s • [SLOW TEST:16.678 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:04:12.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Feb 1 13:04:13.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4331' Feb 1 13:04:13.456: INFO: stderr: "" Feb 1 13:04:13.457: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. 
Feb 1 13:04:14.482: INFO: Selector matched 1 pods for map[app:redis] Feb 1 13:04:14.482: INFO: Found 0 / 1 Feb 1 13:04:15.475: INFO: Selector matched 1 pods for map[app:redis] Feb 1 13:04:15.475: INFO: Found 0 / 1 Feb 1 13:04:16.470: INFO: Selector matched 1 pods for map[app:redis] Feb 1 13:04:16.470: INFO: Found 0 / 1 Feb 1 13:04:17.464: INFO: Selector matched 1 pods for map[app:redis] Feb 1 13:04:17.464: INFO: Found 0 / 1 Feb 1 13:04:18.474: INFO: Selector matched 1 pods for map[app:redis] Feb 1 13:04:18.474: INFO: Found 0 / 1 Feb 1 13:04:19.471: INFO: Selector matched 1 pods for map[app:redis] Feb 1 13:04:19.471: INFO: Found 0 / 1 Feb 1 13:04:20.471: INFO: Selector matched 1 pods for map[app:redis] Feb 1 13:04:20.472: INFO: Found 0 / 1 Feb 1 13:04:21.465: INFO: Selector matched 1 pods for map[app:redis] Feb 1 13:04:21.465: INFO: Found 0 / 1 Feb 1 13:04:22.476: INFO: Selector matched 1 pods for map[app:redis] Feb 1 13:04:22.476: INFO: Found 0 / 1 Feb 1 13:04:23.469: INFO: Selector matched 1 pods for map[app:redis] Feb 1 13:04:23.469: INFO: Found 0 / 1 Feb 1 13:04:24.475: INFO: Selector matched 1 pods for map[app:redis] Feb 1 13:04:24.475: INFO: Found 1 / 1 Feb 1 13:04:24.475: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 1 13:04:24.485: INFO: Selector matched 1 pods for map[app:redis] Feb 1 13:04:24.485: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Feb 1 13:04:24.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-pmpmg redis-master --namespace=kubectl-4331' Feb 1 13:04:24.632: INFO: stderr: "" Feb 1 13:04:24.632: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 Feb 13:04:23.119 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Feb 13:04:23.120 # Server started, Redis version 3.2.12\n1:M 01 Feb 13:04:23.121 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 Feb 13:04:23.121 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Feb 1 13:04:24.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-pmpmg redis-master --namespace=kubectl-4331 --tail=1' Feb 1 13:04:24.736: INFO: stderr: "" Feb 1 13:04:24.736: INFO: stdout: "1:M 01 Feb 13:04:23.121 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Feb 1 13:04:24.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-pmpmg redis-master --namespace=kubectl-4331 --limit-bytes=1' Feb 1 13:04:24.894: INFO: stderr: "" Feb 1 13:04:24.895: INFO: stdout: " " STEP: exposing timestamps Feb 1 13:04:24.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-pmpmg redis-master --namespace=kubectl-4331 --tail=1 --timestamps' Feb 1 13:04:25.027: INFO: stderr: "" Feb 1 13:04:25.027: INFO: 
stdout: "2020-02-01T13:04:23.12379489Z 1:M 01 Feb 13:04:23.121 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Feb 1 13:04:27.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-pmpmg redis-master --namespace=kubectl-4331 --since=1s' Feb 1 13:04:27.778: INFO: stderr: "" Feb 1 13:04:27.778: INFO: stdout: "" Feb 1 13:04:27.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-pmpmg redis-master --namespace=kubectl-4331 --since=24h' Feb 1 13:04:27.931: INFO: stderr: "" Feb 1 13:04:27.932: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 Feb 13:04:23.119 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Feb 13:04:23.120 # Server started, Redis version 3.2.12\n1:M 01 Feb 13:04:23.121 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 01 Feb 13:04:23.121 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Feb 1 13:04:27.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4331' Feb 1 13:04:28.047: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 1 13:04:28.047: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Feb 1 13:04:28.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-4331' Feb 1 13:04:28.148: INFO: stderr: "No resources found.\n" Feb 1 13:04:28.148: INFO: stdout: "" Feb 1 13:04:28.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-4331 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 1 13:04:28.281: INFO: stderr: "" Feb 1 13:04:28.281: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:04:28.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4331" for this suite. 
Feb 1 13:04:34.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:04:34.545: INFO: namespace kubectl-4331 deletion completed in 6.257981833s • [SLOW TEST:21.564 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:04:34.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:04:41.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6824" for this suite. Feb 1 13:04:47.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:04:47.381: INFO: namespace namespaces-6824 deletion completed in 6.196155267s STEP: Destroying namespace "nsdeletetest-8472" for this suite. Feb 1 13:04:47.386: INFO: Namespace nsdeletetest-8472 was already deleted STEP: Destroying namespace "nsdeletetest-6922" for this suite. Feb 1 13:04:53.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:04:53.543: INFO: namespace nsdeletetest-6922 deletion completed in 6.15626034s • [SLOW TEST:18.994 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:04:53.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 1 13:04:53.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-8462' Feb 1 13:04:53.905: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 1 13:04:53.905: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Feb 1 13:04:55.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8462' Feb 1 13:04:56.174: INFO: stderr: "" Feb 1 13:04:56.175: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:04:56.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8462" for this suite. 
Feb 1 13:05:02.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:05:02.371: INFO: namespace kubectl-8462 deletion completed in 6.190891055s • [SLOW TEST:8.826 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:05:02.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Feb 1 13:05:02.607: INFO: PodSpec: initContainers in spec.initContainers Feb 1 13:06:06.936: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"pod-init-330bcd22-8849-4e28-a2db-45c1ecd9e22a", GenerateName:"", Namespace:"init-container-2834", SelfLink:"/api/v1/namespaces/init-container-2834/pods/pod-init-330bcd22-8849-4e28-a2db-45c1ecd9e22a", UID:"9149bd20-7d2d-4161-9dbc-d6f4831a275d", ResourceVersion:"22687983", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716159102, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"607604532"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-lv6dj", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0027f6900), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), 
ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lv6dj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lv6dj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", 
Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lv6dj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002650da8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001e39e60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", 
Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002650e30)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002650e50)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002650e58), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002650e5c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716159102, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716159102, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716159102, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716159102, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", 
StartTime:(*v1.Time)(0xc002342960), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002652850)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0026528c0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://12e9e627ab45075857e3e3050aa4d3883045570b91f6322034a85dd5b59c684a"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0023429a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002342980), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:06:06.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2834" for this suite. 
Feb 1 13:06:28.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 13:06:29.037: INFO: namespace init-container-2834 deletion completed in 22.088320234s
• [SLOW TEST:86.662 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 13:06:29.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 1 13:06:29.137: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 13:06:30.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5366" for this suite.
Feb 1 13:06:36.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 13:06:36.469: INFO: namespace custom-resource-definition-5366 deletion completed in 6.180440605s
• [SLOW TEST:7.433 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 13:06:36.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 1 13:06:36.672: INFO: Number of nodes with available pods: 0
Feb 1 13:06:36.672: INFO: Node iruya-node is running more than one daemon pod
Feb 1 13:06:37.696: INFO: Number of nodes with available pods: 0
Feb 1 13:06:37.696: INFO: Node iruya-node is running more than one daemon pod
Feb 1 13:06:38.689: INFO: Number of nodes with available pods: 0
Feb 1 13:06:38.689: INFO: Node iruya-node is running more than one daemon pod
Feb 1 13:06:39.943: INFO: Number of nodes with available pods: 0
Feb 1 13:06:39.943: INFO: Node iruya-node is running more than one daemon pod
Feb 1 13:06:40.699: INFO: Number of nodes with available pods: 0
Feb 1 13:06:40.700: INFO: Node iruya-node is running more than one daemon pod
Feb 1 13:06:41.685: INFO: Number of nodes with available pods: 0
Feb 1 13:06:41.685: INFO: Node iruya-node is running more than one daemon pod
Feb 1 13:06:43.505: INFO: Number of nodes with available pods: 0
Feb 1 13:06:43.505: INFO: Node iruya-node is running more than one daemon pod
Feb 1 13:06:43.691: INFO: Number of nodes with available pods: 0
Feb 1 13:06:43.691: INFO: Node iruya-node is running more than one daemon pod
Feb 1 13:06:44.692: INFO: Number of nodes with available pods: 0
Feb 1 13:06:44.692: INFO: Node iruya-node is running more than one daemon pod
Feb 1 13:06:45.752: INFO: Number of nodes with available pods: 0
Feb 1 13:06:45.752: INFO: Node iruya-node is running more than one daemon pod
Feb 1 13:06:46.693: INFO: Number of nodes with available pods: 1
Feb 1 13:06:46.693: INFO: Node iruya-node is running more than one daemon pod
Feb 1 13:06:47.706: INFO: Number of nodes with available pods: 1
Feb 1 13:06:47.706: INFO: Node iruya-node is running more than one daemon pod
Feb 1 13:06:48.729: INFO: Number of nodes with available pods: 2
Feb 1 13:06:48.729: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 1 13:06:48.834: INFO: Number of nodes with available pods: 1
Feb 1 13:06:48.834: INFO: Node iruya-node is running more than one daemon pod
Feb 1 13:06:49.849: INFO: Number of nodes with available pods: 1
Feb 1 13:06:49.849: INFO: Node iruya-node is running more than one daemon pod
Feb 1 13:06:50.849: INFO: Number of nodes with available pods: 1
Feb 1 13:06:50.849: INFO: Node iruya-node is running more than one daemon pod
Feb 1 13:06:51.846: INFO: Number of nodes with available pods: 1
Feb 1 13:06:51.846: INFO: Node iruya-node is running more than one daemon pod
Feb 1 13:06:52.853: INFO: Number of nodes with available pods: 1
Feb 1 13:06:52.853: INFO: Node iruya-node is running more than one daemon pod
Feb 1 13:06:53.904: INFO: Number of nodes with available pods: 1
Feb 1 13:06:53.904: INFO: Node iruya-node is running more than one daemon pod
Feb 1 13:06:54.890: INFO: Number of nodes with available pods: 1
Feb 1 13:06:54.890: INFO: Node iruya-node is running more than one daemon pod
Feb 1 13:06:55.864: INFO: Number of nodes with available pods: 1
Feb 1 13:06:55.864: INFO: Node iruya-node is running more than one daemon pod
Feb 1 13:06:56.880: INFO: Number of nodes with available pods: 1
Feb 1 13:06:56.881: INFO: Node iruya-node is running more than one daemon pod
Feb 1 13:06:57.868: INFO: Number of nodes with available pods: 1
Feb 1 13:06:57.868: INFO: Node iruya-node is running more than one daemon pod
Feb 1 13:06:58.857: INFO: Number of nodes with available pods: 2
Feb 1 13:06:58.857: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3348, will wait for the garbage collector to delete the pods
Feb 1 13:06:58.936: INFO: Deleting DaemonSet.extensions daemon-set took: 15.687206ms
Feb 1 13:06:59.237: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.782716ms
Feb 1 13:07:07.916: INFO: Number of nodes with available pods: 0
Feb 1 13:07:07.916: INFO: Number of running nodes: 0, number of available pods: 0
Feb 1 13:07:07.923: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3348/daemonsets","resourceVersion":"22688155"},"items":null}
Feb 1 13:07:07.927: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3348/pods","resourceVersion":"22688155"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 13:07:07.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3348" for this suite.
Feb 1 13:07:13.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 13:07:14.106: INFO: namespace daemonsets-3348 deletion completed in 6.15705353s
• [SLOW TEST:37.634 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 13:07:14.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-6036
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 1 13:07:14.233: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 1 13:07:50.571: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-6036 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 1 13:07:50.571: INFO: >>>
kubeConfig: /root/.kube/config
I0201 13:07:50.688589 8 log.go:172] (0xc0024c22c0) (0xc001b58dc0) Create stream
I0201 13:07:50.688833 8 log.go:172] (0xc0024c22c0) (0xc001b58dc0) Stream added, broadcasting: 1
I0201 13:07:50.704713 8 log.go:172] (0xc0024c22c0) Reply frame received for 1
I0201 13:07:50.704813 8 log.go:172] (0xc0024c22c0) (0xc0012abb80) Create stream
I0201 13:07:50.704832 8 log.go:172] (0xc0024c22c0) (0xc0012abb80) Stream added, broadcasting: 3
I0201 13:07:50.708292 8 log.go:172] (0xc0024c22c0) Reply frame received for 3
I0201 13:07:50.708580 8 log.go:172] (0xc0024c22c0) (0xc0022540a0) Create stream
I0201 13:07:50.708605 8 log.go:172] (0xc0024c22c0) (0xc0022540a0) Stream added, broadcasting: 5
I0201 13:07:50.713273 8 log.go:172] (0xc0024c22c0) Reply frame received for 5
I0201 13:07:50.968395 8 log.go:172] (0xc0024c22c0) Data frame received for 3
I0201 13:07:50.968632 8 log.go:172] (0xc0012abb80) (3) Data frame handling
I0201 13:07:50.968688 8 log.go:172] (0xc0012abb80) (3) Data frame sent
I0201 13:07:51.234700 8 log.go:172] (0xc0024c22c0) (0xc0012abb80) Stream removed, broadcasting: 3
I0201 13:07:51.235061 8 log.go:172] (0xc0024c22c0) Data frame received for 1
I0201 13:07:51.235126 8 log.go:172] (0xc0024c22c0) (0xc0022540a0) Stream removed, broadcasting: 5
I0201 13:07:51.235267 8 log.go:172] (0xc001b58dc0) (1) Data frame handling
I0201 13:07:51.235325 8 log.go:172] (0xc001b58dc0) (1) Data frame sent
I0201 13:07:51.235360 8 log.go:172] (0xc0024c22c0) (0xc001b58dc0) Stream removed, broadcasting: 1
I0201 13:07:51.235402 8 log.go:172] (0xc0024c22c0) Go away received
I0201 13:07:51.236537 8 log.go:172] (0xc0024c22c0) (0xc001b58dc0) Stream removed, broadcasting: 1
I0201 13:07:51.236612 8 log.go:172] (0xc0024c22c0) (0xc0012abb80) Stream removed, broadcasting: 3
I0201 13:07:51.236642 8 log.go:172] (0xc0024c22c0) (0xc0022540a0) Stream removed, broadcasting: 5
Feb 1 13:07:51.237: INFO: Waiting for endpoints: map[]
Feb 1 13:07:51.248: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-6036 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 1 13:07:51.248: INFO: >>> kubeConfig: /root/.kube/config
I0201 13:07:51.336444 8 log.go:172] (0xc0024c2a50) (0xc001b590e0) Create stream
I0201 13:07:51.336627 8 log.go:172] (0xc0024c2a50) (0xc001b590e0) Stream added, broadcasting: 1
I0201 13:07:51.349594 8 log.go:172] (0xc0024c2a50) Reply frame received for 1
I0201 13:07:51.349659 8 log.go:172] (0xc0024c2a50) (0xc002254140) Create stream
I0201 13:07:51.349681 8 log.go:172] (0xc0024c2a50) (0xc002254140) Stream added, broadcasting: 3
I0201 13:07:51.352627 8 log.go:172] (0xc0024c2a50) Reply frame received for 3
I0201 13:07:51.352684 8 log.go:172] (0xc0024c2a50) (0xc0012abea0) Create stream
I0201 13:07:51.352710 8 log.go:172] (0xc0024c2a50) (0xc0012abea0) Stream added, broadcasting: 5
I0201 13:07:51.354723 8 log.go:172] (0xc0024c2a50) Reply frame received for 5
I0201 13:07:51.514003 8 log.go:172] (0xc0024c2a50) Data frame received for 3
I0201 13:07:51.514067 8 log.go:172] (0xc002254140) (3) Data frame handling
I0201 13:07:51.514112 8 log.go:172] (0xc002254140) (3) Data frame sent
I0201 13:07:51.654978 8 log.go:172] (0xc0024c2a50) Data frame received for 1
I0201 13:07:51.655064 8 log.go:172] (0xc0024c2a50) (0xc002254140) Stream removed, broadcasting: 3
I0201 13:07:51.655197 8 log.go:172] (0xc001b590e0) (1) Data frame handling
I0201 13:07:51.655232 8 log.go:172] (0xc001b590e0) (1) Data frame sent
I0201 13:07:51.655251 8 log.go:172] (0xc0024c2a50) (0xc001b590e0) Stream removed, broadcasting: 1
I0201 13:07:51.655825 8 log.go:172] (0xc0024c2a50) (0xc0012abea0) Stream removed, broadcasting: 5
I0201 13:07:51.655934 8 log.go:172] (0xc0024c2a50) Go away received
I0201 13:07:51.655965 8 log.go:172] (0xc0024c2a50) (0xc001b590e0) Stream removed, broadcasting: 1
I0201 13:07:51.655978 8 log.go:172] (0xc0024c2a50) (0xc002254140) Stream removed, broadcasting: 3
I0201 13:07:51.655987 8 log.go:172] (0xc0024c2a50) (0xc0012abea0) Stream removed, broadcasting: 5
Feb 1 13:07:51.656: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 13:07:51.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6036" for this suite.
Feb 1 13:08:17.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 13:08:17.859: INFO: namespace pod-network-test-6036 deletion completed in 26.19536688s
• [SLOW TEST:63.753 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 13:08:17.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-plbx
STEP: Creating a pod to test atomic-volume-subpath
Feb 1 13:08:18.016: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-plbx" in namespace "subpath-1378" to be "success or failure"
Feb 1 13:08:18.023: INFO: Pod "pod-subpath-test-configmap-plbx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.720786ms
Feb 1 13:08:20.048: INFO: Pod "pod-subpath-test-configmap-plbx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03168203s
Feb 1 13:08:22.056: INFO: Pod "pod-subpath-test-configmap-plbx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039974828s
Feb 1 13:08:24.069: INFO: Pod "pod-subpath-test-configmap-plbx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053442221s
Feb 1 13:08:26.076: INFO: Pod "pod-subpath-test-configmap-plbx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059660404s
Feb 1 13:08:28.083: INFO: Pod "pod-subpath-test-configmap-plbx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.067502814s
Feb 1 13:08:30.096: INFO: Pod "pod-subpath-test-configmap-plbx": Phase="Running", Reason="", readiness=true. Elapsed: 12.080409571s
Feb 1 13:08:32.110: INFO: Pod "pod-subpath-test-configmap-plbx": Phase="Running", Reason="", readiness=true. Elapsed: 14.094103545s
Feb 1 13:08:34.120: INFO: Pod "pod-subpath-test-configmap-plbx": Phase="Running", Reason="", readiness=true. Elapsed: 16.103955563s
Feb 1 13:08:36.134: INFO: Pod "pod-subpath-test-configmap-plbx": Phase="Running", Reason="", readiness=true. Elapsed: 18.118246719s
Feb 1 13:08:38.140: INFO: Pod "pod-subpath-test-configmap-plbx": Phase="Running", Reason="", readiness=true. Elapsed: 20.124181552s
Feb 1 13:08:40.150: INFO: Pod "pod-subpath-test-configmap-plbx": Phase="Running", Reason="", readiness=true. Elapsed: 22.133609606s
Feb 1 13:08:42.169: INFO: Pod "pod-subpath-test-configmap-plbx": Phase="Running", Reason="", readiness=true. Elapsed: 24.153270563s
Feb 1 13:08:44.197: INFO: Pod "pod-subpath-test-configmap-plbx": Phase="Running", Reason="", readiness=true. Elapsed: 26.180703261s
Feb 1 13:08:46.208: INFO: Pod "pod-subpath-test-configmap-plbx": Phase="Running", Reason="", readiness=true. Elapsed: 28.192394104s
Feb 1 13:08:48.217: INFO: Pod "pod-subpath-test-configmap-plbx": Phase="Running", Reason="", readiness=true. Elapsed: 30.201089542s
Feb 1 13:08:50.225: INFO: Pod "pod-subpath-test-configmap-plbx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.208850176s
STEP: Saw pod success
Feb 1 13:08:50.225: INFO: Pod "pod-subpath-test-configmap-plbx" satisfied condition "success or failure"
Feb 1 13:08:50.229: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-plbx container test-container-subpath-configmap-plbx:
STEP: delete the pod
Feb 1 13:08:50.608: INFO: Waiting for pod pod-subpath-test-configmap-plbx to disappear
Feb 1 13:08:50.618: INFO: Pod pod-subpath-test-configmap-plbx no longer exists
STEP: Deleting pod pod-subpath-test-configmap-plbx
Feb 1 13:08:50.618: INFO: Deleting pod "pod-subpath-test-configmap-plbx" in namespace "subpath-1378"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 13:08:50.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1378" for this suite.
Feb 1 13:08:56.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 13:08:56.752: INFO: namespace subpath-1378 deletion completed in 6.120235864s
• [SLOW TEST:38.892 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 13:08:56.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-29e68f15-84a6-492d-bf7f-03b243f6eb99
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-29e68f15-84a6-492d-bf7f-03b243f6eb99
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 13:09:07.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5509" for this suite.
Feb 1 13:09:29.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 13:09:29.353: INFO: namespace configmap-5509 deletion completed in 22.235030517s
• [SLOW TEST:32.601 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 13:09:29.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 1 13:09:29.443: INFO: Waiting up to 5m0s for pod "downward-api-f8aa9258-11be-48c6-90df-eacc4fbf1a77" in namespace "downward-api-9288" to be "success or failure"
Feb 1 13:09:29.463: INFO: Pod "downward-api-f8aa9258-11be-48c6-90df-eacc4fbf1a77": Phase="Pending", Reason="", readiness=false. Elapsed: 19.192444ms
Feb 1 13:09:31.472: INFO: Pod "downward-api-f8aa9258-11be-48c6-90df-eacc4fbf1a77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02850423s
Feb 1 13:09:33.480: INFO: Pod "downward-api-f8aa9258-11be-48c6-90df-eacc4fbf1a77": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036999597s
Feb 1 13:09:35.497: INFO: Pod "downward-api-f8aa9258-11be-48c6-90df-eacc4fbf1a77": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053711292s
Feb 1 13:09:37.508: INFO: Pod "downward-api-f8aa9258-11be-48c6-90df-eacc4fbf1a77": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064269717s
Feb 1 13:09:39.522: INFO: Pod "downward-api-f8aa9258-11be-48c6-90df-eacc4fbf1a77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079138626s
STEP: Saw pod success
Feb 1 13:09:39.523: INFO: Pod "downward-api-f8aa9258-11be-48c6-90df-eacc4fbf1a77" satisfied condition "success or failure"
Feb 1 13:09:39.530: INFO: Trying to get logs from node iruya-node pod downward-api-f8aa9258-11be-48c6-90df-eacc4fbf1a77 container dapi-container:
STEP: delete the pod
Feb 1 13:09:39.788: INFO: Waiting for pod downward-api-f8aa9258-11be-48c6-90df-eacc4fbf1a77 to disappear
Feb 1 13:09:39.802: INFO: Pod downward-api-f8aa9258-11be-48c6-90df-eacc4fbf1a77 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 13:09:39.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9288" for this suite.
Feb 1 13:09:45.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 13:09:46.166: INFO: namespace downward-api-9288 deletion completed in 6.3550665s
• [SLOW TEST:16.812 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 13:09:46.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 1 13:09:46.305: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 21.591081ms)
Feb 1 13:09:46.318: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 13.15702ms)
Feb 1 13:09:46.338: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 19.297906ms)
Feb 1 13:09:46.502: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 163.932229ms)
Feb 1 13:09:46.519: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 17.047036ms)
Feb 1 13:09:46.532: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 12.860143ms)
Feb 1 13:09:46.553: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 20.586661ms)
Feb 1 13:09:46.561: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.990754ms)
Feb 1 13:09:46.568: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.031632ms)
Feb 1 13:09:46.574: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.158774ms)
Feb 1 13:09:46.581: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.645588ms)
Feb 1 13:09:46.593: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 12.271659ms)
Feb 1 13:09:46.615: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 21.852504ms)
Feb 1 13:09:46.624: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.21333ms)
Feb 1 13:09:46.628: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.753226ms)
Feb 1 13:09:46.633: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.879327ms)
Feb 1 13:09:46.637: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log
alternatives.l... (200; 3.933451ms)
Feb  1 13:09:46.641: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.897121ms)
Feb  1 13:09:46.645: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.714096ms)
Feb  1 13:09:46.648: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.568229ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:09:46.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4004" for this suite.
Feb  1 13:09:52.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:09:52.926: INFO: namespace proxy-4004 deletion completed in 6.274341691s

• [SLOW TEST:6.760 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:09:52.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb  1 13:09:53.010: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  1 13:09:53.023: INFO: Waiting for terminating namespaces to be deleted...
Feb  1 13:09:53.027: INFO: 
Logging pods the kubelet thinks is on node iruya-node before test
Feb  1 13:09:53.069: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb  1 13:09:53.069: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  1 13:09:53.069: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb  1 13:09:53.069: INFO: 	Container weave ready: true, restart count 0
Feb  1 13:09:53.069: INFO: 	Container weave-npc ready: true, restart count 0
Feb  1 13:09:53.069: INFO: 
Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb  1 13:09:53.091: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb  1 13:09:53.091: INFO: 	Container kube-controller-manager ready: true, restart count 19
Feb  1 13:09:53.091: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb  1 13:09:53.091: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  1 13:09:53.091: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb  1 13:09:53.091: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb  1 13:09:53.091: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb  1 13:09:53.091: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb  1 13:09:53.091: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb  1 13:09:53.091: INFO: 	Container coredns ready: true, restart count 0
Feb  1 13:09:53.091: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb  1 13:09:53.091: INFO: 	Container etcd ready: true, restart count 0
Feb  1 13:09:53.091: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb  1 13:09:53.091: INFO: 	Container weave ready: true, restart count 0
Feb  1 13:09:53.091: INFO: 	Container weave-npc ready: true, restart count 0
Feb  1 13:09:53.091: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb  1 13:09:53.091: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Feb  1 13:09:53.188: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb  1 13:09:53.188: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb  1 13:09:53.188: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb  1 13:09:53.188: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Feb  1 13:09:53.188: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Feb  1 13:09:53.188: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb  1 13:09:53.188: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Feb  1 13:09:53.188: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb  1 13:09:53.188: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Feb  1 13:09:53.188: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-16e77470-eccd-428b-bf65-241c3c37b8ae.15ef494dea7de616], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8348/filler-pod-16e77470-eccd-428b-bf65-241c3c37b8ae to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-16e77470-eccd-428b-bf65-241c3c37b8ae.15ef494f129d97d1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-16e77470-eccd-428b-bf65-241c3c37b8ae.15ef4950204eab8a], Reason = [Created], Message = [Created container filler-pod-16e77470-eccd-428b-bf65-241c3c37b8ae]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-16e77470-eccd-428b-bf65-241c3c37b8ae.15ef49503c4789cc], Reason = [Started], Message = [Started container filler-pod-16e77470-eccd-428b-bf65-241c3c37b8ae]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b445465d-568f-4352-b398-2cb3fdcfae9f.15ef494deaa19ba0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8348/filler-pod-b445465d-568f-4352-b398-2cb3fdcfae9f to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b445465d-568f-4352-b398-2cb3fdcfae9f.15ef494f1f487218], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b445465d-568f-4352-b398-2cb3fdcfae9f.15ef494ffeb9498b], Reason = [Created], Message = [Created container filler-pod-b445465d-568f-4352-b398-2cb3fdcfae9f]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b445465d-568f-4352-b398-2cb3fdcfae9f.15ef49502df35f7c], Reason = [Started], Message = [Started container filler-pod-b445465d-568f-4352-b398-2cb3fdcfae9f]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15ef4950bb656c98], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:10:06.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8348" for this suite.
Feb  1 13:10:16.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:10:16.978: INFO: namespace sched-pred-8348 deletion completed in 10.339670326s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:24.051 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
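Note: the FailedScheduling event above ("0/2 nodes are available: 2 Insufficient cpu.") is what the scheduler emits when a pod's CPU request exceeds the remaining allocatable capacity on every node. A minimal sketch of a pod spec that would reproduce this once the filler pods have saturated both nodes (the name matches the event pattern above; the request value is hypothetical):

```yaml
# Hypothetical pod whose CPU request cannot be satisfied after
# filler pods have consumed most allocatable CPU on both nodes.
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "600m"   # hypothetical: larger than any node's remaining allocatable CPU
      limits:
        cpu: "600m"
```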
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:10:16.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1154
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb  1 13:10:18.837: INFO: Found 0 stateful pods, waiting for 3
Feb  1 13:10:28.853: INFO: Found 2 stateful pods, waiting for 3
Feb  1 13:10:38.856: INFO: Found 2 stateful pods, waiting for 3
Feb  1 13:10:48.846: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  1 13:10:48.846: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  1 13:10:48.846: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  1 13:10:58.860: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  1 13:10:58.860: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  1 13:10:58.860: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  1 13:10:58.906: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb  1 13:11:08.981: INFO: Updating stateful set ss2
Feb  1 13:11:09.024: INFO: Waiting for Pod statefulset-1154/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb  1 13:11:19.703: INFO: Found 2 stateful pods, waiting for 3
Feb  1 13:11:29.711: INFO: Found 2 stateful pods, waiting for 3
Feb  1 13:11:40.184: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  1 13:11:40.184: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  1 13:11:40.184: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  1 13:11:49.713: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  1 13:11:49.713: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  1 13:11:49.713: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb  1 13:11:49.741: INFO: Updating stateful set ss2
Feb  1 13:11:49.757: INFO: Waiting for Pod statefulset-1154/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  1 13:11:59.967: INFO: Updating stateful set ss2
Feb  1 13:12:00.095: INFO: Waiting for StatefulSet statefulset-1154/ss2 to complete update
Feb  1 13:12:00.095: INFO: Waiting for Pod statefulset-1154/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  1 13:12:10.252: INFO: Waiting for StatefulSet statefulset-1154/ss2 to complete update
Feb  1 13:12:10.252: INFO: Waiting for Pod statefulset-1154/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  1 13:12:20.108: INFO: Waiting for StatefulSet statefulset-1154/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  1 13:12:30.111: INFO: Deleting all statefulset in ns statefulset-1154
Feb  1 13:12:30.116: INFO: Scaling statefulset ss2 to 0
Feb  1 13:13:10.168: INFO: Waiting for statefulset status.replicas updated to 0
Feb  1 13:13:10.173: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:13:10.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1154" for this suite.
Feb  1 13:13:18.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:13:18.405: INFO: namespace statefulset-1154 deletion completed in 8.187466573s

• [SLOW TEST:181.427 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
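Note: the canary and phased behavior logged above is controlled by the StatefulSet's RollingUpdate `partition`: only pods with an ordinal greater than or equal to the partition receive the new revision, while lower ordinals keep the old one. A sketch of the relevant spec fields, assuming the service and labels the test would use (values hypothetical):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test            # headless service created by the test
  replicas: 3
  selector:
    matchLabels: {app: ss2}    # hypothetical label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2             # canary: only ordinals >= 2 (here ss2-2) get the new revision
  template:
    metadata:
      labels: {app: ss2}
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine
```

Lowering `partition` in steps (2 → 1 → 0) then rolls the update out in phases, which matches the "Waiting for Pod statefulset-1154/ss2-1 … ss2-0 to have revision" sequence in the log.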
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:13:18.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-536.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-536.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-536.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-536.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-536.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-536.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-536.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-536.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-536.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-536.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-536.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-536.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-536.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 47.90.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.90.47_udp@PTR;check="$$(dig +tcp +noall +answer +search 47.90.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.90.47_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-536.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-536.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-536.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-536.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-536.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-536.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-536.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-536.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-536.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-536.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-536.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-536.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-536.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 47.90.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.90.47_udp@PTR;check="$$(dig +tcp +noall +answer +search 47.90.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.90.47_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  1 13:13:30.809: INFO: Unable to read wheezy_udp@dns-test-service.dns-536.svc.cluster.local from pod dns-536/dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b: the server could not find the requested resource (get pods dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b)
Feb  1 13:13:30.835: INFO: Unable to read wheezy_tcp@dns-test-service.dns-536.svc.cluster.local from pod dns-536/dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b: the server could not find the requested resource (get pods dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b)
Feb  1 13:13:30.839: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-536.svc.cluster.local from pod dns-536/dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b: the server could not find the requested resource (get pods dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b)
Feb  1 13:13:30.844: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-536.svc.cluster.local from pod dns-536/dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b: the server could not find the requested resource (get pods dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b)
Feb  1 13:13:30.853: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-536.svc.cluster.local from pod dns-536/dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b: the server could not find the requested resource (get pods dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b)
Feb  1 13:13:30.858: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-536.svc.cluster.local from pod dns-536/dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b: the server could not find the requested resource (get pods dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b)
Feb  1 13:13:30.866: INFO: Unable to read wheezy_udp@PodARecord from pod dns-536/dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b: the server could not find the requested resource (get pods dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b)
Feb  1 13:13:30.874: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-536/dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b: the server could not find the requested resource (get pods dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b)
Feb  1 13:13:30.880: INFO: Unable to read 10.104.90.47_udp@PTR from pod dns-536/dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b: the server could not find the requested resource (get pods dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b)
Feb  1 13:13:30.886: INFO: Unable to read 10.104.90.47_tcp@PTR from pod dns-536/dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b: the server could not find the requested resource (get pods dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b)
Feb  1 13:13:30.890: INFO: Unable to read jessie_udp@dns-test-service.dns-536.svc.cluster.local from pod dns-536/dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b: the server could not find the requested resource (get pods dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b)
Feb  1 13:13:30.893: INFO: Unable to read jessie_tcp@dns-test-service.dns-536.svc.cluster.local from pod dns-536/dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b: the server could not find the requested resource (get pods dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b)
Feb  1 13:13:30.897: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-536.svc.cluster.local from pod dns-536/dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b: the server could not find the requested resource (get pods dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b)
Feb  1 13:13:30.900: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-536.svc.cluster.local from pod dns-536/dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b: the server could not find the requested resource (get pods dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b)
Feb  1 13:13:30.904: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-536.svc.cluster.local from pod dns-536/dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b: the server could not find the requested resource (get pods dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b)
Feb  1 13:13:30.908: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-536.svc.cluster.local from pod dns-536/dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b: the server could not find the requested resource (get pods dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b)
Feb  1 13:13:30.911: INFO: Unable to read jessie_udp@PodARecord from pod dns-536/dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b: the server could not find the requested resource (get pods dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b)
Feb  1 13:13:30.913: INFO: Unable to read jessie_tcp@PodARecord from pod dns-536/dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b: the server could not find the requested resource (get pods dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b)
Feb  1 13:13:30.919: INFO: Unable to read 10.104.90.47_udp@PTR from pod dns-536/dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b: the server could not find the requested resource (get pods dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b)
Feb  1 13:13:30.929: INFO: Unable to read 10.104.90.47_tcp@PTR from pod dns-536/dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b: the server could not find the requested resource (get pods dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b)
Feb  1 13:13:30.929: INFO: Lookups using dns-536/dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b failed for: [wheezy_udp@dns-test-service.dns-536.svc.cluster.local wheezy_tcp@dns-test-service.dns-536.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-536.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-536.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-536.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-536.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.104.90.47_udp@PTR 10.104.90.47_tcp@PTR jessie_udp@dns-test-service.dns-536.svc.cluster.local jessie_tcp@dns-test-service.dns-536.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-536.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-536.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-536.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-536.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.104.90.47_udp@PTR 10.104.90.47_tcp@PTR]

Feb  1 13:13:36.044: INFO: DNS probes using dns-536/dns-test-c50d25ee-70c6-4d54-9990-d5d7faf45e4b succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:13:36.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-536" for this suite.
Feb  1 13:13:42.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:13:42.457: INFO: namespace dns-536 deletion completed in 6.137276986s

• [SLOW TEST:24.051 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
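Note: the dig loops above probe A, SRV, and PTR records for a headless service. In Kubernetes a headless service is simply a Service with `clusterIP: None`; DNS then publishes per-pod A records under `<service>.<namespace>.svc.cluster.local`, and SRV records are derived from named ports. A minimal sketch (selector label hypothetical):

```yaml
# Hypothetical headless service; DNS publishes A records for each
# ready backing pod at dns-test-service.<namespace>.svc.cluster.local
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service
spec:
  clusterIP: None        # headless: no virtual IP, per-pod A records instead
  selector:
    dns-test: "true"     # hypothetical label on the probe pods
  ports:
  - name: http           # named port; _http._tcp SRV records derive from this
    port: 80
    protocol: TCP
```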
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:13:42.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb  1 13:13:42.540: INFO: Waiting up to 5m0s for pod "pod-13855025-4c72-4215-9201-7cb26b585318" in namespace "emptydir-630" to be "success or failure"
Feb  1 13:13:42.570: INFO: Pod "pod-13855025-4c72-4215-9201-7cb26b585318": Phase="Pending", Reason="", readiness=false. Elapsed: 29.725886ms
Feb  1 13:13:44.580: INFO: Pod "pod-13855025-4c72-4215-9201-7cb26b585318": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039900297s
Feb  1 13:13:46.599: INFO: Pod "pod-13855025-4c72-4215-9201-7cb26b585318": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058258073s
Feb  1 13:13:48.658: INFO: Pod "pod-13855025-4c72-4215-9201-7cb26b585318": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117412677s
Feb  1 13:13:50.669: INFO: Pod "pod-13855025-4c72-4215-9201-7cb26b585318": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.12819624s
STEP: Saw pod success
Feb  1 13:13:50.669: INFO: Pod "pod-13855025-4c72-4215-9201-7cb26b585318" satisfied condition "success or failure"
Feb  1 13:13:50.673: INFO: Trying to get logs from node iruya-node pod pod-13855025-4c72-4215-9201-7cb26b585318 container test-container: 
STEP: delete the pod
Feb  1 13:13:50.766: INFO: Waiting for pod pod-13855025-4c72-4215-9201-7cb26b585318 to disappear
Feb  1 13:13:50.784: INFO: Pod pod-13855025-4c72-4215-9201-7cb26b585318 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:13:50.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-630" for this suite.
Feb  1 13:13:56.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:13:56.949: INFO: namespace emptydir-630 deletion completed in 6.158868362s

• [SLOW TEST:14.492 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
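The emptydir test above polls the pod every ~2s until it reports `Phase="Succeeded"` or a 5m timeout expires ("success or failure"). The suite itself is Go; as a minimal Python sketch of that wait pattern (all names here are ours, not the framework's), it looks like:

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() every `interval` seconds until it returns a phase
    in `want`; raise TimeoutError once `timeout` seconds have elapsed."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase in want:
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {elapsed:.1f}s")
        sleep(interval)
```

`get_phase` stands in for an API-server GET of the pod's `status.phase`; injecting `clock` and `sleep` keeps the loop testable without a cluster.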
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:13:56.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  1 13:13:57.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-8926'
Feb  1 13:13:59.048: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  1 13:13:59.048: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb  1 13:13:59.123: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-jxf8p]
Feb  1 13:13:59.124: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-jxf8p" in namespace "kubectl-8926" to be "running and ready"
Feb  1 13:13:59.162: INFO: Pod "e2e-test-nginx-rc-jxf8p": Phase="Pending", Reason="", readiness=false. Elapsed: 38.119005ms
Feb  1 13:14:01.171: INFO: Pod "e2e-test-nginx-rc-jxf8p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046782134s
Feb  1 13:14:03.179: INFO: Pod "e2e-test-nginx-rc-jxf8p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055460476s
Feb  1 13:14:05.190: INFO: Pod "e2e-test-nginx-rc-jxf8p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066120149s
Feb  1 13:14:07.201: INFO: Pod "e2e-test-nginx-rc-jxf8p": Phase="Running", Reason="", readiness=true. Elapsed: 8.077212275s
Feb  1 13:14:07.201: INFO: Pod "e2e-test-nginx-rc-jxf8p" satisfied condition "running and ready"
Feb  1 13:14:07.201: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-jxf8p]
Feb  1 13:14:07.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-8926'
Feb  1 13:14:07.384: INFO: stderr: ""
Feb  1 13:14:07.384: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Feb  1 13:14:07.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-8926'
Feb  1 13:14:07.669: INFO: stderr: ""
Feb  1 13:14:07.670: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:14:07.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8926" for this suite.
Feb  1 13:14:13.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:14:13.908: INFO: namespace kubectl-8926 deletion completed in 6.2273979s

• [SLOW TEST:16.957 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
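The elapsed times in this log are printed in Go's `time.Duration` format (`6.2273979s`, `5m0s`, `38.119005ms`). When post-processing such logs, a small parser is handy; a sketch (our own helper, not part of the e2e framework) covering the units that appear here:

```python
import re

# Unit suffixes as Go's time.Duration prints them, mapped to seconds.
_UNITS = {"h": 3600.0, "m": 60.0, "s": 1.0, "ms": 1e-3, "us": 1e-6, "ns": 1e-9}
_TOKEN = re.compile(r"(\d+(?:\.\d+)?)(h|m(?!s)|s|ms|us|ns)")

def parse_go_duration(text):
    """Convert a Go-style duration string such as '5m0s' or '6.2273979s'
    into seconds (float). Raises ValueError on anything unrecognised."""
    total, pos = 0.0, 0
    for m in _TOKEN.finditer(text):
        if m.start() != pos:          # reject gaps between tokens
            raise ValueError(f"bad duration: {text!r}")
        total += float(m.group(1)) * _UNITS[m.group(2)]
        pos = m.end()
    if pos != len(text) or pos == 0:  # reject trailing junk / empty input
        raise ValueError(f"bad duration: {text!r}")
    return total
```

The negative lookahead in `m(?!s)` keeps `ms` from being split into minutes plus a dangling `s`.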
S
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:14:13.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  1 13:14:14.053: INFO: Waiting up to 5m0s for pod "downwardapi-volume-72e9b050-736f-4f1d-94fb-4759d37e9ca0" in namespace "projected-7724" to be "success or failure"
Feb  1 13:14:14.077: INFO: Pod "downwardapi-volume-72e9b050-736f-4f1d-94fb-4759d37e9ca0": Phase="Pending", Reason="", readiness=false. Elapsed: 23.128194ms
Feb  1 13:14:16.092: INFO: Pod "downwardapi-volume-72e9b050-736f-4f1d-94fb-4759d37e9ca0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03850791s
Feb  1 13:14:18.102: INFO: Pod "downwardapi-volume-72e9b050-736f-4f1d-94fb-4759d37e9ca0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047713947s
Feb  1 13:14:20.109: INFO: Pod "downwardapi-volume-72e9b050-736f-4f1d-94fb-4759d37e9ca0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05563912s
Feb  1 13:14:22.124: INFO: Pod "downwardapi-volume-72e9b050-736f-4f1d-94fb-4759d37e9ca0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069858787s
STEP: Saw pod success
Feb  1 13:14:22.124: INFO: Pod "downwardapi-volume-72e9b050-736f-4f1d-94fb-4759d37e9ca0" satisfied condition "success or failure"
Feb  1 13:14:22.133: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-72e9b050-736f-4f1d-94fb-4759d37e9ca0 container client-container: 
STEP: delete the pod
Feb  1 13:14:22.223: INFO: Waiting for pod downwardapi-volume-72e9b050-736f-4f1d-94fb-4759d37e9ca0 to disappear
Feb  1 13:14:22.233: INFO: Pod downwardapi-volume-72e9b050-736f-4f1d-94fb-4759d37e9ca0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:14:22.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7724" for this suite.
Feb  1 13:14:28.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:14:28.468: INFO: namespace projected-7724 deletion completed in 6.227028261s

• [SLOW TEST:14.560 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:14:28.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  1 13:14:28.596: INFO: Waiting up to 5m0s for pod "pod-a6fbb61f-848e-4443-9aad-7d327ec7169e" in namespace "emptydir-3399" to be "success or failure"
Feb  1 13:14:28.611: INFO: Pod "pod-a6fbb61f-848e-4443-9aad-7d327ec7169e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.857932ms
Feb  1 13:14:30.621: INFO: Pod "pod-a6fbb61f-848e-4443-9aad-7d327ec7169e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024937887s
Feb  1 13:14:32.631: INFO: Pod "pod-a6fbb61f-848e-4443-9aad-7d327ec7169e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035080883s
Feb  1 13:14:34.640: INFO: Pod "pod-a6fbb61f-848e-4443-9aad-7d327ec7169e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043768415s
Feb  1 13:14:36.657: INFO: Pod "pod-a6fbb61f-848e-4443-9aad-7d327ec7169e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061261363s
Feb  1 13:14:38.667: INFO: Pod "pod-a6fbb61f-848e-4443-9aad-7d327ec7169e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071034123s
STEP: Saw pod success
Feb  1 13:14:38.667: INFO: Pod "pod-a6fbb61f-848e-4443-9aad-7d327ec7169e" satisfied condition "success or failure"
Feb  1 13:14:38.672: INFO: Trying to get logs from node iruya-node pod pod-a6fbb61f-848e-4443-9aad-7d327ec7169e container test-container: 
STEP: delete the pod
Feb  1 13:14:38.782: INFO: Waiting for pod pod-a6fbb61f-848e-4443-9aad-7d327ec7169e to disappear
Feb  1 13:14:38.797: INFO: Pod pod-a6fbb61f-848e-4443-9aad-7d327ec7169e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:14:38.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3399" for this suite.
Feb  1 13:14:44.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:14:44.972: INFO: namespace emptydir-3399 deletion completed in 6.166548129s

• [SLOW TEST:16.502 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:14:44.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-d59b7521-a94b-49bf-9db6-ce4f6a4b5bfd
STEP: Creating a pod to test consume configMaps
Feb  1 13:14:45.107: INFO: Waiting up to 5m0s for pod "pod-configmaps-806cb99a-a05d-416f-b2c5-99fa721d6106" in namespace "configmap-1165" to be "success or failure"
Feb  1 13:14:45.117: INFO: Pod "pod-configmaps-806cb99a-a05d-416f-b2c5-99fa721d6106": Phase="Pending", Reason="", readiness=false. Elapsed: 9.894181ms
Feb  1 13:14:47.125: INFO: Pod "pod-configmaps-806cb99a-a05d-416f-b2c5-99fa721d6106": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017593132s
Feb  1 13:14:49.142: INFO: Pod "pod-configmaps-806cb99a-a05d-416f-b2c5-99fa721d6106": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03453505s
Feb  1 13:14:51.153: INFO: Pod "pod-configmaps-806cb99a-a05d-416f-b2c5-99fa721d6106": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045860656s
Feb  1 13:14:53.162: INFO: Pod "pod-configmaps-806cb99a-a05d-416f-b2c5-99fa721d6106": Phase="Running", Reason="", readiness=true. Elapsed: 8.055134596s
Feb  1 13:14:55.170: INFO: Pod "pod-configmaps-806cb99a-a05d-416f-b2c5-99fa721d6106": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.062281216s
STEP: Saw pod success
Feb  1 13:14:55.170: INFO: Pod "pod-configmaps-806cb99a-a05d-416f-b2c5-99fa721d6106" satisfied condition "success or failure"
Feb  1 13:14:55.173: INFO: Trying to get logs from node iruya-node pod pod-configmaps-806cb99a-a05d-416f-b2c5-99fa721d6106 container configmap-volume-test: 
STEP: delete the pod
Feb  1 13:14:55.214: INFO: Waiting for pod pod-configmaps-806cb99a-a05d-416f-b2c5-99fa721d6106 to disappear
Feb  1 13:14:55.225: INFO: Pod pod-configmaps-806cb99a-a05d-416f-b2c5-99fa721d6106 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:14:55.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1165" for this suite.
Feb  1 13:15:01.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:15:01.400: INFO: namespace configmap-1165 deletion completed in 6.172087664s

• [SLOW TEST:16.427 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:15:01.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-e46c4382-3ef9-4a7e-b34c-44661eb8a763
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-e46c4382-3ef9-4a7e-b34c-44661eb8a763
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:16:15.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-648" for this suite.
Feb  1 13:16:37.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:16:37.383: INFO: namespace projected-648 deletion completed in 22.217408877s

• [SLOW TEST:95.982 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
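The projected-configMap test above updates a ConfigMap and waits for the change to appear in the mounted volume. The kubelet publishes such updates atomically: file data lives in a timestamped directory and a `..data` symlink is swapped to point at it, so readers never see a half-written mix. A loose Python sketch of that scheme (our own simplification, not the kubelet's actual writer; no cleanup of old payload dirs):

```python
import os
import tempfile

def atomic_publish(directory, files):
    """Write `files` (name -> bytes) into `directory` so that readers see
    either the old set or the new set, never a mix. Data goes into a fresh
    payload dir; the `..data` symlink is swapped with os.replace(), which
    is atomic at the filesystem level."""
    payload = tempfile.mkdtemp(prefix="..payload_", dir=directory)
    for name, blob in files.items():
        with open(os.path.join(payload, name), "wb") as f:
            f.write(blob)
    tmp_link = os.path.join(directory, "..data_tmp")
    os.symlink(os.path.basename(payload), tmp_link)
    os.replace(tmp_link, os.path.join(directory, "..data"))  # atomic swap
    for name in files:  # top-level names are stable links through ..data
        top = os.path.join(directory, name)
        if not os.path.islink(top):
            os.symlink(os.path.join("..data", name), top)
```

Because every visible filename routes through the single `..data` link, a consumer holding an open path sees a consistent snapshot across updates.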
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:16:37.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  1 13:19:39.907: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  1 13:19:39.964: INFO: Pod pod-with-poststart-exec-hook still exists
[... 47 near-identical 2s polling iterations elided: "Waiting for pod pod-with-poststart-exec-hook to disappear" / "Pod pod-with-poststart-exec-hook still exists", every 2s from 13:19:41 through 13:21:15 ...]
Feb  1 13:21:17.965: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  1 13:21:17.976: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:21:17.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8387" for this suite.
Feb  1 13:21:40.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:21:40.110: INFO: namespace container-lifecycle-hook-8387 deletion completed in 22.120968168s

• [SLOW TEST:302.727 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
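The long deletion wait above is the complementary pattern to the "success or failure" wait: poll on a fixed 2s cadence until the object is gone. A Python sketch of that loop (again our own illustrative helper, not the framework's Go code):

```python
import time

def wait_until_gone(exists, timeout=120.0, interval=2.0,
                    clock=time.monotonic, sleep=time.sleep,
                    log=lambda msg: None):
    """Poll exists() every `interval` seconds until it returns False,
    mirroring the fixed 2s cadence in the log above. Returns elapsed
    seconds; raises TimeoutError if the object outlives `timeout`."""
    start = clock()
    while True:
        if not exists():
            return clock() - start
        if clock() - start >= timeout:
            raise TimeoutError("object still exists after timeout")
        log("still exists; retrying")
        sleep(interval)
```

Note the test's total wait here was about 100s: pod deletion has to run the container's termination sequence before the API object disappears, so dozens of iterations before success are normal.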
SSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:21:40.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb  1 13:21:40.185: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  1 13:21:40.242: INFO: Waiting for terminating namespaces to be deleted...
Feb  1 13:21:40.247: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb  1 13:21:40.261: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb  1 13:21:40.261: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  1 13:21:40.261: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb  1 13:21:40.261: INFO: 	Container weave ready: true, restart count 0
Feb  1 13:21:40.261: INFO: 	Container weave-npc ready: true, restart count 0
Feb  1 13:21:40.261: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb  1 13:21:40.279: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb  1 13:21:40.279: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb  1 13:21:40.279: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb  1 13:21:40.279: INFO: 	Container coredns ready: true, restart count 0
Feb  1 13:21:40.279: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb  1 13:21:40.279: INFO: 	Container coredns ready: true, restart count 0
Feb  1 13:21:40.279: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb  1 13:21:40.279: INFO: 	Container etcd ready: true, restart count 0
Feb  1 13:21:40.279: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb  1 13:21:40.279: INFO: 	Container weave ready: true, restart count 0
Feb  1 13:21:40.279: INFO: 	Container weave-npc ready: true, restart count 0
Feb  1 13:21:40.279: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb  1 13:21:40.279: INFO: 	Container kube-controller-manager ready: true, restart count 19
Feb  1 13:21:40.279: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb  1 13:21:40.279: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  1 13:21:40.279: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb  1 13:21:40.279: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-8682d439-1916-47e6-ad52-7a28765fc5d5 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-8682d439-1916-47e6-ad52-7a28765fc5d5 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-8682d439-1916-47e6-ad52-7a28765fc5d5
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:21:58.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2817" for this suite.
Feb  1 13:22:12.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:22:12.763: INFO: namespace sched-pred-2817 deletion completed in 14.161819992s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:32.652 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
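The NodeSelector test above labels a node, relaunches a pod selecting on that label, then removes the label and verifies the selector no longer matches. The matching rule itself is simple set containment; a minimal model (not the actual scheduler code) looks like this, using the label key/value taken from the log:

```python
def node_selector_matches(node_selector, node_labels):
    """A pod's nodeSelector matches a node iff every key/value pair
    in the selector appears verbatim in the node's labels."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# The node found earlier gets the random e2e label with value "42",
# and the relaunched pod selects on exactly that pair.
label = "kubernetes.io/e2e-8682d439-1916-47e6-ad52-7a28765fc5d5"
node_labels = {"kubernetes.io/hostname": "iruya-node", label: "42"}
pod_selector = {label: "42"}
assert node_selector_matches(pod_selector, node_labels)

# After the AfterEach removes the label, the same selector fails.
del node_labels[label]
assert not node_selector_matches(pod_selector, node_labels)
```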
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:22:12.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-6045
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  1 13:22:12.907: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  1 13:22:53.111: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6045 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  1 13:22:53.111: INFO: >>> kubeConfig: /root/.kube/config
I0201 13:22:53.200609       8 log.go:172] (0xc001688210) (0xc002cc6320) Create stream
I0201 13:22:53.200904       8 log.go:172] (0xc001688210) (0xc002cc6320) Stream added, broadcasting: 1
I0201 13:22:53.216610       8 log.go:172] (0xc001688210) Reply frame received for 1
I0201 13:22:53.216735       8 log.go:172] (0xc001688210) (0xc002cc63c0) Create stream
I0201 13:22:53.216759       8 log.go:172] (0xc001688210) (0xc002cc63c0) Stream added, broadcasting: 3
I0201 13:22:53.219467       8 log.go:172] (0xc001688210) Reply frame received for 3
I0201 13:22:53.219506       8 log.go:172] (0xc001688210) (0xc00090a460) Create stream
I0201 13:22:53.219522       8 log.go:172] (0xc001688210) (0xc00090a460) Stream added, broadcasting: 5
I0201 13:22:53.221398       8 log.go:172] (0xc001688210) Reply frame received for 5
I0201 13:22:53.386527       8 log.go:172] (0xc001688210) Data frame received for 3
I0201 13:22:53.386620       8 log.go:172] (0xc002cc63c0) (3) Data frame handling
I0201 13:22:53.386661       8 log.go:172] (0xc002cc63c0) (3) Data frame sent
I0201 13:22:53.527648       8 log.go:172] (0xc001688210) Data frame received for 1
I0201 13:22:53.527787       8 log.go:172] (0xc001688210) (0xc00090a460) Stream removed, broadcasting: 5
I0201 13:22:53.527942       8 log.go:172] (0xc002cc6320) (1) Data frame handling
I0201 13:22:53.527984       8 log.go:172] (0xc001688210) (0xc002cc63c0) Stream removed, broadcasting: 3
I0201 13:22:53.528059       8 log.go:172] (0xc002cc6320) (1) Data frame sent
I0201 13:22:53.528075       8 log.go:172] (0xc001688210) (0xc002cc6320) Stream removed, broadcasting: 1
I0201 13:22:53.528108       8 log.go:172] (0xc001688210) Go away received
I0201 13:22:53.528596       8 log.go:172] (0xc001688210) (0xc002cc6320) Stream removed, broadcasting: 1
I0201 13:22:53.528633       8 log.go:172] (0xc001688210) (0xc002cc63c0) Stream removed, broadcasting: 3
I0201 13:22:53.528664       8 log.go:172] (0xc001688210) (0xc00090a460) Stream removed, broadcasting: 5
Feb  1 13:22:53.528: INFO: Found all expected endpoints: [netserver-0]
Feb  1 13:22:53.539: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6045 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  1 13:22:53.539: INFO: >>> kubeConfig: /root/.kube/config
I0201 13:22:53.626449       8 log.go:172] (0xc000aef760) (0xc0001d6640) Create stream
I0201 13:22:53.626692       8 log.go:172] (0xc000aef760) (0xc0001d6640) Stream added, broadcasting: 1
I0201 13:22:53.641610       8 log.go:172] (0xc000aef760) Reply frame received for 1
I0201 13:22:53.641683       8 log.go:172] (0xc000aef760) (0xc0001d66e0) Create stream
I0201 13:22:53.641701       8 log.go:172] (0xc000aef760) (0xc0001d66e0) Stream added, broadcasting: 3
I0201 13:22:53.644745       8 log.go:172] (0xc000aef760) Reply frame received for 3
I0201 13:22:53.644785       8 log.go:172] (0xc000aef760) (0xc00204c280) Create stream
I0201 13:22:53.644800       8 log.go:172] (0xc000aef760) (0xc00204c280) Stream added, broadcasting: 5
I0201 13:22:53.647515       8 log.go:172] (0xc000aef760) Reply frame received for 5
I0201 13:22:53.800361       8 log.go:172] (0xc000aef760) Data frame received for 3
I0201 13:22:53.800470       8 log.go:172] (0xc0001d66e0) (3) Data frame handling
I0201 13:22:53.800532       8 log.go:172] (0xc0001d66e0) (3) Data frame sent
I0201 13:22:54.079895       8 log.go:172] (0xc000aef760) Data frame received for 1
I0201 13:22:54.080218       8 log.go:172] (0xc000aef760) (0xc0001d66e0) Stream removed, broadcasting: 3
I0201 13:22:54.080558       8 log.go:172] (0xc0001d6640) (1) Data frame handling
I0201 13:22:54.080628       8 log.go:172] (0xc0001d6640) (1) Data frame sent
I0201 13:22:54.080658       8 log.go:172] (0xc000aef760) (0xc0001d6640) Stream removed, broadcasting: 1
I0201 13:22:54.081185       8 log.go:172] (0xc000aef760) (0xc00204c280) Stream removed, broadcasting: 5
I0201 13:22:54.081786       8 log.go:172] (0xc000aef760) Go away received
I0201 13:22:54.082333       8 log.go:172] (0xc000aef760) (0xc0001d6640) Stream removed, broadcasting: 1
I0201 13:22:54.082449       8 log.go:172] (0xc000aef760) (0xc0001d66e0) Stream removed, broadcasting: 3
I0201 13:22:54.082632       8 log.go:172] (0xc000aef760) (0xc00204c280) Stream removed, broadcasting: 5
Feb  1 13:22:54.082: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:22:54.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6045" for this suite.
Feb  1 13:23:18.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:23:18.270: INFO: namespace pod-network-test-6045 deletion completed in 24.165029368s

• [SLOW TEST:65.508 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
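The networking test above execs `curl http://<podIP>:8080/hostName` from a host-network helper pod and succeeds once every netserver hostname has been observed ("Found all expected endpoints"). That check can be modeled as a retry loop that accumulates responses until the expected set is covered; this is a stand-in probe, not the framework's actual ExecWithOptions plumbing:

```python
import itertools

def wait_for_endpoints(expected, probe, max_tries=10):
    """Accumulate hostnames returned by probe() until every expected
    endpoint has answered at least once, or tries are exhausted."""
    found = set()
    for _ in range(max_tries):
        found.add(probe())
        if expected <= found:
            return found
    raise TimeoutError(f"missing endpoints: {expected - found}")

# Stand-in for the curl to /hostName: each call returns one backend's
# hostname (round-robin over the two netservers from the log).
responses = itertools.cycle(["netserver-0", "netserver-1"])
found = wait_for_endpoints({"netserver-0", "netserver-1"},
                           lambda: next(responses))
assert found == {"netserver-0", "netserver-1"}
```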
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:23:18.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0201 13:23:48.932347       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  1 13:23:48.932: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:23:48.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4455" for this suite.
Feb  1 13:23:55.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:23:56.843: INFO: namespace gc-4455 deletion completed in 7.886120783s

• [SLOW TEST:38.572 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
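The garbage-collector test above deletes a Deployment with `deleteOptions.propagationPolicy: Orphan` and waits 30 seconds to confirm the ReplicaSet survives. A minimal model of that semantics (resource names are hypothetical; `Background`/`Foreground` deletion of dependents is omitted):

```python
def delete_owner(objects, owner, propagation_policy):
    """Sketch of propagationPolicy semantics: 'Orphan' removes the
    owner but only strips its ownerReferences from dependents, so
    the garbage collector has no reason to delete them."""
    del objects[owner]
    if propagation_policy == "Orphan":
        for obj in objects.values():
            obj["ownerReferences"] = [
                r for r in obj["ownerReferences"] if r != owner
            ]

# A Deployment owning a ReplicaSet, as in the test above
# (names are illustrative, not from the log).
cluster = {
    "deployment/test-deploy": {"ownerReferences": []},
    "replicaset/test-deploy-abc123": {
        "ownerReferences": ["deployment/test-deploy"]
    },
}
delete_owner(cluster, "deployment/test-deploy", "Orphan")

# The RS survives the owner's deletion and is now orphaned.
assert "replicaset/test-deploy-abc123" in cluster
assert cluster["replicaset/test-deploy-abc123"]["ownerReferences"] == []
```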
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:23:56.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-cv9kx in namespace proxy-2014
I0201 13:23:57.352774       8 runners.go:180] Created replication controller with name: proxy-service-cv9kx, namespace: proxy-2014, replica count: 1
I0201 13:23:58.404316       8 runners.go:180] proxy-service-cv9kx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0201 13:23:59.405059       8 runners.go:180] proxy-service-cv9kx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0201 13:24:00.405920       8 runners.go:180] proxy-service-cv9kx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0201 13:24:01.406451       8 runners.go:180] proxy-service-cv9kx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0201 13:24:02.407640       8 runners.go:180] proxy-service-cv9kx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0201 13:24:03.408430       8 runners.go:180] proxy-service-cv9kx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0201 13:24:04.409120       8 runners.go:180] proxy-service-cv9kx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0201 13:24:05.409833       8 runners.go:180] proxy-service-cv9kx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0201 13:24:06.411078       8 runners.go:180] proxy-service-cv9kx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0201 13:24:07.412476       8 runners.go:180] proxy-service-cv9kx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0201 13:24:08.413630       8 runners.go:180] proxy-service-cv9kx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0201 13:24:09.414488       8 runners.go:180] proxy-service-cv9kx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0201 13:24:10.415212       8 runners.go:180] proxy-service-cv9kx Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  1 13:24:10.424: INFO: setup took 13.342134498s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
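The "16 cases" are the distinct apiserver proxy URL variants exercised below: ten pod endpoints (bare and `http:`-prefixed names per plaintext port, the portless pod URL, and the three `https:` ports) plus six service endpoints. Enumerating them, under the names visible in this log, accounts for the 320-attempt total:

```python
pod = "proxy-service-cv9kx-6s87g"
svc = "proxy-service-cv9kx"

# 10 pod proxy endpoints: bare and http:-prefixed per plaintext port,
# the portless pod URL, and https: for the three TLS ports.
pod_urls = [f"pods/{name}:{port}"
            for name in (pod, f"http:{pod}")
            for port in (160, 162, 1080)]
pod_urls += [f"pods/{pod}"]
pod_urls += [f"pods/https:{pod}:{port}" for port in (443, 460, 462)]

# 6 service proxy endpoints: bare and http:-prefixed port names,
# plus https: for the two TLS port names.
svc_urls = [f"services/{name}:{pn}"
            for name in (svc, f"http:{svc}")
            for pn in ("portname1", "portname2")]
svc_urls += [f"services/https:{svc}:{pn}"
             for pn in ("tlsportname1", "tlsportname2")]

cases = pod_urls + svc_urls
assert len(cases) == 16
assert len(cases) * 20 == 320  # 20 attempts per case, as logged
```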
Feb  1 13:24:10.465: INFO: (0) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname2/proxy/: bar (200; 40.241913ms)
Feb  1 13:24:10.465: INFO: (0) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname2/proxy/: bar (200; 40.623612ms)
Feb  1 13:24:10.466: INFO: (0) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 39.972207ms)
Feb  1 13:24:10.466: INFO: (0) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 39.93568ms)
Feb  1 13:24:10.474: INFO: (0) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname1/proxy/: foo (200; 48.065103ms)
Feb  1 13:24:10.474: INFO: (0) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 49.010751ms)
Feb  1 13:24:10.475: INFO: (0) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 49.707892ms)
Feb  1 13:24:10.475: INFO: (0) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname1/proxy/: foo (200; 49.51909ms)
Feb  1 13:24:10.476: INFO: (0) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:1080/proxy/: test<... (200; 51.515122ms)
Feb  1 13:24:10.477: INFO: (0) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:1080/proxy/: ... (200; 51.555805ms)
Feb  1 13:24:10.477: INFO: (0) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g/proxy/: test (200; 51.712319ms)
Feb  1 13:24:10.488: INFO: (0) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:460/proxy/: tls baz (200; 63.27236ms)
Feb  1 13:24:10.490: INFO: (0) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:462/proxy/: tls qux (200; 64.946713ms)
Feb  1 13:24:10.490: INFO: (0) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:443/proxy/: ... (200; 30.986851ms)
Feb  1 13:24:10.527: INFO: (1) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g/proxy/: test (200; 30.80389ms)
Feb  1 13:24:10.527: INFO: (1) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:1080/proxy/: test<... (200; 30.935347ms)
Feb  1 13:24:10.527: INFO: (1) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:460/proxy/: tls baz (200; 30.92209ms)
Feb  1 13:24:10.527: INFO: (1) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:443/proxy/: test (200; 14.73599ms)
Feb  1 13:24:10.549: INFO: (2) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:460/proxy/: tls baz (200; 15.564627ms)
Feb  1 13:24:10.549: INFO: (2) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:1080/proxy/: test<... (200; 15.574195ms)
Feb  1 13:24:10.549: INFO: (2) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:462/proxy/: tls qux (200; 15.782262ms)
Feb  1 13:24:10.556: INFO: (2) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 22.461537ms)
Feb  1 13:24:10.556: INFO: (2) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 22.729281ms)
Feb  1 13:24:10.557: INFO: (2) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname2/proxy/: tls qux (200; 23.157935ms)
Feb  1 13:24:10.557: INFO: (2) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:1080/proxy/: ... (200; 23.938668ms)
Feb  1 13:24:10.558: INFO: (2) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname1/proxy/: foo (200; 24.344861ms)
Feb  1 13:24:10.559: INFO: (2) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:443/proxy/: test<... (200; 21.59133ms)
Feb  1 13:24:10.585: INFO: (3) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:443/proxy/: ... (200; 22.165154ms)
Feb  1 13:24:10.585: INFO: (3) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 21.878932ms)
Feb  1 13:24:10.586: INFO: (3) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 22.263111ms)
Feb  1 13:24:10.586: INFO: (3) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g/proxy/: test (200; 22.390303ms)
Feb  1 13:24:10.587: INFO: (3) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname1/proxy/: foo (200; 24.094618ms)
Feb  1 13:24:10.588: INFO: (3) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname2/proxy/: bar (200; 24.561482ms)
Feb  1 13:24:10.588: INFO: (3) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname1/proxy/: tls baz (200; 25.060357ms)
Feb  1 13:24:10.588: INFO: (3) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname2/proxy/: bar (200; 25.152991ms)
Feb  1 13:24:10.588: INFO: (3) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname2/proxy/: tls qux (200; 25.48562ms)
Feb  1 13:24:10.606: INFO: (4) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 16.949165ms)
Feb  1 13:24:10.606: INFO: (4) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:460/proxy/: tls baz (200; 16.792493ms)
Feb  1 13:24:10.606: INFO: (4) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:443/proxy/: test (200; 18.085524ms)
Feb  1 13:24:10.607: INFO: (4) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname2/proxy/: tls qux (200; 18.654865ms)
Feb  1 13:24:10.608: INFO: (4) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 18.387498ms)
Feb  1 13:24:10.608: INFO: (4) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:1080/proxy/: test<... (200; 18.25013ms)
Feb  1 13:24:10.608: INFO: (4) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 19.249165ms)
Feb  1 13:24:10.608: INFO: (4) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:1080/proxy/: ... (200; 19.280727ms)
Feb  1 13:24:10.612: INFO: (4) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname1/proxy/: foo (200; 22.426693ms)
Feb  1 13:24:10.612: INFO: (4) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname2/proxy/: bar (200; 22.565235ms)
Feb  1 13:24:10.612: INFO: (4) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname1/proxy/: tls baz (200; 23.205103ms)
Feb  1 13:24:10.612: INFO: (4) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname2/proxy/: bar (200; 23.11393ms)
Feb  1 13:24:10.613: INFO: (4) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname1/proxy/: foo (200; 23.664001ms)
Feb  1 13:24:10.613: INFO: (4) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:462/proxy/: tls qux (200; 24.09184ms)
Feb  1 13:24:10.632: INFO: (5) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname2/proxy/: bar (200; 18.254532ms)
Feb  1 13:24:10.633: INFO: (5) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname2/proxy/: bar (200; 18.598234ms)
Feb  1 13:24:10.632: INFO: (5) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname1/proxy/: foo (200; 18.726829ms)
Feb  1 13:24:10.633: INFO: (5) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname1/proxy/: tls baz (200; 19.442948ms)
Feb  1 13:24:10.637: INFO: (5) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 24.065661ms)
Feb  1 13:24:10.638: INFO: (5) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g/proxy/: test (200; 24.321293ms)
Feb  1 13:24:10.638: INFO: (5) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:1080/proxy/: test<... (200; 24.645985ms)
Feb  1 13:24:10.641: INFO: (5) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname1/proxy/: foo (200; 26.922012ms)
Feb  1 13:24:10.641: INFO: (5) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:1080/proxy/: ... (200; 26.849103ms)
Feb  1 13:24:10.641: INFO: (5) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:460/proxy/: tls baz (200; 27.061477ms)
Feb  1 13:24:10.641: INFO: (5) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname2/proxy/: tls qux (200; 26.89406ms)
Feb  1 13:24:10.641: INFO: (5) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 27.414338ms)
Feb  1 13:24:10.641: INFO: (5) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 26.798907ms)
Feb  1 13:24:10.641: INFO: (5) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 27.249172ms)
Feb  1 13:24:10.641: INFO: (5) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:462/proxy/: tls qux (200; 27.643703ms)
Feb  1 13:24:10.641: INFO: (5) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:443/proxy/: ... (200; 12.052329ms)
Feb  1 13:24:10.656: INFO: (6) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 13.775029ms)
Feb  1 13:24:10.656: INFO: (6) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 14.264884ms)
Feb  1 13:24:10.658: INFO: (6) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 15.929376ms)
Feb  1 13:24:10.660: INFO: (6) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g/proxy/: test (200; 17.781322ms)
Feb  1 13:24:10.660: INFO: (6) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:1080/proxy/: test<... (200; 17.595994ms)
Feb  1 13:24:10.660: INFO: (6) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:462/proxy/: tls qux (200; 17.836075ms)
Feb  1 13:24:10.660: INFO: (6) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname1/proxy/: foo (200; 18.316288ms)
Feb  1 13:24:10.661: INFO: (6) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname2/proxy/: tls qux (200; 19.593835ms)
Feb  1 13:24:10.667: INFO: (6) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname2/proxy/: bar (200; 24.526974ms)
Feb  1 13:24:10.667: INFO: (6) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname1/proxy/: foo (200; 24.30296ms)
Feb  1 13:24:10.667: INFO: (6) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname1/proxy/: tls baz (200; 25.454163ms)
Feb  1 13:24:10.667: INFO: (6) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname2/proxy/: bar (200; 24.960847ms)
Feb  1 13:24:10.679: INFO: (7) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:1080/proxy/: test<... (200; 11.026758ms)
Feb  1 13:24:10.679: INFO: (7) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 10.890522ms)
Feb  1 13:24:10.679: INFO: (7) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 11.219956ms)
Feb  1 13:24:10.679: INFO: (7) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 11.343418ms)
Feb  1 13:24:10.680: INFO: (7) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g/proxy/: test (200; 12.302863ms)
Feb  1 13:24:10.680: INFO: (7) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:460/proxy/: tls baz (200; 11.833611ms)
Feb  1 13:24:10.680: INFO: (7) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:443/proxy/: ... (200; 12.861991ms)
Feb  1 13:24:10.681: INFO: (7) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname2/proxy/: bar (200; 13.211724ms)
Feb  1 13:24:10.681: INFO: (7) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 13.799028ms)
Feb  1 13:24:10.681: INFO: (7) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname2/proxy/: bar (200; 13.807528ms)
Feb  1 13:24:10.681: INFO: (7) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:462/proxy/: tls qux (200; 13.745444ms)
Feb  1 13:24:10.683: INFO: (7) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname1/proxy/: foo (200; 15.563309ms)
Feb  1 13:24:10.683: INFO: (7) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname1/proxy/: tls baz (200; 15.111555ms)
Feb  1 13:24:10.692: INFO: (8) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 8.120323ms)
Feb  1 13:24:10.692: INFO: (8) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:462/proxy/: tls qux (200; 8.539878ms)
Feb  1 13:24:10.692: INFO: (8) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 8.716653ms)
Feb  1 13:24:10.693: INFO: (8) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 9.112258ms)
Feb  1 13:24:10.693: INFO: (8) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g/proxy/: test (200; 9.419432ms)
Feb  1 13:24:10.694: INFO: (8) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:443/proxy/: test<... (200; 10.24617ms)
Feb  1 13:24:10.694: INFO: (8) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 10.124084ms)
Feb  1 13:24:10.694: INFO: (8) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:1080/proxy/: ... (200; 10.583288ms)
Feb  1 13:24:10.694: INFO: (8) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:460/proxy/: tls baz (200; 10.292735ms)
Feb  1 13:24:10.695: INFO: (8) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname1/proxy/: foo (200; 11.544967ms)
Feb  1 13:24:10.696: INFO: (8) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname2/proxy/: bar (200; 12.573106ms)
Feb  1 13:24:10.697: INFO: (8) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname1/proxy/: foo (200; 13.755944ms)
Feb  1 13:24:10.697: INFO: (8) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname2/proxy/: bar (200; 14.191377ms)
Feb  1 13:24:10.699: INFO: (8) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname2/proxy/: tls qux (200; 15.34323ms)
Feb  1 13:24:10.699: INFO: (8) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname1/proxy/: tls baz (200; 15.735853ms)
Feb  1 13:24:10.710: INFO: (9) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 10.908796ms)
Feb  1 13:24:10.711: INFO: (9) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:443/proxy/: ... (200; 11.200287ms)
Feb  1 13:24:10.711: INFO: (9) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:1080/proxy/: test<... (200; 11.280772ms)
Feb  1 13:24:10.711: INFO: (9) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g/proxy/: test (200; 11.492791ms)
Feb  1 13:24:10.711: INFO: (9) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:460/proxy/: tls baz (200; 11.702765ms)
Feb  1 13:24:10.714: INFO: (9) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname2/proxy/: bar (200; 14.697454ms)
Feb  1 13:24:10.714: INFO: (9) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname2/proxy/: bar (200; 15.130973ms)
Feb  1 13:24:10.715: INFO: (9) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname1/proxy/: foo (200; 15.563677ms)
Feb  1 13:24:10.715: INFO: (9) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname1/proxy/: tls baz (200; 15.637909ms)
Feb  1 13:24:10.716: INFO: (9) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname1/proxy/: foo (200; 16.654345ms)
Feb  1 13:24:10.716: INFO: (9) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname2/proxy/: tls qux (200; 16.795835ms)
Feb  1 13:24:10.724: INFO: (10) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:462/proxy/: tls qux (200; 8.152585ms)
Feb  1 13:24:10.726: INFO: (10) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g/proxy/: test (200; 8.314179ms)
Feb  1 13:24:10.727: INFO: (10) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname2/proxy/: bar (200; 10.075577ms)
Feb  1 13:24:10.727: INFO: (10) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:443/proxy/: ... (200; 10.709007ms)
Feb  1 13:24:10.728: INFO: (10) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 9.889696ms)
Feb  1 13:24:10.728: INFO: (10) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 10.678363ms)
Feb  1 13:24:10.728: INFO: (10) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname2/proxy/: bar (200; 11.054165ms)
Feb  1 13:24:10.728: INFO: (10) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:1080/proxy/: test<... (200; 10.86865ms)
Feb  1 13:24:10.728: INFO: (10) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname1/proxy/: foo (200; 10.867787ms)
Feb  1 13:24:10.729: INFO: (10) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname1/proxy/: tls baz (200; 12.123216ms)
Feb  1 13:24:10.729: INFO: (10) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname1/proxy/: foo (200; 12.3205ms)
Feb  1 13:24:10.729: INFO: (10) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname2/proxy/: tls qux (200; 12.257318ms)
Feb  1 13:24:10.740: INFO: (11) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 9.844871ms)
Feb  1 13:24:10.740: INFO: (11) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname1/proxy/: tls baz (200; 9.874686ms)
Feb  1 13:24:10.740: INFO: (11) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname2/proxy/: bar (200; 9.803017ms)
Feb  1 13:24:10.740: INFO: (11) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname1/proxy/: foo (200; 10.186556ms)
Feb  1 13:24:10.740: INFO: (11) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname2/proxy/: bar (200; 10.175956ms)
Feb  1 13:24:10.740: INFO: (11) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:443/proxy/: ... (200; 10.330818ms)
Feb  1 13:24:10.742: INFO: (11) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 12.580934ms)
Feb  1 13:24:10.742: INFO: (11) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname1/proxy/: foo (200; 12.730531ms)
Feb  1 13:24:10.743: INFO: (11) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname2/proxy/: tls qux (200; 13.002263ms)
Feb  1 13:24:10.743: INFO: (11) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:462/proxy/: tls qux (200; 12.816424ms)
Feb  1 13:24:10.743: INFO: (11) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 12.718779ms)
Feb  1 13:24:10.743: INFO: (11) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:460/proxy/: tls baz (200; 13.156644ms)
Feb  1 13:24:10.743: INFO: (11) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:1080/proxy/: test<... (200; 12.999074ms)
Feb  1 13:24:10.743: INFO: (11) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 13.281135ms)
Feb  1 13:24:10.743: INFO: (11) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g/proxy/: test (200; 13.166871ms)
Feb  1 13:24:10.749: INFO: (12) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:462/proxy/: tls qux (200; 5.940366ms)
Feb  1 13:24:10.749: INFO: (12) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 6.152244ms)
Feb  1 13:24:10.749: INFO: (12) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:1080/proxy/: test<... (200; 6.197905ms)
Feb  1 13:24:10.752: INFO: (12) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 9.037735ms)
Feb  1 13:24:10.752: INFO: (12) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 9.137602ms)
Feb  1 13:24:10.752: INFO: (12) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 9.108314ms)
Feb  1 13:24:10.753: INFO: (12) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:460/proxy/: tls baz (200; 9.850194ms)
Feb  1 13:24:10.753: INFO: (12) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:1080/proxy/: ... (200; 9.846812ms)
Feb  1 13:24:10.753: INFO: (12) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g/proxy/: test (200; 9.901013ms)
Feb  1 13:24:10.753: INFO: (12) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:443/proxy/: ... (200; 6.193078ms)
Feb  1 13:24:10.773: INFO: (13) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:443/proxy/: ... (200; ...)
Feb  1 13:24:10.773: INFO: (13) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:1080/proxy/: test<... (200; 10.51762ms)
Feb  1 13:24:10.774: INFO: (13) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 10.798451ms)
Feb  1 13:24:10.774: INFO: (13) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 10.948397ms)
Feb  1 13:24:10.774: INFO: (13) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g/proxy/: test (200; 11.0676ms)
Feb  1 13:24:10.777: INFO: (13) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname1/proxy/: foo (200; 14.110243ms)
Feb  1 13:24:10.778: INFO: (13) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname2/proxy/: bar (200; 14.398637ms)
Feb  1 13:24:10.778: INFO: (13) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname2/proxy/: tls qux (200; 14.717455ms)
Feb  1 13:24:10.778: INFO: (13) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname1/proxy/: tls baz (200; 14.720348ms)
Feb  1 13:24:10.780: INFO: (13) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname1/proxy/: foo (200; 16.91339ms)
Feb  1 13:24:10.792: INFO: (14) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 11.348841ms)
Feb  1 13:24:10.796: INFO: (14) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 14.995928ms)
Feb  1 13:24:10.796: INFO: (14) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname1/proxy/: tls baz (200; 15.512335ms)
Feb  1 13:24:10.796: INFO: (14) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname2/proxy/: tls qux (200; 15.444869ms)
Feb  1 13:24:10.796: INFO: (14) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:460/proxy/: tls baz (200; 15.554439ms)
Feb  1 13:24:10.796: INFO: (14) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:443/proxy/: ... (200; ...)
Feb  1 13:24:10.796: INFO: (14) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g/proxy/: test (200; 19.499511ms)
Feb  1 13:24:10.800: INFO: (14) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname1/proxy/: foo (200; 19.686417ms)
Feb  1 13:24:10.800: INFO: (14) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:1080/proxy/: test<... (200; 19.85492ms)
Feb  1 13:24:10.800: INFO: (14) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname2/proxy/: bar (200; 19.921997ms)
Feb  1 13:24:10.800: INFO: (14) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:1080/proxy/: ... (200; 19.55921ms)
Feb  1 13:24:10.800: INFO: (14) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 19.661054ms)
Feb  1 13:24:10.800: INFO: (14) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname1/proxy/: foo (200; 20.010649ms)
Feb  1 13:24:10.801: INFO: (14) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname2/proxy/: bar (200; 20.393163ms)
Feb  1 13:24:10.830: INFO: (15) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 28.705801ms)
Feb  1 13:24:10.830: INFO: (15) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname2/proxy/: bar (200; 28.982593ms)
Feb  1 13:24:10.832: INFO: (15) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:460/proxy/: tls baz (200; 30.958908ms)
Feb  1 13:24:10.832: INFO: (15) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g/proxy/: test (200; 30.857797ms)
Feb  1 13:24:10.832: INFO: (15) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:1080/proxy/: test<... (200; 30.925137ms)
Feb  1 13:24:10.833: INFO: (15) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 32.051744ms)
Feb  1 13:24:10.833: INFO: (15) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname2/proxy/: tls qux (200; 32.435376ms)
Feb  1 13:24:10.833: INFO: (15) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:462/proxy/: tls qux (200; 32.248787ms)
Feb  1 13:24:10.833: INFO: (15) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname1/proxy/: foo (200; 32.49161ms)
Feb  1 13:24:10.833: INFO: (15) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 32.571142ms)
Feb  1 13:24:10.833: INFO: (15) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname2/proxy/: bar (200; 32.365807ms)
Feb  1 13:24:10.834: INFO: (15) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:1080/proxy/: ... (200; 32.505338ms)
Feb  1 13:24:10.834: INFO: (15) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname1/proxy/: tls baz (200; 32.618056ms)
Feb  1 13:24:10.834: INFO: (15) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname1/proxy/: foo (200; 32.872334ms)
Feb  1 13:24:10.834: INFO: (15) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:443/proxy/: ... (200; ...)
Feb  1 13:24:10.845: INFO: (16) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:1080/proxy/: test<... (200; 9.455135ms)
Feb  1 13:24:10.845: INFO: (16) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 9.537027ms)
Feb  1 13:24:10.845: INFO: (16) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:1080/proxy/: ... (200; 9.742854ms)
Feb  1 13:24:10.846: INFO: (16) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:460/proxy/: tls baz (200; 10.32892ms)
Feb  1 13:24:10.846: INFO: (16) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 10.455086ms)
Feb  1 13:24:10.846: INFO: (16) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:462/proxy/: tls qux (200; 10.567358ms)
Feb  1 13:24:10.846: INFO: (16) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 10.308061ms)
Feb  1 13:24:10.846: INFO: (16) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 10.437493ms)
Feb  1 13:24:10.850: INFO: (16) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname1/proxy/: foo (200; 14.563074ms)
Feb  1 13:24:10.850: INFO: (16) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g/proxy/: test (200; 14.458096ms)
Feb  1 13:24:10.851: INFO: (16) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname1/proxy/: foo (200; 15.218403ms)
Feb  1 13:24:10.851: INFO: (16) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:443/proxy/: ... (200; ...)
Feb  1 13:24:10.881: INFO: (17) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g/proxy/: test (200; 25.032584ms)
Feb  1 13:24:10.881: INFO: (17) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:1080/proxy/: test<... (200; 24.945903ms)
Feb  1 13:24:10.881: INFO: (17) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:1080/proxy/: ... (200; 25.209257ms)
Feb  1 13:24:10.881: INFO: (17) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 25.056004ms)
Feb  1 13:24:10.882: INFO: (17) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname2/proxy/: bar (200; 25.962902ms)
Feb  1 13:24:10.882: INFO: (17) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname1/proxy/: foo (200; 25.905738ms)
Feb  1 13:24:10.883: INFO: (17) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname1/proxy/: tls baz (200; 26.684557ms)
Feb  1 13:24:10.886: INFO: (17) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname2/proxy/: bar (200; 30.344497ms)
Feb  1 13:24:10.886: INFO: (17) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 30.521327ms)
Feb  1 13:24:10.888: INFO: (17) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname1/proxy/: foo (200; 32.359195ms)
Feb  1 13:24:10.889: INFO: (17) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname2/proxy/: tls qux (200; 33.02662ms)
Feb  1 13:24:10.901: INFO: (18) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:462/proxy/: tls qux (200; 11.23034ms)
Feb  1 13:24:10.904: INFO: (18) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname2/proxy/: bar (200; 14.830094ms)
Feb  1 13:24:10.906: INFO: (18) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname1/proxy/: foo (200; 16.664313ms)
Feb  1 13:24:10.906: INFO: (18) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname1/proxy/: foo (200; 16.774305ms)
Feb  1 13:24:10.908: INFO: (18) /api/v1/namespaces/proxy-2014/services/https:proxy-service-cv9kx:tlsportname1/proxy/: tls baz (200; 18.773901ms)
Feb  1 13:24:10.908: INFO: (18) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:460/proxy/: tls baz (200; 18.503604ms)
Feb  1 13:24:10.909: INFO: (18) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:443/proxy/: ... (200; 20.342744ms)
Feb  1 13:24:10.910: INFO: (18) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 20.481038ms)
Feb  1 13:24:10.910: INFO: (18) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 20.393617ms)
Feb  1 13:24:10.910: INFO: (18) /api/v1/namespaces/proxy-2014/services/http:proxy-service-cv9kx:portname2/proxy/: bar (200; 20.876711ms)
Feb  1 13:24:10.911: INFO: (18) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 22.353066ms)
Feb  1 13:24:10.912: INFO: (18) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 22.396199ms)
Feb  1 13:24:10.912: INFO: (18) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:1080/proxy/: test<... (200; 22.369859ms)
Feb  1 13:24:10.912: INFO: (18) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g/proxy/: test (200; 22.197072ms)
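Each numbered iteration above polls the same set of apiserver proxy endpoints: pod and service targets, plain and scheme-prefixed (http/https), with numbered ports (160, 162, 443, ...) or named ports (portname1, tlsportname2, ...). A small sketch of how those proxy paths are composed (illustrative helper, not part of the e2e framework):

```python
def proxy_path(namespace, kind, name, port=None, scheme=None):
    # Builds an apiserver proxy path like the ones polled above, e.g.
    # /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:160/proxy/
    # `scheme` ("http"/"https") prefixes the target; `port` may be a number
    # (for pods) or a named port (for services).
    target = name if scheme is None else "{}:{}".format(scheme, name)
    if port is not None:
        target = "{}:{}".format(target, port)
    return "/api/v1/namespaces/{}/{}/{}/proxy/".format(namespace, kind, target)
```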
Feb  1 13:24:10.928: INFO: (19) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:1080/proxy/: ... (200; 15.787219ms)
Feb  1 13:24:10.928: INFO: (19) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:1080/proxy/: test<... (200; 16.056163ms)
Feb  1 13:24:10.928: INFO: (19) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 16.245354ms)
Feb  1 13:24:10.929: INFO: (19) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:460/proxy/: tls baz (200; 17.009929ms)
Feb  1 13:24:10.935: INFO: (19) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:162/proxy/: bar (200; 23.021904ms)
Feb  1 13:24:10.936: INFO: (19) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g/proxy/: test (200; 23.705507ms)
Feb  1 13:24:10.936: INFO: (19) /api/v1/namespaces/proxy-2014/pods/proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 23.501997ms)
Feb  1 13:24:10.936: INFO: (19) /api/v1/namespaces/proxy-2014/pods/http:proxy-service-cv9kx-6s87g:160/proxy/: foo (200; 23.826242ms)
Feb  1 13:24:10.936: INFO: (19) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:462/proxy/: tls qux (200; 23.510734ms)
Feb  1 13:24:10.936: INFO: (19) /api/v1/namespaces/proxy-2014/services/proxy-service-cv9kx:portname2/proxy/: bar (200; 24.062289ms)
Feb  1 13:24:10.936: INFO: (19) /api/v1/namespaces/proxy-2014/pods/https:proxy-service-cv9kx-6s87g:443/proxy/: ... (200; ...)
[log truncated here: the remainder of the proxy spec and the header of the next spec were lost]
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 ...: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  1 13:24:32.929: INFO: Waiting up to 5m0s for pod "pod-02d35a69-8dd1-4c9c-a609-2d5c9aa93de5" in namespace "emptydir-8321" to be "success or failure"
Feb  1 13:24:32.937: INFO: Pod "pod-02d35a69-8dd1-4c9c-a609-2d5c9aa93de5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.905473ms
Feb  1 13:24:35.505: INFO: Pod "pod-02d35a69-8dd1-4c9c-a609-2d5c9aa93de5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.575918414s
Feb  1 13:24:37.514: INFO: Pod "pod-02d35a69-8dd1-4c9c-a609-2d5c9aa93de5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.585535991s
Feb  1 13:24:39.540: INFO: Pod "pod-02d35a69-8dd1-4c9c-a609-2d5c9aa93de5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.611403703s
Feb  1 13:24:41.549: INFO: Pod "pod-02d35a69-8dd1-4c9c-a609-2d5c9aa93de5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.620419435s
STEP: Saw pod success
Feb  1 13:24:41.549: INFO: Pod "pod-02d35a69-8dd1-4c9c-a609-2d5c9aa93de5" satisfied condition "success or failure"
Feb  1 13:24:41.552: INFO: Trying to get logs from node iruya-node pod pod-02d35a69-8dd1-4c9c-a609-2d5c9aa93de5 container test-container: 
STEP: delete the pod
Feb  1 13:24:41.595: INFO: Waiting for pod pod-02d35a69-8dd1-4c9c-a609-2d5c9aa93de5 to disappear
Feb  1 13:24:41.642: INFO: Pod pod-02d35a69-8dd1-4c9c-a609-2d5c9aa93de5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:24:41.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8321" for this suite.
Feb  1 13:24:47.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:24:47.914: INFO: namespace emptydir-8321 deletion completed in 6.256586549s

• [SLOW TEST:15.126 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
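The (root,0666,default) case above runs a test container as root that creates a file with mode 0666 on an emptyDir using the node's default medium, then checks the permissions the container reports. A minimal sketch of that permission-string check (standalone illustration, not the framework's code):

```python
import stat

def perm_string(mode):
    # Renders a regular file's mode the way `ls -l` (and the test
    # container's output) shows it, e.g. 0o666 -> "-rw-rw-rw-".
    return stat.filemode(stat.S_IFREG | mode)
```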
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:24:47.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  1 13:24:56.647: INFO: Successfully updated pod "pod-update-activedeadlineseconds-ee0a9d5f-c212-4dc2-b545-945687dcfcde"
Feb  1 13:24:56.647: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ee0a9d5f-c212-4dc2-b545-945687dcfcde" in namespace "pods-8687" to be "terminated due to deadline exceeded"
Feb  1 13:24:56.668: INFO: Pod "pod-update-activedeadlineseconds-ee0a9d5f-c212-4dc2-b545-945687dcfcde": Phase="Running", Reason="", readiness=true. Elapsed: 20.630327ms
Feb  1 13:24:58.675: INFO: Pod "pod-update-activedeadlineseconds-ee0a9d5f-c212-4dc2-b545-945687dcfcde": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.027489435s
Feb  1 13:24:58.675: INFO: Pod "pod-update-activedeadlineseconds-ee0a9d5f-c212-4dc2-b545-945687dcfcde" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:24:58.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8687" for this suite.
Feb  1 13:25:04.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:25:04.885: INFO: namespace pods-8687 deletion completed in 6.203899126s

• [SLOW TEST:16.970 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
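In the Pods spec above, a running pod is patched with a short activeDeadlineSeconds and, about two seconds later, the kubelet kills it: the 13:24:58 line shows phase Failed with reason DeadlineExceeded. The "terminated due to deadline exceeded" condition the framework waits for can be sketched as (simplified, field names as in the pod status lines above):

```python
def terminated_due_to_deadline(status):
    # The wait succeeds once the pod reaches phase Failed with
    # reason DeadlineExceeded; any other phase/reason keeps polling.
    return status.get("phase") == "Failed" and status.get("reason") == "DeadlineExceeded"
```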
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:25:04.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:25:13.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4310" for this suite.
Feb  1 13:25:59.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:25:59.261: INFO: namespace kubelet-test-4310 deletion completed in 46.124602594s

• [SLOW TEST:54.376 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
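The Kubelet spec above schedules a busybox pod with hostAliases and asserts the aliases appear in the container's /etc/hosts. The kubelet appends one line per alias entry; a minimal sketch of that rendering (illustrative, exact whitespace may differ):

```python
def hosts_lines(host_aliases):
    # Each hostAliases entry contributes one /etc/hosts line:
    # "<ip>\t<hostname1> <hostname2> ..."
    return ["{}\t{}".format(a["ip"], " ".join(a["hostnames"])) for a in host_aliases]
```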
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:25:59.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Feb  1 13:25:59.349: INFO: Waiting up to 5m0s for pod "var-expansion-2fc997ff-dd2d-435b-82c5-86fc3fee01ea" in namespace "var-expansion-4317" to be "success or failure"
Feb  1 13:25:59.361: INFO: Pod "var-expansion-2fc997ff-dd2d-435b-82c5-86fc3fee01ea": Phase="Pending", Reason="", readiness=false. Elapsed: 11.721082ms
Feb  1 13:26:01.380: INFO: Pod "var-expansion-2fc997ff-dd2d-435b-82c5-86fc3fee01ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030247209s
Feb  1 13:26:03.386: INFO: Pod "var-expansion-2fc997ff-dd2d-435b-82c5-86fc3fee01ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03696038s
Feb  1 13:26:05.395: INFO: Pod "var-expansion-2fc997ff-dd2d-435b-82c5-86fc3fee01ea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045650697s
Feb  1 13:26:07.419: INFO: Pod "var-expansion-2fc997ff-dd2d-435b-82c5-86fc3fee01ea": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069804378s
Feb  1 13:26:09.436: INFO: Pod "var-expansion-2fc997ff-dd2d-435b-82c5-86fc3fee01ea": Phase="Running", Reason="", readiness=true. Elapsed: 10.086475896s
Feb  1 13:26:11.445: INFO: Pod "var-expansion-2fc997ff-dd2d-435b-82c5-86fc3fee01ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.095478274s
STEP: Saw pod success
Feb  1 13:26:11.445: INFO: Pod "var-expansion-2fc997ff-dd2d-435b-82c5-86fc3fee01ea" satisfied condition "success or failure"
Feb  1 13:26:11.449: INFO: Trying to get logs from node iruya-node pod var-expansion-2fc997ff-dd2d-435b-82c5-86fc3fee01ea container dapi-container: 
STEP: delete the pod
Feb  1 13:26:11.533: INFO: Waiting for pod var-expansion-2fc997ff-dd2d-435b-82c5-86fc3fee01ea to disappear
Feb  1 13:26:11.556: INFO: Pod var-expansion-2fc997ff-dd2d-435b-82c5-86fc3fee01ea no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:26:11.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4317" for this suite.
Feb  1 13:26:17.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:26:17.763: INFO: namespace var-expansion-4317 deletion completed in 6.200529218s

• [SLOW TEST:18.501 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
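The Variable Expansion spec above runs a container whose args contain $(VAR) references that the kubelet substitutes from the container's environment before starting it. A simplified sketch of that substitution (the real kubelet expansion also supports $$ escaping, omitted here):

```python
import re

_VAR = re.compile(r"\$\(([A-Za-z_][A-Za-z0-9_.-]*)\)")

def expand_args(args, env):
    # Replace $(NAME) with env[NAME]; unresolvable references are left
    # verbatim, matching Kubernetes' behavior for unknown variables.
    def repl(m):
        return env.get(m.group(1), m.group(0))
    return [_VAR.sub(repl, a) for a in args]
```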
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:26:17.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  1 13:26:17.919: INFO: Creating deployment "nginx-deployment"
Feb  1 13:26:17.975: INFO: Waiting for observed generation 1
Feb  1 13:26:20.577: INFO: Waiting for all required pods to come up
Feb  1 13:26:21.245: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb  1 13:26:49.314: INFO: Waiting for deployment "nginx-deployment" to complete
Feb  1 13:26:49.327: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb  1 13:26:49.341: INFO: Updating deployment nginx-deployment
Feb  1 13:26:49.341: INFO: Waiting for observed generation 2
Feb  1 13:26:51.823: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb  1 13:26:52.367: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb  1 13:26:52.375: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb  1 13:26:52.389: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb  1 13:26:52.389: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb  1 13:26:52.392: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb  1 13:26:52.513: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb  1 13:26:52.513: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb  1 13:26:52.525: INFO: Updating deployment nginx-deployment
Feb  1 13:26:52.525: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb  1 13:26:53.899: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb  1 13:26:54.113: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
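The scale-up above goes from 10 to 30 replicas while a rollout with a non-existent image is in flight, so the deployment controller splits the surge-adjusted total (30 + maxSurge 3 = 33) between the two ReplicaSets in proportion to their pre-scale sizes (8 and 5), yielding the 20 and 13 verified above. A simplified sketch of that proportional split (not the controller's exact code, which works on integer proportions):

```python
import math

def proportional_split(sizes, new_total):
    # Distribute (new_total - sum(sizes)) across ReplicaSets in proportion
    # to their current sizes, handing leftover replicas to the sets with
    # the largest fractional share first. Assumes a scale-up (positive delta).
    current = sum(sizes)
    delta = new_total - current
    shares = [delta * s / current for s in sizes]
    base = [math.floor(x) for x in shares]
    leftover = delta - sum(base)
    for i in sorted(range(len(sizes)), key=lambda i: shares[i] - base[i], reverse=True)[:leftover]:
        base[i] += 1
    return [s + b for s, b in zip(sizes, base)]
```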
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  1 13:27:00.787: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-1973,SelfLink:/apis/apps/v1/namespaces/deployment-1973/deployments/nginx-deployment,UID:4ed5ae92-6806-42f5-ac57-27c75e521b11,ResourceVersion:22690984,Generation:3,CreationTimestamp:2020-02-01 13:26:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-01 13:26:49 +0000 UTC 2020-02-01 13:26:17 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-02-01 13:26:53 +0000 UTC 2020-02-01 13:26:53 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Feb  1 13:27:03.159: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-1973,SelfLink:/apis/apps/v1/namespaces/deployment-1973/replicasets/nginx-deployment-55fb7cb77f,UID:b4e50f3c-a608-4a67-9b00-6ff37a7f3a35,ResourceVersion:22690996,Generation:3,CreationTimestamp:2020-02-01 13:26:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 4ed5ae92-6806-42f5-ac57-27c75e521b11 0xc001657617 0xc001657618}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  1 13:27:03.159: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb  1 13:27:03.160: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-1973,SelfLink:/apis/apps/v1/namespaces/deployment-1973/replicasets/nginx-deployment-7b8c6f4498,UID:b103749b-20cf-4d23-ab2e-6d470f89a243,ResourceVersion:22690979,Generation:3,CreationTimestamp:2020-02-01 13:26:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 4ed5ae92-6806-42f5-ac57-27c75e521b11 0xc0016576e7 0xc0016576e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
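The two ReplicaSet dumps above also show the rolling-update sizing bound in action: the new set (nginx:404) is scaled to 13 replicas and the old set (nginx:1.14-alpine) to 20, which together hit exactly the spec's 30 replicas plus MaxSurge:3 (the deployment.kubernetes.io/max-replicas: 33 annotation). A small sketch of that arithmetic, using the values from the log and again assuming MaxSurge is an absolute count:

```python
# Rolling-update sizing seen in the two ReplicaSet dumps above.
desired = 30    # deployment.kubernetes.io/desired-replicas annotation
max_surge = 3   # RollingUpdateDeployment MaxSurge (assumed absolute, not a percentage)

max_total = desired + max_surge   # upper bound on pods across both ReplicaSets
new_rs, old_rs = 13, 20           # Replicas of 55fb7cb77f and 7b8c6f4498
assert new_rs + old_rs == max_total
print(max_total)  # → 33
```

Because the new pods never become Ready, the controller cannot scale the old ReplicaSet down past the MaxUnavailable floor, so the rollout stalls at this 13/20 split.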
Feb  1 13:27:05.642: INFO: Pod "nginx-deployment-55fb7cb77f-256vp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-256vp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-55fb7cb77f-256vp,UID:3efff7a5-a75c-4fdc-b774-61e9eff570f5,ResourceVersion:22690952,Generation:0,CreationTimestamp:2020-02-01 13:26:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b4e50f3c-a608-4a67-9b00-6ff37a7f3a35 0xc002634947 0xc002634948}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026349b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026349d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.644: INFO: Pod "nginx-deployment-55fb7cb77f-2pjzq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2pjzq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-55fb7cb77f-2pjzq,UID:c9dc601e-1a12-45ce-9c6c-a4bd12a907ed,ResourceVersion:22690974,Generation:0,CreationTimestamp:2020-02-01 13:26:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b4e50f3c-a608-4a67-9b00-6ff37a7f3a35 0xc002634a57 0xc002634a58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002634ac0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002634ae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.645: INFO: Pod "nginx-deployment-55fb7cb77f-7sknt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7sknt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-55fb7cb77f-7sknt,UID:b389c34f-1731-4280-86d4-be17303a273f,ResourceVersion:22690969,Generation:0,CreationTimestamp:2020-02-01 13:26:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b4e50f3c-a608-4a67-9b00-6ff37a7f3a35 0xc002634b67 0xc002634b68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002634be0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002634c00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.646: INFO: Pod "nginx-deployment-55fb7cb77f-9l4cc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9l4cc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-55fb7cb77f-9l4cc,UID:bc646361-1d71-40d9-a767-476772db567b,ResourceVersion:22690921,Generation:0,CreationTimestamp:2020-02-01 13:26:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b4e50f3c-a608-4a67-9b00-6ff37a7f3a35 0xc002634c87 0xc002634c88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002634cf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002634d10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:49 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-01 13:26:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.647: INFO: Pod "nginx-deployment-55fb7cb77f-dq5cf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dq5cf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-55fb7cb77f-dq5cf,UID:42b49de0-655b-440a-ad12-dda06e2f8b4a,ResourceVersion:22690925,Generation:0,CreationTimestamp:2020-02-01 13:26:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b4e50f3c-a608-4a67-9b00-6ff37a7f3a35 0xc002634de7 0xc002634de8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002634e60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002634e80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:49 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-01 13:26:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.648: INFO: Pod "nginx-deployment-55fb7cb77f-fptdm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fptdm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-55fb7cb77f-fptdm,UID:d4792128-6fef-484c-8443-7e2718a33905,ResourceVersion:22690896,Generation:0,CreationTimestamp:2020-02-01 13:26:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b4e50f3c-a608-4a67-9b00-6ff37a7f3a35 0xc002634f57 0xc002634f58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002634fc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002634fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:49 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-01 13:26:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.650: INFO: Pod "nginx-deployment-55fb7cb77f-jhrgh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jhrgh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-55fb7cb77f-jhrgh,UID:89a6f99b-e9d7-40de-93f3-5a6d59231027,ResourceVersion:22691007,Generation:0,CreationTimestamp:2020-02-01 13:26:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b4e50f3c-a608-4a67-9b00-6ff37a7f3a35 0xc0026350b7 0xc0026350b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002635130} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002635150}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-01 13:26:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.650: INFO: Pod "nginx-deployment-55fb7cb77f-lrnln" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lrnln,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-55fb7cb77f-lrnln,UID:c927200d-e299-404c-9b9c-e2fefce86401,ResourceVersion:22690972,Generation:0,CreationTimestamp:2020-02-01 13:26:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b4e50f3c-a608-4a67-9b00-6ff37a7f3a35 0xc002635227 0xc002635228}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0026352a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026352c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.652: INFO: Pod "nginx-deployment-55fb7cb77f-mlrx8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mlrx8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-55fb7cb77f-mlrx8,UID:e469f205-560e-4100-a8eb-c00103695e2b,ResourceVersion:22690975,Generation:0,CreationTimestamp:2020-02-01 13:26:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b4e50f3c-a608-4a67-9b00-6ff37a7f3a35 0xc002635347 0xc002635348}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026353b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026353d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.653: INFO: Pod "nginx-deployment-55fb7cb77f-phmvm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-phmvm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-55fb7cb77f-phmvm,UID:e7fb0768-6279-462f-baca-b974c5ad02d5,ResourceVersion:22690898,Generation:0,CreationTimestamp:2020-02-01 13:26:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b4e50f3c-a608-4a67-9b00-6ff37a7f3a35 0xc002635457 0xc002635458}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0026354d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026354f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:49 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-01 13:26:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.654: INFO: Pod "nginx-deployment-55fb7cb77f-s9ssn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-s9ssn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-55fb7cb77f-s9ssn,UID:31515469-317f-476d-9707-2b503163fed4,ResourceVersion:22690912,Generation:0,CreationTimestamp:2020-02-01 13:26:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b4e50f3c-a608-4a67-9b00-6ff37a7f3a35 0xc0026355c7 0xc0026355c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002635640} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002635660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:49 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-01 13:26:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.655: INFO: Pod "nginx-deployment-55fb7cb77f-v9prs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-v9prs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-55fb7cb77f-v9prs,UID:5ff83002-4e92-41d9-a9fb-e9b99b90828c,ResourceVersion:22690994,Generation:0,CreationTimestamp:2020-02-01 13:26:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b4e50f3c-a608-4a67-9b00-6ff37a7f3a35 0xc002635737 0xc002635738}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026357a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026357c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-01 13:26:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.656: INFO: Pod "nginx-deployment-55fb7cb77f-z2p5t" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-z2p5t,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-55fb7cb77f-z2p5t,UID:3d2adc72-4368-4def-945b-5b2f59f7be2d,ResourceVersion:22690982,Generation:0,CreationTimestamp:2020-02-01 13:26:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b4e50f3c-a608-4a67-9b00-6ff37a7f3a35 0xc002635897 0xc002635898}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002635910} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002635930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.657: INFO: Pod "nginx-deployment-7b8c6f4498-75qln" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-75qln,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-7b8c6f4498-75qln,UID:30325261-86e1-447b-bc6e-d5a351cad3af,ResourceVersion:22690976,Generation:0,CreationTimestamp:2020-02-01 13:26:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b103749b-20cf-4d23-ab2e-6d470f89a243 0xc0026359b7 0xc0026359b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002635a30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002635a50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.657: INFO: Pod "nginx-deployment-7b8c6f4498-8f6wf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8f6wf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-7b8c6f4498-8f6wf,UID:01b565f5-bf0e-4572-bee7-306cfbc402dc,ResourceVersion:22690955,Generation:0,CreationTimestamp:2020-02-01 13:26:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b103749b-20cf-4d23-ab2e-6d470f89a243 0xc002635ad7 0xc002635ad8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002635b50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002635b70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.658: INFO: Pod "nginx-deployment-7b8c6f4498-9hmds" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9hmds,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-7b8c6f4498-9hmds,UID:6b7f956f-a9f8-4dbd-a4c0-8a7c48721238,ResourceVersion:22690865,Generation:0,CreationTimestamp:2020-02-01 13:26:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b103749b-20cf-4d23-ab2e-6d470f89a243 0xc002635bf7 0xc002635bf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002635c70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002635c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:47 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.6,StartTime:2020-02-01 13:26:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-01 13:26:46 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9aee9268e503a74e5ab18c4dfafaefe7d32c562d420fa72cdea11343c76a15e3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.659: INFO: Pod "nginx-deployment-7b8c6f4498-cgrxz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cgrxz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-7b8c6f4498-cgrxz,UID:d75f4ab0-722b-47b4-a977-4524d7c1aa5d,ResourceVersion:22690815,Generation:0,CreationTimestamp:2020-02-01 13:26:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b103749b-20cf-4d23-ab2e-6d470f89a243 0xc002635d67 0xc002635d68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002635dd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002635df0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-02-01 13:26:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-01 13:26:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://eb921111f6d40bc2661a85240db30cc71abb06decc87c5c7c37c1adb01c5b16f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.660: INFO: Pod "nginx-deployment-7b8c6f4498-fsn64" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fsn64,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-7b8c6f4498-fsn64,UID:78c51717-d324-4715-b76e-181d0650b098,ResourceVersion:22690973,Generation:0,CreationTimestamp:2020-02-01 13:26:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b103749b-20cf-4d23-ab2e-6d470f89a243 0xc002635ec7 0xc002635ec8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002635f30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002635f50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.661: INFO: Pod "nginx-deployment-7b8c6f4498-g4dpl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-g4dpl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-7b8c6f4498-g4dpl,UID:464cefca-639a-4ced-9393-169542b407b7,ResourceVersion:22690956,Generation:0,CreationTimestamp:2020-02-01 13:26:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b103749b-20cf-4d23-ab2e-6d470f89a243 0xc002635fd7 0xc002635fd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028b4040} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028b4060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.662: INFO: Pod "nginx-deployment-7b8c6f4498-jjdhn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jjdhn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-7b8c6f4498-jjdhn,UID:1f3d020b-88c0-4845-b407-3b4a4c7b8a8d,ResourceVersion:22690990,Generation:0,CreationTimestamp:2020-02-01 13:26:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b103749b-20cf-4d23-ab2e-6d470f89a243 0xc0028b40e7 0xc0028b40e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028b4160} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028b4180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-01 13:26:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.663: INFO: Pod "nginx-deployment-7b8c6f4498-kmqg8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kmqg8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-7b8c6f4498-kmqg8,UID:b9f84a0c-bded-45fa-85ba-86c6e8cb12c2,ResourceVersion:22690999,Generation:0,CreationTimestamp:2020-02-01 13:26:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b103749b-20cf-4d23-ab2e-6d470f89a243 0xc0028b4247 0xc0028b4248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028b42c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028b42e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-01 13:26:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.664: INFO: Pod "nginx-deployment-7b8c6f4498-kz8h4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kz8h4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-7b8c6f4498-kz8h4,UID:dcfd358b-8375-4810-804c-136467435c3c,ResourceVersion:22690986,Generation:0,CreationTimestamp:2020-02-01 13:26:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b103749b-20cf-4d23-ab2e-6d470f89a243 0xc0028b43a7 0xc0028b43a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028b4410} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028b4430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-01 13:26:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.665: INFO: Pod "nginx-deployment-7b8c6f4498-lpgt5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lpgt5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-7b8c6f4498-lpgt5,UID:89ff572a-dd0f-4fa6-a81b-0ef89e6df482,ResourceVersion:22690971,Generation:0,CreationTimestamp:2020-02-01 13:26:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b103749b-20cf-4d23-ab2e-6d470f89a243 0xc0028b4507 0xc0028b4508}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028b4580} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028b45a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.666: INFO: Pod "nginx-deployment-7b8c6f4498-n7sgk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-n7sgk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-7b8c6f4498-n7sgk,UID:3e53d5be-4b4c-463a-a7b0-54004bba7744,ResourceVersion:22690828,Generation:0,CreationTimestamp:2020-02-01 13:26:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b103749b-20cf-4d23-ab2e-6d470f89a243 0xc0028b4627 0xc0028b4628}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028b4690} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028b46b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:41 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-02-01 13:26:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-01 13:26:40 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fe84705d2931bf9aa7580a7aa9764ca6a20fd4d4104590c653d3c78966d8c76d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.667: INFO: Pod "nginx-deployment-7b8c6f4498-q6pnf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q6pnf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-7b8c6f4498-q6pnf,UID:f3f3b412-3f47-4ffa-a281-257432a0a69a,ResourceVersion:22690977,Generation:0,CreationTimestamp:2020-02-01 13:26:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b103749b-20cf-4d23-ab2e-6d470f89a243 0xc0028b4787 0xc0028b4788}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028b47f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028b4810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.667: INFO: Pod "nginx-deployment-7b8c6f4498-rkg9h" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rkg9h,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-7b8c6f4498-rkg9h,UID:ec894020-2c97-44d6-80c8-57e0940500a9,ResourceVersion:22690818,Generation:0,CreationTimestamp:2020-02-01 13:26:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b103749b-20cf-4d23-ab2e-6d470f89a243 0xc0028b4897 0xc0028b4898}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028b4900} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028b4920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-02-01 13:26:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-01 13:26:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8469fac66d2f6bd30c70fd6e37bc5fc6c377617c19523b4d45c2d912e39575c2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.668: INFO: Pod "nginx-deployment-7b8c6f4498-rlhqp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rlhqp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-7b8c6f4498-rlhqp,UID:b77d663a-9bd4-401d-9fab-27fc3d8f6fc2,ResourceVersion:22690970,Generation:0,CreationTimestamp:2020-02-01 13:26:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b103749b-20cf-4d23-ab2e-6d470f89a243 0xc0028b49f7 0xc0028b49f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028b4a70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028b4a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.669: INFO: Pod "nginx-deployment-7b8c6f4498-rwg7v" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rwg7v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-7b8c6f4498-rwg7v,UID:6584379c-be79-4532-a167-375c7579e203,ResourceVersion:22690862,Generation:0,CreationTimestamp:2020-02-01 13:26:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b103749b-20cf-4d23-ab2e-6d470f89a243 0xc0028b4b17 0xc0028b4b18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028b4b90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028b4bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:47 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-02-01 13:26:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-01 13:26:46 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a72b33c6fd73638a0c026ab2239276733fe32594bfb060723e192369bf370ff4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.670: INFO: Pod "nginx-deployment-7b8c6f4498-swjfj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-swjfj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-7b8c6f4498-swjfj,UID:64183ecd-6595-447f-8320-63a8c03bfa53,ResourceVersion:22690850,Generation:0,CreationTimestamp:2020-02-01 13:26:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b103749b-20cf-4d23-ab2e-6d470f89a243 0xc0028b4c87 0xc0028b4c88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028b4d00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028b4d20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:47 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-02-01 13:26:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-01 13:26:46 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d6d7a8af86629f51bbb9452e7d01ce76edbcbf758af4647689e011460995bc22}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.671: INFO: Pod "nginx-deployment-7b8c6f4498-tzk75" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tzk75,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-7b8c6f4498-tzk75,UID:3f7d45ea-80f3-4f64-b378-183219e482c7,ResourceVersion:22690825,Generation:0,CreationTimestamp:2020-02-01 13:26:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b103749b-20cf-4d23-ab2e-6d470f89a243 0xc0028b4df7 0xc0028b4df8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028b4e70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028b4e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:41 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-01 13:26:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-01 13:26:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6f38e6504aa6c8bfd700ea255f86187e214901a5491a795524f73285af560cbd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.672: INFO: Pod "nginx-deployment-7b8c6f4498-vjxns" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vjxns,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-7b8c6f4498-vjxns,UID:66f62234-37e8-4261-8fdf-7e83753e6608,ResourceVersion:22690953,Generation:0,CreationTimestamp:2020-02-01 13:26:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b103749b-20cf-4d23-ab2e-6d470f89a243 0xc0028b4f67 0xc0028b4f68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028b4fe0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028b5000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.673: INFO: Pod "nginx-deployment-7b8c6f4498-w5tw4" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w5tw4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-7b8c6f4498-w5tw4,UID:d67fd712-cb08-4a9d-8ac5-e26cc0e81fee,ResourceVersion:22690856,Generation:0,CreationTimestamp:2020-02-01 13:26:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b103749b-20cf-4d23-ab2e-6d470f89a243 0xc0028b5087 0xc0028b5088}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028b5100} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028b5120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:47 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-01 13:26:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-01 13:26:46 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a12827553e2f4825e6bf1e36462164cbce425b263c17058a0e9b46a5140c757f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:27:05.674: INFO: Pod "nginx-deployment-7b8c6f4498-x86kl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-x86kl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1973,SelfLink:/api/v1/namespaces/deployment-1973/pods/nginx-deployment-7b8c6f4498-x86kl,UID:018b48bf-c7a3-43a2-9fb7-e17e59ac4dfc,ResourceVersion:22690954,Generation:0,CreationTimestamp:2020-02-01 13:26:53 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b103749b-20cf-4d23-ab2e-6d470f89a243 0xc0028b51f7 0xc0028b51f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5wj6d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5wj6d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5wj6d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028b5280} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028b52a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:26:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:27:05.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1973" for this suite.
Feb  1 13:27:55.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:27:55.951: INFO: namespace deployment-1973 deletion completed in 49.529541997s

• [SLOW TEST:98.187 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
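(Editor's note: the Deployment the proportional-scaling test drives can be pieced together from the pod dumps above. A minimal sketch, assuming the replica count and RollingUpdate parameters — the e2e test sets those programmatically, then verifies that scaling is distributed proportionally across the old and new ReplicaSets:)

```yaml
# Sketch only: name, namespace, labels, and image come from the log;
# replicas and strategy values are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: deployment-1973
spec:
  replicas: 10            # assumption; the test scales this mid-rollout
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3         # assumption
      maxUnavailable: 2   # assumption
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```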
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:27:55.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-df3de056-fb88-473f-8133-0b3c631880c5
STEP: Creating a pod to test consume configMaps
Feb  1 13:27:56.126: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b7e229f2-14af-425c-90fb-1efbadf7fe19" in namespace "projected-217" to be "success or failure"
Feb  1 13:27:56.138: INFO: Pod "pod-projected-configmaps-b7e229f2-14af-425c-90fb-1efbadf7fe19": Phase="Pending", Reason="", readiness=false. Elapsed: 11.768013ms
Feb  1 13:27:58.150: INFO: Pod "pod-projected-configmaps-b7e229f2-14af-425c-90fb-1efbadf7fe19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024268412s
Feb  1 13:28:00.160: INFO: Pod "pod-projected-configmaps-b7e229f2-14af-425c-90fb-1efbadf7fe19": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034057626s
Feb  1 13:28:02.167: INFO: Pod "pod-projected-configmaps-b7e229f2-14af-425c-90fb-1efbadf7fe19": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041529732s
Feb  1 13:28:04.185: INFO: Pod "pod-projected-configmaps-b7e229f2-14af-425c-90fb-1efbadf7fe19": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059681004s
Feb  1 13:28:06.196: INFO: Pod "pod-projected-configmaps-b7e229f2-14af-425c-90fb-1efbadf7fe19": Phase="Running", Reason="", readiness=true. Elapsed: 10.069975391s
Feb  1 13:28:08.205: INFO: Pod "pod-projected-configmaps-b7e229f2-14af-425c-90fb-1efbadf7fe19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.0796486s
STEP: Saw pod success
Feb  1 13:28:08.206: INFO: Pod "pod-projected-configmaps-b7e229f2-14af-425c-90fb-1efbadf7fe19" satisfied condition "success or failure"
Feb  1 13:28:08.210: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-b7e229f2-14af-425c-90fb-1efbadf7fe19 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Feb  1 13:28:08.408: INFO: Waiting for pod pod-projected-configmaps-b7e229f2-14af-425c-90fb-1efbadf7fe19 to disappear
Feb  1 13:28:08.475: INFO: Pod pod-projected-configmaps-b7e229f2-14af-425c-90fb-1efbadf7fe19 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:28:08.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-217" for this suite.
Feb  1 13:28:14.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:28:14.678: INFO: namespace projected-217 deletion completed in 6.191109837s

• [SLOW TEST:18.727 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
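(Editor's note: a hedged sketch of the kind of pod this spec creates — a projected configMap volume whose `items` remap a key to a path and set a per-item `mode`. The configMap and pod names are the generated ones from the log; the key, path, mode, image, and args are illustrative assumptions, not the test's actual values:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-b7e229f2-14af-425c-90fb-1efbadf7fe19
  namespace: projected-217
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0             # assumption
    args: ["--file_content=/etc/projected-configmap-volume/path/to/data-2"]  # assumption
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-df3de056-fb88-473f-8133-0b3c631880c5
          items:
          - key: data-2            # assumption
            path: path/to/data-2   # assumption
            mode: 0400             # the per-item "Item mode" under test; value assumed
```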
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:28:14.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  1 13:28:14.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1234'
Feb  1 13:28:17.305: INFO: stderr: ""
Feb  1 13:28:17.305: INFO: stdout: "replicationcontroller/redis-master created\n"
Feb  1 13:28:17.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1234'
Feb  1 13:28:18.020: INFO: stderr: ""
Feb  1 13:28:18.020: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  1 13:28:19.141: INFO: Selector matched 1 pods for map[app:redis]
Feb  1 13:28:19.141: INFO: Found 0 / 1
Feb  1 13:28:20.034: INFO: Selector matched 1 pods for map[app:redis]
Feb  1 13:28:20.034: INFO: Found 0 / 1
Feb  1 13:28:21.033: INFO: Selector matched 1 pods for map[app:redis]
Feb  1 13:28:21.034: INFO: Found 0 / 1
Feb  1 13:28:22.055: INFO: Selector matched 1 pods for map[app:redis]
Feb  1 13:28:22.056: INFO: Found 0 / 1
Feb  1 13:28:23.024: INFO: Selector matched 1 pods for map[app:redis]
Feb  1 13:28:23.024: INFO: Found 0 / 1
Feb  1 13:28:24.028: INFO: Selector matched 1 pods for map[app:redis]
Feb  1 13:28:24.028: INFO: Found 0 / 1
Feb  1 13:28:26.141: INFO: Selector matched 1 pods for map[app:redis]
Feb  1 13:28:26.142: INFO: Found 0 / 1
Feb  1 13:28:27.044: INFO: Selector matched 1 pods for map[app:redis]
Feb  1 13:28:27.044: INFO: Found 1 / 1
Feb  1 13:28:27.044: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  1 13:28:27.059: INFO: Selector matched 1 pods for map[app:redis]
Feb  1 13:28:27.059: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  1 13:28:27.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-lhbtb --namespace=kubectl-1234'
Feb  1 13:28:27.316: INFO: stderr: ""
Feb  1 13:28:27.317: INFO: stdout: "Name:           redis-master-lhbtb\nNamespace:      kubectl-1234\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Sat, 01 Feb 2020 13:28:17 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    <none>\nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://97cb80b03eb4029dc8a419a776c1f27d6a7e430b1fa2dacbfb8812529fb440c1\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 01 Feb 2020 13:28:24 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-ld4gh (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-ld4gh:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-ld4gh\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  10s   default-scheduler    Successfully assigned kubectl-1234/redis-master-lhbtb to iruya-node\n  Normal  Pulled     6s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    3s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    3s    kubelet, iruya-node  Started container redis-master\n"
Feb  1 13:28:27.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-1234'
Feb  1 13:28:27.433: INFO: stderr: ""
Feb  1 13:28:27.433: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-1234\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  10s   replication-controller  Created pod: redis-master-lhbtb\n"
Feb  1 13:28:27.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-1234'
Feb  1 13:28:27.556: INFO: stderr: ""
Feb  1 13:28:27.556: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-1234\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.108.8.163\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            <none>\n"
Feb  1 13:28:27.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Feb  1 13:28:27.676: INFO: stderr: ""
Feb  1 13:28:27.677: INFO: stdout: "Name:               iruya-node\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             <none>\nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Sat, 01 Feb 2020 13:28:10 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sat, 01 Feb 2020 13:28:10 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sat, 01 Feb 2020 13:28:10 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sat, 01 Feb 2020 13:28:10 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         181d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         112d\n  kubectl-1234               redis-master-lhbtb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              <none>\n"
Feb  1 13:28:27.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-1234'
Feb  1 13:28:27.846: INFO: stderr: ""
Feb  1 13:28:27.846: INFO: stdout: "Name:         kubectl-1234\nLabels:       e2e-framework=kubectl\n              e2e-run=6a02dc8e-b166-467e-9f0f-1642e32af73b\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:28:27.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1234" for this suite.
Feb  1 13:28:49.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:28:49.974: INFO: namespace kubectl-1234 deletion completed in 22.122598307s

• [SLOW TEST:35.296 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
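(Editor's note: the `kubectl describe` output above is enough to reconstruct the two objects the test piped into `kubectl create -f -`. A sketch — only the container port name is inferred, from the service's `TargetPort: redis-server/TCP`:)

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  namespace: kubectl-1234
  labels:
    app: redis
    role: master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - name: redis-server   # inferred from the service's TargetPort
          containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  namespace: kubectl-1234
  labels:
    app: redis
    role: master
spec:
  type: ClusterIP
  selector:
    app: redis
    role: master
  ports:
  - port: 6379
    targetPort: redis-server
```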
S
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:28:49.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb  1 13:28:50.095: INFO: Pod name pod-release: Found 0 pods out of 1
Feb  1 13:28:55.111: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:28:56.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4141" for this suite.
Feb  1 13:29:02.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:29:02.352: INFO: namespace replication-controller-4141 deletion completed in 6.177454188s

• [SLOW TEST:12.377 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
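(Editor's note: "releasing" here means the controller orphans a pod once its labels no longer match the selector — the ownerReference is cleared, the pod keeps running, and the RC starts a replacement to restore the replica count. A sketch of the `pod-release` controller; the label key and image are assumptions:)

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
  namespace: replication-controller-4141
spec:
  replicas: 1
  selector:
    name: pod-release        # label key assumed
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pod-release
        image: docker.io/library/nginx:1.14-alpine   # assumption
```

The "When the matched label of one of its pods change" step then overwrites that label on the running pod with a non-matching value, which is what triggers the release.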
S
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:29:02.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-ba3d63ba-1c13-4165-bfaa-e330f720f9a0
STEP: Creating a pod to test consume secrets
Feb  1 13:29:02.502: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d024555f-3aba-4019-86de-5d9e52fb6698" in namespace "projected-7269" to be "success or failure"
Feb  1 13:29:02.534: INFO: Pod "pod-projected-secrets-d024555f-3aba-4019-86de-5d9e52fb6698": Phase="Pending", Reason="", readiness=false. Elapsed: 31.741942ms
Feb  1 13:29:04.544: INFO: Pod "pod-projected-secrets-d024555f-3aba-4019-86de-5d9e52fb6698": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041770917s
Feb  1 13:29:06.560: INFO: Pod "pod-projected-secrets-d024555f-3aba-4019-86de-5d9e52fb6698": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058003671s
Feb  1 13:29:08.577: INFO: Pod "pod-projected-secrets-d024555f-3aba-4019-86de-5d9e52fb6698": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074678849s
Feb  1 13:29:10.591: INFO: Pod "pod-projected-secrets-d024555f-3aba-4019-86de-5d9e52fb6698": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089591831s
Feb  1 13:29:12.609: INFO: Pod "pod-projected-secrets-d024555f-3aba-4019-86de-5d9e52fb6698": Phase="Pending", Reason="", readiness=false. Elapsed: 10.107150016s
Feb  1 13:29:14.627: INFO: Pod "pod-projected-secrets-d024555f-3aba-4019-86de-5d9e52fb6698": Phase="Running", Reason="", readiness=true. Elapsed: 12.124656828s
Feb  1 13:29:16.634: INFO: Pod "pod-projected-secrets-d024555f-3aba-4019-86de-5d9e52fb6698": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.131696197s
STEP: Saw pod success
Feb  1 13:29:16.634: INFO: Pod "pod-projected-secrets-d024555f-3aba-4019-86de-5d9e52fb6698" satisfied condition "success or failure"
Feb  1 13:29:16.638: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-d024555f-3aba-4019-86de-5d9e52fb6698 container secret-volume-test: <nil>
STEP: delete the pod
Feb  1 13:29:16.696: INFO: Waiting for pod pod-projected-secrets-d024555f-3aba-4019-86de-5d9e52fb6698 to disappear
Feb  1 13:29:16.704: INFO: Pod pod-projected-secrets-d024555f-3aba-4019-86de-5d9e52fb6698 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:29:16.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7269" for this suite.
Feb  1 13:29:22.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:29:23.001: INFO: namespace projected-7269 deletion completed in 6.291254109s

• [SLOW TEST:20.649 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
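(Editor's note: a hedged sketch of a pod that consumes one projected secret through two separate volumes, as this spec exercises. The secret and pod names are the generated ones from the log; the mount paths and image are assumptions:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-d024555f-3aba-4019-86de-5d9e52fb6698
  namespace: projected-7269
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumption
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1   # mount paths assumed
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test-ba3d63ba-1c13-4165-bfaa-e330f720f9a0
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test-ba3d63ba-1c13-4165-bfaa-e330f720f9a0
```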
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:29:23.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  1 13:29:39.200: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  1 13:29:39.221: INFO: Pod pod-with-prestop-http-hook still exists
Feb  1 13:29:41.222: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  1 13:29:41.232: INFO: Pod pod-with-prestop-http-hook still exists
Feb  1 13:29:43.222: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  1 13:29:43.229: INFO: Pod pod-with-prestop-http-hook still exists
Feb  1 13:29:45.221: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  1 13:29:45.234: INFO: Pod pod-with-prestop-http-hook still exists
Feb  1 13:29:47.221: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  1 13:29:47.235: INFO: Pod pod-with-prestop-http-hook still exists
Feb  1 13:29:49.222: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  1 13:29:49.230: INFO: Pod pod-with-prestop-http-hook still exists
Feb  1 13:29:51.221: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  1 13:29:51.231: INFO: Pod pod-with-prestop-http-hook still exists
Feb  1 13:29:53.222: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  1 13:29:53.236: INFO: Pod pod-with-prestop-http-hook still exists
Feb  1 13:29:55.221: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  1 13:29:55.233: INFO: Pod pod-with-prestop-http-hook still exists
Feb  1 13:29:57.221: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  1 13:29:57.229: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:29:57.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8984" for this suite.
Feb  1 13:30:19.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:30:19.630: INFO: namespace container-lifecycle-hook-8984 deletion completed in 22.341548228s

• [SLOW TEST:56.628 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
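(Editor's note: the shape of a pod with a `preStop` HTTP lifecycle hook, as this spec exercises: when the pod is deleted, the kubelet issues the httpGet before sending SIGTERM, and the test then verifies the handler pod received the request. Everything below except the pod name is an assumption:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
  namespace: container-lifecycle-hook-8984
spec:
  terminationGracePeriodSeconds: 30   # assumption
  containers:
  - name: pod-with-prestop-http-hook
    image: docker.io/library/nginx:1.14-alpine   # assumption
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop   # assumption
          port: 8080                # assumption
          # the test points `host:` at the separate handler pod created
          # in [BeforeEach], then checks that the handler saw the request
```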
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:30:19.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-7411/configmap-test-f7fee73a-21d0-49ee-8860-51ce6db818bf
STEP: Creating a pod to test consume configMaps
Feb  1 13:30:19.770: INFO: Waiting up to 5m0s for pod "pod-configmaps-6ffb6984-43ac-42d1-bc0e-60bd6d925d71" in namespace "configmap-7411" to be "success or failure"
Feb  1 13:30:19.849: INFO: Pod "pod-configmaps-6ffb6984-43ac-42d1-bc0e-60bd6d925d71": Phase="Pending", Reason="", readiness=false. Elapsed: 79.116176ms
Feb  1 13:30:21.878: INFO: Pod "pod-configmaps-6ffb6984-43ac-42d1-bc0e-60bd6d925d71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108601672s
Feb  1 13:30:23.894: INFO: Pod "pod-configmaps-6ffb6984-43ac-42d1-bc0e-60bd6d925d71": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124270687s
Feb  1 13:30:25.907: INFO: Pod "pod-configmaps-6ffb6984-43ac-42d1-bc0e-60bd6d925d71": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136870977s
Feb  1 13:30:27.918: INFO: Pod "pod-configmaps-6ffb6984-43ac-42d1-bc0e-60bd6d925d71": Phase="Pending", Reason="", readiness=false. Elapsed: 8.147662309s
Feb  1 13:30:29.927: INFO: Pod "pod-configmaps-6ffb6984-43ac-42d1-bc0e-60bd6d925d71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.157016895s
STEP: Saw pod success
Feb  1 13:30:29.927: INFO: Pod "pod-configmaps-6ffb6984-43ac-42d1-bc0e-60bd6d925d71" satisfied condition "success or failure"
Feb  1 13:30:29.930: INFO: Trying to get logs from node iruya-node pod pod-configmaps-6ffb6984-43ac-42d1-bc0e-60bd6d925d71 container env-test: <nil>
STEP: delete the pod
Feb  1 13:30:29.976: INFO: Waiting for pod pod-configmaps-6ffb6984-43ac-42d1-bc0e-60bd6d925d71 to disappear
Feb  1 13:30:29.988: INFO: Pod pod-configmaps-6ffb6984-43ac-42d1-bc0e-60bd6d925d71 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:30:29.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7411" for this suite.
Feb  1 13:30:36.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:30:36.249: INFO: namespace configmap-7411 deletion completed in 6.195963058s

• [SLOW TEST:16.618 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
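(Editor's note: a hedged sketch of consuming a configMap as an environment variable via `valueFrom.configMapKeyRef`, which is what this spec checks. The configMap and pod names are the generated ones from the log; the key, value, variable name, image, and command are assumptions:)

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-f7fee73a-21d0-49ee-8860-51ce6db818bf
  namespace: configmap-7411
data:
  data-1: value-1   # key/value assumed
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-6ffb6984-43ac-42d1-bc0e-60bd6d925d71
  namespace: configmap-7411
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox   # assumption
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1   # variable name assumed
      valueFrom:
        configMapKeyRef:
          name: configmap-test-f7fee73a-21d0-49ee-8860-51ce6db818bf
          key: data-1       # assumed
```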
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:30:36.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  1 13:30:36.536: INFO: Waiting up to 5m0s for pod "pod-c85d42d4-2f23-434c-8eec-6214969ffe6f" in namespace "emptydir-7998" to be "success or failure"
Feb  1 13:30:36.544: INFO: Pod "pod-c85d42d4-2f23-434c-8eec-6214969ffe6f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116788ms
Feb  1 13:30:38.552: INFO: Pod "pod-c85d42d4-2f23-434c-8eec-6214969ffe6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016464043s
Feb  1 13:30:40.581: INFO: Pod "pod-c85d42d4-2f23-434c-8eec-6214969ffe6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04529875s
Feb  1 13:30:42.600: INFO: Pod "pod-c85d42d4-2f23-434c-8eec-6214969ffe6f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063745925s
Feb  1 13:30:44.650: INFO: Pod "pod-c85d42d4-2f23-434c-8eec-6214969ffe6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.114041108s
STEP: Saw pod success
Feb  1 13:30:44.650: INFO: Pod "pod-c85d42d4-2f23-434c-8eec-6214969ffe6f" satisfied condition "success or failure"
Feb  1 13:30:44.654: INFO: Trying to get logs from node iruya-node pod pod-c85d42d4-2f23-434c-8eec-6214969ffe6f container test-container: 
STEP: delete the pod
Feb  1 13:30:44.696: INFO: Waiting for pod pod-c85d42d4-2f23-434c-8eec-6214969ffe6f to disappear
Feb  1 13:30:44.701: INFO: Pod pod-c85d42d4-2f23-434c-8eec-6214969ffe6f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:30:44.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7998" for this suite.
Feb  1 13:30:50.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:30:50.846: INFO: namespace emptydir-7998 deletion completed in 6.140147028s

• [SLOW TEST:14.596 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
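The EmptyDir spec above ("emptydir 0666 on node default medium") verifies that a file created in an emptyDir volume carries mode 0666 when the pod requests it. The log does not show the mode-check itself, but the assertion it performs can be illustrated locally; the sketch below uses a temporary directory as a stand-in for the emptyDir mount and is not the test's real mount-tester binary.

```python
import os
import stat
import tempfile

def file_mode_string(path):
    """Return an ls-style mode string such as '-rw-rw-rw-' for the file at path."""
    return stat.filemode(os.stat(path).st_mode)

# Illustrative stand-in for the emptyDir mount checked by the test:
with tempfile.TemporaryDirectory() as mount:
    test_file = os.path.join(mount, "test-file")
    with open(test_file, "w") as f:
        f.write("mount-tester content")
    os.chmod(test_file, 0o666)  # the (non-root,0666,default) case under test
    print(file_mode_string(test_file))
```

On a typical Linux filesystem this prints `-rw-rw-rw-`, which is the shape of output the in-cluster mount-tester reports back to the test.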
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:30:50.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  1 13:30:50.957: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb  1 13:30:55.975: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  1 13:30:59.990: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  1 13:31:00.032: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-48,SelfLink:/apis/apps/v1/namespaces/deployment-48/deployments/test-cleanup-deployment,UID:793ffaf3-8f14-46f1-8849-b05d802d26dd,ResourceVersion:22691730,Generation:1,CreationTimestamp:2020-02-01 13:30:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Feb  1 13:31:00.054: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-48,SelfLink:/apis/apps/v1/namespaces/deployment-48/replicasets/test-cleanup-deployment-55bbcbc84c,UID:d4534679-f6bc-49dd-9f7c-011d5aadc1a7,ResourceVersion:22691732,Generation:1,CreationTimestamp:2020-02-01 13:31:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 793ffaf3-8f14-46f1-8849-b05d802d26dd 0xc002ccbb67 0xc002ccbb68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  1 13:31:00.054: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Feb  1 13:31:00.055: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-48,SelfLink:/apis/apps/v1/namespaces/deployment-48/replicasets/test-cleanup-controller,UID:6eaaa360-82e3-40e1-b0b6-e799f2c7f7d3,ResourceVersion:22691731,Generation:1,CreationTimestamp:2020-02-01 13:30:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 793ffaf3-8f14-46f1-8849-b05d802d26dd 0xc002ccba97 0xc002ccba98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  1 13:31:00.069: INFO: Pod "test-cleanup-controller-kf4wf" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-kf4wf,GenerateName:test-cleanup-controller-,Namespace:deployment-48,SelfLink:/api/v1/namespaces/deployment-48/pods/test-cleanup-controller-kf4wf,UID:49d5e833-dc24-4576-bf92-fd8b14c0f024,ResourceVersion:22691725,Generation:0,CreationTimestamp:2020-02-01 13:30:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 6eaaa360-82e3-40e1-b0b6-e799f2c7f7d3 0xc002180227 0xc002180228}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-x75mz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x75mz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-x75mz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021802a0} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc0021802c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:30:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:30:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:30:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:30:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-01 13:30:51 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-01 13:30:57 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://08a67e483a412efa024ce90fe1d05bb76bf4a861f7ad8eb3d6f25e0160a56570}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 13:31:00.069: INFO: Pod "test-cleanup-deployment-55bbcbc84c-r9xmq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-r9xmq,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-48,SelfLink:/api/v1/namespaces/deployment-48/pods/test-cleanup-deployment-55bbcbc84c-r9xmq,UID:4b4f4098-9b61-4bfe-97dc-001f01a51d69,ResourceVersion:22691734,Generation:0,CreationTimestamp:2020-02-01 13:31:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c d4534679-f6bc-49dd-9f7c-011d5aadc1a7 0xc0021803a7 0xc0021803a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-x75mz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x75mz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-x75mz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002180410} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002180430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:31:00.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-48" for this suite.
Feb  1 13:31:06.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:31:06.401: INFO: namespace deployment-48 deletion completed in 6.24343345s

• [SLOW TEST:15.552 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
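In the Deployment spec above, note `RevisionHistoryLimit:*0` in the dumped DeploymentSpec: with a history limit of 0, the Deployment controller is expected to delete every old ReplicaSet (here, `test-cleanup-controller`) once the new one (`test-cleanup-deployment-55bbcbc84c`) takes over, which is exactly what "Waiting for deployment test-cleanup-deployment history to be cleaned up" checks. The sketch below is a loose, hypothetical model of that pruning decision, not the controller's actual code; revision numbers are assumed to be the integer `deployment.kubernetes.io/revision` values, with an adopted ReplicaSet treated as revision 0.

```python
def old_replicasets_to_delete(replica_sets, revision_history_limit):
    """Return names of old ReplicaSets that exceed the history limit.

    replica_sets: iterable of (name, revision) pairs.
    The highest-revision ReplicaSet is the current one and is always kept;
    of the remainder, the oldest are deleted until only
    `revision_history_limit` old ReplicaSets survive.
    """
    ordered = sorted(replica_sets, key=lambda rs: rs[1])  # oldest revision first
    old = ordered[:-1]  # everything except the current (highest-revision) RS
    excess = len(old) - revision_history_limit
    return [name for name, _ in old[:max(excess, 0)]]
```

With the two ReplicaSets from this test and a limit of 0, the model marks `test-cleanup-controller` for deletion, matching the behavior the conformance test asserts.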
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:31:06.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-9c77e42b-e790-4bd2-990e-6648c0f474bf
STEP: Creating a pod to test consume secrets
Feb  1 13:31:06.556: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9851cf4e-4a36-4827-9e46-9914df9c695a" in namespace "projected-7413" to be "success or failure"
Feb  1 13:31:06.581: INFO: Pod "pod-projected-secrets-9851cf4e-4a36-4827-9e46-9914df9c695a": Phase="Pending", Reason="", readiness=false. Elapsed: 25.200445ms
Feb  1 13:31:08.619: INFO: Pod "pod-projected-secrets-9851cf4e-4a36-4827-9e46-9914df9c695a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063082343s
Feb  1 13:31:10.631: INFO: Pod "pod-projected-secrets-9851cf4e-4a36-4827-9e46-9914df9c695a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075228762s
Feb  1 13:31:12.640: INFO: Pod "pod-projected-secrets-9851cf4e-4a36-4827-9e46-9914df9c695a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083912662s
Feb  1 13:31:14.653: INFO: Pod "pod-projected-secrets-9851cf4e-4a36-4827-9e46-9914df9c695a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09718793s
Feb  1 13:31:16.666: INFO: Pod "pod-projected-secrets-9851cf4e-4a36-4827-9e46-9914df9c695a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.109514596s
Feb  1 13:31:18.681: INFO: Pod "pod-projected-secrets-9851cf4e-4a36-4827-9e46-9914df9c695a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.12535439s
Feb  1 13:31:20.700: INFO: Pod "pod-projected-secrets-9851cf4e-4a36-4827-9e46-9914df9c695a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.144310148s
STEP: Saw pod success
Feb  1 13:31:20.701: INFO: Pod "pod-projected-secrets-9851cf4e-4a36-4827-9e46-9914df9c695a" satisfied condition "success or failure"
Feb  1 13:31:20.704: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-9851cf4e-4a36-4827-9e46-9914df9c695a container projected-secret-volume-test: 
STEP: delete the pod
Feb  1 13:31:20.826: INFO: Waiting for pod pod-projected-secrets-9851cf4e-4a36-4827-9e46-9914df9c695a to disappear
Feb  1 13:31:20.835: INFO: Pod pod-projected-secrets-9851cf4e-4a36-4827-9e46-9914df9c695a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:31:20.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7413" for this suite.
Feb  1 13:31:26.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:31:26.998: INFO: namespace projected-7413 deletion completed in 6.156593989s

• [SLOW TEST:20.596 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:31:26.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  1 13:31:27.106: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cb2e5551-2d87-4d0f-9058-3ccd07aed9ea" in namespace "downward-api-199" to be "success or failure"
Feb  1 13:31:27.121: INFO: Pod "downwardapi-volume-cb2e5551-2d87-4d0f-9058-3ccd07aed9ea": Phase="Pending", Reason="", readiness=false. Elapsed: 14.530796ms
Feb  1 13:31:29.134: INFO: Pod "downwardapi-volume-cb2e5551-2d87-4d0f-9058-3ccd07aed9ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027724732s
Feb  1 13:31:31.148: INFO: Pod "downwardapi-volume-cb2e5551-2d87-4d0f-9058-3ccd07aed9ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041984415s
Feb  1 13:31:33.166: INFO: Pod "downwardapi-volume-cb2e5551-2d87-4d0f-9058-3ccd07aed9ea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059797798s
Feb  1 13:31:35.174: INFO: Pod "downwardapi-volume-cb2e5551-2d87-4d0f-9058-3ccd07aed9ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068274945s
STEP: Saw pod success
Feb  1 13:31:35.175: INFO: Pod "downwardapi-volume-cb2e5551-2d87-4d0f-9058-3ccd07aed9ea" satisfied condition "success or failure"
Feb  1 13:31:35.180: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-cb2e5551-2d87-4d0f-9058-3ccd07aed9ea container client-container: 
STEP: delete the pod
Feb  1 13:31:35.247: INFO: Waiting for pod downwardapi-volume-cb2e5551-2d87-4d0f-9058-3ccd07aed9ea to disappear
Feb  1 13:31:35.253: INFO: Pod downwardapi-volume-cb2e5551-2d87-4d0f-9058-3ccd07aed9ea no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:31:35.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-199" for this suite.
Feb  1 13:31:41.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:31:41.419: INFO: namespace downward-api-199 deletion completed in 6.146787145s

• [SLOW TEST:14.420 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:31:41.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-affe71a3-7fdb-42a1-8f0d-0c614a8fc056
STEP: Creating a pod to test consume secrets
Feb  1 13:31:41.571: INFO: Waiting up to 5m0s for pod "pod-secrets-d3a6561e-88d5-4fb7-a251-dfc73ad0a0d4" in namespace "secrets-2220" to be "success or failure"
Feb  1 13:31:41.578: INFO: Pod "pod-secrets-d3a6561e-88d5-4fb7-a251-dfc73ad0a0d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.95294ms
Feb  1 13:31:43.587: INFO: Pod "pod-secrets-d3a6561e-88d5-4fb7-a251-dfc73ad0a0d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015706154s
Feb  1 13:31:45.595: INFO: Pod "pod-secrets-d3a6561e-88d5-4fb7-a251-dfc73ad0a0d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023955732s
Feb  1 13:31:47.613: INFO: Pod "pod-secrets-d3a6561e-88d5-4fb7-a251-dfc73ad0a0d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041779677s
Feb  1 13:31:49.620: INFO: Pod "pod-secrets-d3a6561e-88d5-4fb7-a251-dfc73ad0a0d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049388142s
STEP: Saw pod success
Feb  1 13:31:49.620: INFO: Pod "pod-secrets-d3a6561e-88d5-4fb7-a251-dfc73ad0a0d4" satisfied condition "success or failure"
Feb  1 13:31:49.623: INFO: Trying to get logs from node iruya-node pod pod-secrets-d3a6561e-88d5-4fb7-a251-dfc73ad0a0d4 container secret-volume-test: 
STEP: delete the pod
Feb  1 13:31:49.669: INFO: Waiting for pod pod-secrets-d3a6561e-88d5-4fb7-a251-dfc73ad0a0d4 to disappear
Feb  1 13:31:49.789: INFO: Pod pod-secrets-d3a6561e-88d5-4fb7-a251-dfc73ad0a0d4 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:31:49.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2220" for this suite.
Feb  1 13:31:55.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:31:55.989: INFO: namespace secrets-2220 deletion completed in 6.193666431s

• [SLOW TEST:14.569 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:31:55.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-71477482-b49d-4693-9e1a-590e47034622
STEP: Creating a pod to test consume configMaps
Feb  1 13:31:56.141: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8fef7765-0472-4a2b-88e6-c8433fe7a20d" in namespace "projected-2775" to be "success or failure"
Feb  1 13:31:56.147: INFO: Pod "pod-projected-configmaps-8fef7765-0472-4a2b-88e6-c8433fe7a20d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.928925ms
Feb  1 13:31:58.161: INFO: Pod "pod-projected-configmaps-8fef7765-0472-4a2b-88e6-c8433fe7a20d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019524957s
Feb  1 13:32:00.168: INFO: Pod "pod-projected-configmaps-8fef7765-0472-4a2b-88e6-c8433fe7a20d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026801018s
Feb  1 13:32:02.183: INFO: Pod "pod-projected-configmaps-8fef7765-0472-4a2b-88e6-c8433fe7a20d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042109567s
Feb  1 13:32:04.203: INFO: Pod "pod-projected-configmaps-8fef7765-0472-4a2b-88e6-c8433fe7a20d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062166773s
STEP: Saw pod success
Feb  1 13:32:04.204: INFO: Pod "pod-projected-configmaps-8fef7765-0472-4a2b-88e6-c8433fe7a20d" satisfied condition "success or failure"
Feb  1 13:32:04.214: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-8fef7765-0472-4a2b-88e6-c8433fe7a20d container projected-configmap-volume-test: 
STEP: delete the pod
Feb  1 13:32:04.346: INFO: Waiting for pod pod-projected-configmaps-8fef7765-0472-4a2b-88e6-c8433fe7a20d to disappear
Feb  1 13:32:04.355: INFO: Pod pod-projected-configmaps-8fef7765-0472-4a2b-88e6-c8433fe7a20d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:32:04.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2775" for this suite.
Feb  1 13:32:10.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:32:10.617: INFO: namespace projected-2775 deletion completed in 6.255065415s

• [SLOW TEST:14.626 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:32:10.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  1 13:32:19.815: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:32:19.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9937" for this suite.
Feb  1 13:32:25.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:32:25.978: INFO: namespace container-runtime-9937 deletion completed in 6.120839538s

• [SLOW TEST:15.358 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:32:25.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  1 13:32:26.148: INFO: Number of nodes with available pods: 0
Feb  1 13:32:26.148: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:27.164: INFO: Number of nodes with available pods: 0
Feb  1 13:32:27.164: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:28.216: INFO: Number of nodes with available pods: 0
Feb  1 13:32:28.216: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:29.166: INFO: Number of nodes with available pods: 0
Feb  1 13:32:29.166: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:30.162: INFO: Number of nodes with available pods: 0
Feb  1 13:32:30.162: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:31.171: INFO: Number of nodes with available pods: 0
Feb  1 13:32:31.172: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:34.387: INFO: Number of nodes with available pods: 0
Feb  1 13:32:34.387: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:35.564: INFO: Number of nodes with available pods: 0
Feb  1 13:32:35.564: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:36.164: INFO: Number of nodes with available pods: 0
Feb  1 13:32:36.164: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:37.171: INFO: Number of nodes with available pods: 2
Feb  1 13:32:37.171: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb  1 13:32:37.206: INFO: Number of nodes with available pods: 1
Feb  1 13:32:37.206: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:38.227: INFO: Number of nodes with available pods: 1
Feb  1 13:32:38.227: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:39.223: INFO: Number of nodes with available pods: 1
Feb  1 13:32:39.223: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:40.217: INFO: Number of nodes with available pods: 1
Feb  1 13:32:40.217: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:41.219: INFO: Number of nodes with available pods: 1
Feb  1 13:32:41.219: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:42.217: INFO: Number of nodes with available pods: 1
Feb  1 13:32:42.217: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:43.225: INFO: Number of nodes with available pods: 1
Feb  1 13:32:43.225: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:44.220: INFO: Number of nodes with available pods: 1
Feb  1 13:32:44.220: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:45.234: INFO: Number of nodes with available pods: 1
Feb  1 13:32:45.234: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:46.221: INFO: Number of nodes with available pods: 1
Feb  1 13:32:46.221: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:47.224: INFO: Number of nodes with available pods: 1
Feb  1 13:32:47.224: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:48.223: INFO: Number of nodes with available pods: 1
Feb  1 13:32:48.223: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:49.220: INFO: Number of nodes with available pods: 1
Feb  1 13:32:49.220: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:50.222: INFO: Number of nodes with available pods: 1
Feb  1 13:32:50.222: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:51.221: INFO: Number of nodes with available pods: 1
Feb  1 13:32:51.221: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:52.229: INFO: Number of nodes with available pods: 1
Feb  1 13:32:52.229: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:32:53.222: INFO: Number of nodes with available pods: 2
Feb  1 13:32:53.222: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2576, will wait for the garbage collector to delete the pods
Feb  1 13:32:53.294: INFO: Deleting DaemonSet.extensions daemon-set took: 14.044734ms
Feb  1 13:32:53.594: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.577714ms
Feb  1 13:33:07.971: INFO: Number of nodes with available pods: 0
Feb  1 13:33:07.971: INFO: Number of running nodes: 0, number of available pods: 0
Feb  1 13:33:07.982: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2576/daemonsets","resourceVersion":"22692110"},"items":null}

Feb  1 13:33:07.986: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2576/pods","resourceVersion":"22692110"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:33:07.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2576" for this suite.
Feb  1 13:33:14.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:33:14.163: INFO: namespace daemonsets-2576 deletion completed in 6.160253219s

• [SLOW TEST:48.184 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:33:14.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  1 13:33:14.269: INFO: Waiting up to 5m0s for pod "pod-02513e6c-ca96-43cb-bc57-20375588853d" in namespace "emptydir-7513" to be "success or failure"
Feb  1 13:33:14.276: INFO: Pod "pod-02513e6c-ca96-43cb-bc57-20375588853d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.997506ms
Feb  1 13:33:16.290: INFO: Pod "pod-02513e6c-ca96-43cb-bc57-20375588853d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020782562s
Feb  1 13:33:18.301: INFO: Pod "pod-02513e6c-ca96-43cb-bc57-20375588853d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031744334s
Feb  1 13:33:20.309: INFO: Pod "pod-02513e6c-ca96-43cb-bc57-20375588853d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039840816s
Feb  1 13:33:22.345: INFO: Pod "pod-02513e6c-ca96-43cb-bc57-20375588853d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075668085s
STEP: Saw pod success
Feb  1 13:33:22.345: INFO: Pod "pod-02513e6c-ca96-43cb-bc57-20375588853d" satisfied condition "success or failure"
Feb  1 13:33:22.349: INFO: Trying to get logs from node iruya-node pod pod-02513e6c-ca96-43cb-bc57-20375588853d container test-container: 
STEP: delete the pod
Feb  1 13:33:22.437: INFO: Waiting for pod pod-02513e6c-ca96-43cb-bc57-20375588853d to disappear
Feb  1 13:33:22.443: INFO: Pod pod-02513e6c-ca96-43cb-bc57-20375588853d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:33:22.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7513" for this suite.
Feb  1 13:33:28.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:33:28.657: INFO: namespace emptydir-7513 deletion completed in 6.167520566s

• [SLOW TEST:14.494 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:33:28.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Feb  1 13:33:38.891: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb  1 13:33:49.063: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:33:49.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8649" for this suite.
Feb  1 13:33:55.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:33:55.228: INFO: namespace pods-8649 deletion completed in 6.151731001s

• [SLOW TEST:26.570 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:33:55.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb  1 13:34:04.059: INFO: Successfully updated pod "labelsupdate458c103b-ced4-4abe-a6f5-4251a7071afd"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:34:08.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-669" for this suite.
Feb  1 13:34:30.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:34:30.393: INFO: namespace downward-api-669 deletion completed in 22.163491897s

• [SLOW TEST:35.164 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:34:30.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  1 13:34:30.599: INFO: Create a RollingUpdate DaemonSet
Feb  1 13:34:30.671: INFO: Check that daemon pods launch on every node of the cluster
Feb  1 13:34:30.689: INFO: Number of nodes with available pods: 0
Feb  1 13:34:30.689: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:34:31.709: INFO: Number of nodes with available pods: 0
Feb  1 13:34:31.709: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:34:32.710: INFO: Number of nodes with available pods: 0
Feb  1 13:34:32.710: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:34:33.845: INFO: Number of nodes with available pods: 0
Feb  1 13:34:33.845: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:34:34.748: INFO: Number of nodes with available pods: 0
Feb  1 13:34:34.748: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:34:35.706: INFO: Number of nodes with available pods: 0
Feb  1 13:34:35.706: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:34:37.518: INFO: Number of nodes with available pods: 0
Feb  1 13:34:37.518: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:34:37.904: INFO: Number of nodes with available pods: 0
Feb  1 13:34:37.904: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:34:38.700: INFO: Number of nodes with available pods: 0
Feb  1 13:34:38.700: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:34:39.785: INFO: Number of nodes with available pods: 0
Feb  1 13:34:39.785: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:34:40.709: INFO: Number of nodes with available pods: 0
Feb  1 13:34:40.709: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:34:41.707: INFO: Number of nodes with available pods: 2
Feb  1 13:34:41.707: INFO: Number of running nodes: 2, number of available pods: 2
Feb  1 13:34:41.707: INFO: Update the DaemonSet to trigger a rollout
Feb  1 13:34:41.721: INFO: Updating DaemonSet daemon-set
Feb  1 13:34:48.750: INFO: Roll back the DaemonSet before rollout is complete
Feb  1 13:34:48.771: INFO: Updating DaemonSet daemon-set
Feb  1 13:34:48.771: INFO: Make sure DaemonSet rollback is complete
Feb  1 13:34:49.221: INFO: Wrong image for pod: daemon-set-772nb. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  1 13:34:49.222: INFO: Pod daemon-set-772nb is not available
Feb  1 13:34:50.605: INFO: Wrong image for pod: daemon-set-772nb. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  1 13:34:50.606: INFO: Pod daemon-set-772nb is not available
Feb  1 13:34:51.595: INFO: Wrong image for pod: daemon-set-772nb. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  1 13:34:51.595: INFO: Pod daemon-set-772nb is not available
Feb  1 13:34:52.882: INFO: Wrong image for pod: daemon-set-772nb. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  1 13:34:52.882: INFO: Pod daemon-set-772nb is not available
Feb  1 13:34:53.595: INFO: Pod daemon-set-5vslm is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4666, will wait for the garbage collector to delete the pods
Feb  1 13:34:53.683: INFO: Deleting DaemonSet.extensions daemon-set took: 20.304128ms
Feb  1 13:34:54.784: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.101360425s
Feb  1 13:35:01.616: INFO: Number of nodes with available pods: 0
Feb  1 13:35:01.616: INFO: Number of running nodes: 0, number of available pods: 0
Feb  1 13:35:01.623: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4666/daemonsets","resourceVersion":"22692429"},"items":null}

Feb  1 13:35:01.626: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4666/pods","resourceVersion":"22692429"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:35:01.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4666" for this suite.
Feb  1 13:35:07.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:35:07.823: INFO: namespace daemonsets-4666 deletion completed in 6.175928097s

• [SLOW TEST:37.429 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:35:07.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  1 13:35:07.982: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb  1 13:35:08.077: INFO: Number of nodes with available pods: 0
Feb  1 13:35:08.077: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:35:10.091: INFO: Number of nodes with available pods: 0
Feb  1 13:35:10.091: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:35:11.416: INFO: Number of nodes with available pods: 0
Feb  1 13:35:11.416: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:35:12.091: INFO: Number of nodes with available pods: 0
Feb  1 13:35:12.091: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:35:13.091: INFO: Number of nodes with available pods: 0
Feb  1 13:35:13.091: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:35:16.142: INFO: Number of nodes with available pods: 0
Feb  1 13:35:16.143: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:35:17.101: INFO: Number of nodes with available pods: 0
Feb  1 13:35:17.101: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:35:18.094: INFO: Number of nodes with available pods: 0
Feb  1 13:35:18.094: INFO: Node iruya-node is running more than one daemon pod
Feb  1 13:35:19.124: INFO: Number of nodes with available pods: 2
Feb  1 13:35:19.124: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb  1 13:35:19.185: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:19.185: INFO: Wrong image for pod: daemon-set-wdmwr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:20.210: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:20.210: INFO: Wrong image for pod: daemon-set-wdmwr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:21.220: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:21.220: INFO: Wrong image for pod: daemon-set-wdmwr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:22.210: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:22.210: INFO: Wrong image for pod: daemon-set-wdmwr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:23.211: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:23.211: INFO: Wrong image for pod: daemon-set-wdmwr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:24.211: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:24.211: INFO: Wrong image for pod: daemon-set-wdmwr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:25.214: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:25.214: INFO: Wrong image for pod: daemon-set-wdmwr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:26.215: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:26.216: INFO: Wrong image for pod: daemon-set-wdmwr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:26.216: INFO: Pod daemon-set-wdmwr is not available
Feb  1 13:35:27.217: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:27.217: INFO: Wrong image for pod: daemon-set-wdmwr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:27.217: INFO: Pod daemon-set-wdmwr is not available
Feb  1 13:35:28.215: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:28.215: INFO: Wrong image for pod: daemon-set-wdmwr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:28.215: INFO: Pod daemon-set-wdmwr is not available
Feb  1 13:35:29.210: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:29.211: INFO: Wrong image for pod: daemon-set-wdmwr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:29.211: INFO: Pod daemon-set-wdmwr is not available
Feb  1 13:35:30.213: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:30.213: INFO: Wrong image for pod: daemon-set-wdmwr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:30.213: INFO: Pod daemon-set-wdmwr is not available
Feb  1 13:35:31.217: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:31.218: INFO: Wrong image for pod: daemon-set-wdmwr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:31.218: INFO: Pod daemon-set-wdmwr is not available
Feb  1 13:35:32.216: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:32.216: INFO: Wrong image for pod: daemon-set-wdmwr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:32.216: INFO: Pod daemon-set-wdmwr is not available
Feb  1 13:35:33.213: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:33.213: INFO: Wrong image for pod: daemon-set-wdmwr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:33.213: INFO: Pod daemon-set-wdmwr is not available
Feb  1 13:35:34.209: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:34.209: INFO: Wrong image for pod: daemon-set-wdmwr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:34.209: INFO: Pod daemon-set-wdmwr is not available
Feb  1 13:35:35.212: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:35.212: INFO: Wrong image for pod: daemon-set-wdmwr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:35.212: INFO: Pod daemon-set-wdmwr is not available
Feb  1 13:35:36.210: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:36.210: INFO: Wrong image for pod: daemon-set-wdmwr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:36.210: INFO: Pod daemon-set-wdmwr is not available
Feb  1 13:35:37.210: INFO: Pod daemon-set-8r5tz is not available
Feb  1 13:35:37.210: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:38.215: INFO: Pod daemon-set-8r5tz is not available
Feb  1 13:35:38.215: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:39.208: INFO: Pod daemon-set-8r5tz is not available
Feb  1 13:35:39.208: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:40.228: INFO: Pod daemon-set-8r5tz is not available
Feb  1 13:35:40.229: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:41.213: INFO: Pod daemon-set-8r5tz is not available
Feb  1 13:35:41.213: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:42.211: INFO: Pod daemon-set-8r5tz is not available
Feb  1 13:35:42.211: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:43.983: INFO: Pod daemon-set-8r5tz is not available
Feb  1 13:35:43.983: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:44.209: INFO: Pod daemon-set-8r5tz is not available
Feb  1 13:35:44.209: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:45.220: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:46.211: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:47.438: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:48.217: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:49.215: INFO: Wrong image for pod: daemon-set-hh7fz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 13:35:49.215: INFO: Pod daemon-set-hh7fz is not available
Feb  1 13:35:50.215: INFO: Pod daemon-set-569f2 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb  1 13:35:50.238: INFO: Number of nodes with available pods: 1
Feb  1 13:35:50.238: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  1 13:35:51.261: INFO: Number of nodes with available pods: 1
Feb  1 13:35:51.261: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  1 13:35:52.256: INFO: Number of nodes with available pods: 1
Feb  1 13:35:52.256: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  1 13:35:53.270: INFO: Number of nodes with available pods: 1
Feb  1 13:35:53.270: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  1 13:35:54.421: INFO: Number of nodes with available pods: 1
Feb  1 13:35:54.421: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  1 13:35:55.512: INFO: Number of nodes with available pods: 1
Feb  1 13:35:55.512: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  1 13:35:56.414: INFO: Number of nodes with available pods: 1
Feb  1 13:35:56.414: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  1 13:35:57.255: INFO: Number of nodes with available pods: 1
Feb  1 13:35:57.255: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  1 13:35:58.256: INFO: Number of nodes with available pods: 2
Feb  1 13:35:58.256: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-826, will wait for the garbage collector to delete the pods
Feb  1 13:35:58.375: INFO: Deleting DaemonSet.extensions daemon-set took: 36.055868ms
Feb  1 13:35:58.677: INFO: Terminating DaemonSet.extensions daemon-set pods took: 302.337407ms
Feb  1 13:36:07.914: INFO: Number of nodes with available pods: 0
Feb  1 13:36:07.914: INFO: Number of running nodes: 0, number of available pods: 0
Feb  1 13:36:07.922: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-826/daemonsets","resourceVersion":"22692621"},"items":null}

Feb  1 13:36:07.926: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-826/pods","resourceVersion":"22692621"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:36:07.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-826" for this suite.
Feb  1 13:36:14.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:36:14.131: INFO: namespace daemonsets-826 deletion completed in 6.159629157s

• [SLOW TEST:66.308 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:36:14.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-281332ed-c15c-4de3-b85b-917d7362866c
STEP: Creating a pod to test consume secrets
Feb  1 13:36:14.272: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9e665462-d558-46aa-a26c-b98db0dc3789" in namespace "projected-4819" to be "success or failure"
Feb  1 13:36:14.439: INFO: Pod "pod-projected-secrets-9e665462-d558-46aa-a26c-b98db0dc3789": Phase="Pending", Reason="", readiness=false. Elapsed: 166.855069ms
Feb  1 13:36:16.456: INFO: Pod "pod-projected-secrets-9e665462-d558-46aa-a26c-b98db0dc3789": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184017313s
Feb  1 13:36:18.467: INFO: Pod "pod-projected-secrets-9e665462-d558-46aa-a26c-b98db0dc3789": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195257428s
Feb  1 13:36:20.480: INFO: Pod "pod-projected-secrets-9e665462-d558-46aa-a26c-b98db0dc3789": Phase="Pending", Reason="", readiness=false. Elapsed: 6.207809031s
Feb  1 13:36:22.493: INFO: Pod "pod-projected-secrets-9e665462-d558-46aa-a26c-b98db0dc3789": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.220922791s
STEP: Saw pod success
Feb  1 13:36:22.493: INFO: Pod "pod-projected-secrets-9e665462-d558-46aa-a26c-b98db0dc3789" satisfied condition "success or failure"
Feb  1 13:36:22.499: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-9e665462-d558-46aa-a26c-b98db0dc3789 container projected-secret-volume-test: 
STEP: delete the pod
Feb  1 13:36:22.598: INFO: Waiting for pod pod-projected-secrets-9e665462-d558-46aa-a26c-b98db0dc3789 to disappear
Feb  1 13:36:22.602: INFO: Pod pod-projected-secrets-9e665462-d558-46aa-a26c-b98db0dc3789 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:36:22.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4819" for this suite.
Feb  1 13:36:28.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:36:28.772: INFO: namespace projected-4819 deletion completed in 6.164512052s

• [SLOW TEST:14.640 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:36:28.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Feb  1 13:36:28.873: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb  1 13:36:28.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4765'
Feb  1 13:36:29.513: INFO: stderr: ""
Feb  1 13:36:29.513: INFO: stdout: "service/redis-slave created\n"
Feb  1 13:36:29.514: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb  1 13:36:29.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4765'
Feb  1 13:36:29.999: INFO: stderr: ""
Feb  1 13:36:29.999: INFO: stdout: "service/redis-master created\n"
Feb  1 13:36:30.000: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb  1 13:36:30.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4765'
Feb  1 13:36:30.699: INFO: stderr: ""
Feb  1 13:36:30.699: INFO: stdout: "service/frontend created\n"
Feb  1 13:36:30.700: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb  1 13:36:30.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4765'
Feb  1 13:36:31.168: INFO: stderr: ""
Feb  1 13:36:31.168: INFO: stdout: "deployment.apps/frontend created\n"
Feb  1 13:36:31.169: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb  1 13:36:31.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4765'
Feb  1 13:36:31.616: INFO: stderr: ""
Feb  1 13:36:31.617: INFO: stdout: "deployment.apps/redis-master created\n"
Feb  1 13:36:31.618: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb  1 13:36:31.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4765'
Feb  1 13:36:33.959: INFO: stderr: ""
Feb  1 13:36:33.959: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Feb  1 13:36:33.959: INFO: Waiting for all frontend pods to be Running.
Feb  1 13:36:54.014: INFO: Waiting for frontend to serve content.
Feb  1 13:36:54.291: INFO: Trying to add a new entry to the guestbook.
Feb  1 13:36:54.358: INFO: Failed to get response from guestbook. err: , response: 
Fatal error: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection refused [tcp://redis-master:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:155 Stack trace: #0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(128): Predis\Connection\AbstractConnection->onConnectionError('Connection refu...', 111) #1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(178): Predis\Connection\StreamConnection->createStreamSocket(Object(Predis\Connection\Parameters), 'tcp://redis-mas...', 4) #2 /usr/local/lib/php/Predis/Connection/StreamConnection.php(100): Predis\Connection\StreamConnection->tcpStreamInitializer(Object(Predis\Connection\Parameters)) #3 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(81): Predis\Connection\StreamConnection->createResource() #4 /usr/local/lib/php/Predis/Connection/StreamConnection.php(258): Predis\Connection\AbstractConnection->connect() #5 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(180): Predis\Connection\Strea in /usr/local/lib/php/Predis/Connection/AbstractConnection.php on line 155
Feb  1 13:36:59.411: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb  1 13:36:59.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4765'
Feb  1 13:36:59.702: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  1 13:36:59.702: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb  1 13:36:59.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4765'
Feb  1 13:36:59.958: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  1 13:36:59.958: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  1 13:36:59.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4765'
Feb  1 13:37:00.181: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  1 13:37:00.182: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  1 13:37:00.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4765'
Feb  1 13:37:00.335: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  1 13:37:00.335: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  1 13:37:00.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4765'
Feb  1 13:37:00.433: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  1 13:37:00.433: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  1 13:37:00.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4765'
Feb  1 13:37:00.591: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  1 13:37:00.591: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:37:00.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4765" for this suite.
Feb  1 13:37:40.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:37:40.932: INFO: namespace kubectl-4765 deletion completed in 40.193182371s

• [SLOW TEST:72.159 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:37:40.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Feb  1 13:37:41.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb  1 13:37:41.144: INFO: stderr: ""
Feb  1 13:37:41.144: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:37:41.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3078" for this suite.
Feb  1 13:37:47.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:37:47.336: INFO: namespace kubectl-3078 deletion completed in 6.184915888s

• [SLOW TEST:6.404 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-network] Services 
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:37:47.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:37:47.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5521" for this suite.
Feb  1 13:37:53.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:37:53.651: INFO: namespace services-5521 deletion completed in 6.180729984s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.314 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:37:53.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-fb7a21a1-1ee1-444a-b5d1-4c9f1403aeac
STEP: Creating a pod to test consume configMaps
Feb  1 13:37:53.896: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a3459e6e-5911-468d-8c11-19164daa41b1" in namespace "projected-4783" to be "success or failure"
Feb  1 13:37:53.927: INFO: Pod "pod-projected-configmaps-a3459e6e-5911-468d-8c11-19164daa41b1": Phase="Pending", Reason="", readiness=false. Elapsed: 30.246814ms
Feb  1 13:37:55.939: INFO: Pod "pod-projected-configmaps-a3459e6e-5911-468d-8c11-19164daa41b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041827828s
Feb  1 13:37:58.044: INFO: Pod "pod-projected-configmaps-a3459e6e-5911-468d-8c11-19164daa41b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146734621s
Feb  1 13:38:00.054: INFO: Pod "pod-projected-configmaps-a3459e6e-5911-468d-8c11-19164daa41b1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.157410159s
Feb  1 13:38:02.223: INFO: Pod "pod-projected-configmaps-a3459e6e-5911-468d-8c11-19164daa41b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.325778005s
STEP: Saw pod success
Feb  1 13:38:02.223: INFO: Pod "pod-projected-configmaps-a3459e6e-5911-468d-8c11-19164daa41b1" satisfied condition "success or failure"
Feb  1 13:38:02.228: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-a3459e6e-5911-468d-8c11-19164daa41b1 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  1 13:38:02.387: INFO: Waiting for pod pod-projected-configmaps-a3459e6e-5911-468d-8c11-19164daa41b1 to disappear
Feb  1 13:38:02.395: INFO: Pod pod-projected-configmaps-a3459e6e-5911-468d-8c11-19164daa41b1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:38:02.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4783" for this suite.
Feb  1 13:38:08.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:38:08.627: INFO: namespace projected-4783 deletion completed in 6.225151713s

• [SLOW TEST:14.974 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:38:08.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Feb  1 13:38:08.775: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:38:08.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-919" for this suite.
Feb  1 13:38:14.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:38:15.054: INFO: namespace kubectl-919 deletion completed in 6.151076672s

• [SLOW TEST:6.425 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0 [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 13:38:15.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-ce3d2abe-52be-418c-835a-01a2744fb58d
STEP: Creating a pod to test consume secrets
Feb  1 13:38:15.191: INFO: Waiting up to 5m0s for pod "pod-secrets-fb30e1b3-5c80-4aae-bf14-5886666b2ed2" in namespace "secrets-3118" to be "success or failure"
Feb  1 13:38:15.197: INFO: Pod "pod-secrets-fb30e1b3-5c80-4aae-bf14-5886666b2ed2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.826937ms
Feb  1 13:38:17.206: INFO: Pod "pod-secrets-fb30e1b3-5c80-4aae-bf14-5886666b2ed2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015296281s
Feb  1 13:38:19.217: INFO: Pod "pod-secrets-fb30e1b3-5c80-4aae-bf14-5886666b2ed2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026624731s
Feb  1 13:38:21.229: INFO: Pod "pod-secrets-fb30e1b3-5c80-4aae-bf14-5886666b2ed2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038145143s
Feb  1 13:38:23.238: INFO: Pod "pod-secrets-fb30e1b3-5c80-4aae-bf14-5886666b2ed2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047587929s
Feb  1 13:38:25.249: INFO: Pod "pod-secrets-fb30e1b3-5c80-4aae-bf14-5886666b2ed2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.058046098s
STEP: Saw pod success
Feb  1 13:38:25.249: INFO: Pod "pod-secrets-fb30e1b3-5c80-4aae-bf14-5886666b2ed2" satisfied condition "success or failure"
Feb  1 13:38:25.253: INFO: Trying to get logs from node iruya-node pod pod-secrets-fb30e1b3-5c80-4aae-bf14-5886666b2ed2 container secret-volume-test: 
STEP: delete the pod
Feb  1 13:38:25.403: INFO: Waiting for pod pod-secrets-fb30e1b3-5c80-4aae-bf14-5886666b2ed2 to disappear
Feb  1 13:38:25.410: INFO: Pod pod-secrets-fb30e1b3-5c80-4aae-bf14-5886666b2ed2 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 13:38:25.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3118" for this suite.
Feb 1 13:38:31.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 13:38:31.611: INFO: namespace secrets-3118 deletion completed in 6.190288899s
• [SLOW TEST:16.556 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 13:38:31.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Feb 1 13:38:31.711: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Feb 1 13:38:32.380: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb 1 13:38:34.652: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716161112, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716161112, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716161112, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716161112, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 1 13:38:36.665: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716161112, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716161112, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716161112, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716161112, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 1 13:38:38.666: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716161112, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716161112, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716161112, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716161112, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 1 13:38:40.673: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716161112, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716161112, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716161112, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716161112, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 1 13:38:42.660: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716161112, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716161112, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716161112, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716161112, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 1 13:38:49.434: INFO: Waited 4.753850119s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 13:38:50.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4580" for this suite.
Feb 1 13:38:56.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 13:38:57.011: INFO: namespace aggregator-4580 deletion completed in 6.24767658s
• [SLOW TEST:25.399 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 13:38:57.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 1 13:38:57.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 13:39:05.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7738" for this suite.
Feb 1 13:40:07.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 13:40:07.762: INFO: namespace pods-7738 deletion completed in 1m2.203839848s
• [SLOW TEST:70.750 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 13:40:07.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 1 13:40:07.925: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4713a1ee-a1f9-4af0-8c9e-803a1c8ce597" in namespace "downward-api-2614" to be "success or failure"
Feb 1 13:40:07.936: INFO: Pod "downwardapi-volume-4713a1ee-a1f9-4af0-8c9e-803a1c8ce597": Phase="Pending", Reason="", readiness=false. Elapsed: 11.152969ms
Feb 1 13:40:09.946: INFO: Pod "downwardapi-volume-4713a1ee-a1f9-4af0-8c9e-803a1c8ce597": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020746648s
Feb 1 13:40:11.954: INFO: Pod "downwardapi-volume-4713a1ee-a1f9-4af0-8c9e-803a1c8ce597": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029077657s
Feb 1 13:40:14.061: INFO: Pod "downwardapi-volume-4713a1ee-a1f9-4af0-8c9e-803a1c8ce597": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136469954s
Feb 1 13:40:16.071: INFO: Pod "downwardapi-volume-4713a1ee-a1f9-4af0-8c9e-803a1c8ce597": Phase="Pending", Reason="", readiness=false. Elapsed: 8.146625116s
Feb 1 13:40:18.079: INFO: Pod "downwardapi-volume-4713a1ee-a1f9-4af0-8c9e-803a1c8ce597": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.154408616s
STEP: Saw pod success
Feb 1 13:40:18.079: INFO: Pod "downwardapi-volume-4713a1ee-a1f9-4af0-8c9e-803a1c8ce597" satisfied condition "success or failure"
Feb 1 13:40:18.082: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4713a1ee-a1f9-4af0-8c9e-803a1c8ce597 container client-container:
STEP: delete the pod
Feb 1 13:40:18.388: INFO: Waiting for pod downwardapi-volume-4713a1ee-a1f9-4af0-8c9e-803a1c8ce597 to disappear
Feb 1 13:40:18.397: INFO: Pod downwardapi-volume-4713a1ee-a1f9-4af0-8c9e-803a1c8ce597 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 13:40:18.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2614" for this suite.
Feb 1 13:40:24.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:40:24.655: INFO: namespace downward-api-2614 deletion completed in 6.250367454s • [SLOW TEST:16.891 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:40:24.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Feb 1 13:40:24.824: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2667,SelfLink:/api/v1/namespaces/watch-2667/configmaps/e2e-watch-test-watch-closed,UID:90000aa7-54cd-4580-bf60-f4d2317ca316,ResourceVersion:22693405,Generation:0,CreationTimestamp:2020-02-01 13:40:24 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 1 13:40:24.825: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2667,SelfLink:/api/v1/namespaces/watch-2667/configmaps/e2e-watch-test-watch-closed,UID:90000aa7-54cd-4580-bf60-f4d2317ca316,ResourceVersion:22693406,Generation:0,CreationTimestamp:2020-02-01 13:40:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Feb 1 13:40:24.921: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2667,SelfLink:/api/v1/namespaces/watch-2667/configmaps/e2e-watch-test-watch-closed,UID:90000aa7-54cd-4580-bf60-f4d2317ca316,ResourceVersion:22693407,Generation:0,CreationTimestamp:2020-02-01 13:40:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 1 
13:40:24.921: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2667,SelfLink:/api/v1/namespaces/watch-2667/configmaps/e2e-watch-test-watch-closed,UID:90000aa7-54cd-4580-bf60-f4d2317ca316,ResourceVersion:22693408,Generation:0,CreationTimestamp:2020-02-01 13:40:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:40:24.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2667" for this suite. Feb 1 13:40:30.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:40:31.099: INFO: namespace watch-2667 deletion completed in 6.156765283s • [SLOW TEST:6.443 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:40:31.100: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:40:40.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1303" for this suite. Feb 1 13:41:02.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:41:02.392: INFO: namespace replication-controller-1303 deletion completed in 22.15918034s • [SLOW TEST:31.292 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:41:02.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 1 13:41:20.651: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 1 13:41:20.659: INFO: Pod pod-with-poststart-http-hook still exists Feb 1 13:41:22.660: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 1 13:41:22.677: INFO: Pod pod-with-poststart-http-hook still exists Feb 1 13:41:24.660: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 1 13:41:24.700: INFO: Pod pod-with-poststart-http-hook still exists Feb 1 13:41:26.660: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 1 13:41:26.693: INFO: Pod pod-with-poststart-http-hook still exists Feb 1 13:41:28.660: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 1 13:41:28.669: INFO: Pod pod-with-poststart-http-hook still exists Feb 1 13:41:30.660: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 1 13:41:30.678: INFO: Pod pod-with-poststart-http-hook still exists Feb 1 13:41:32.660: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 1 13:41:32.684: INFO: Pod pod-with-poststart-http-hook still exists Feb 1 13:41:34.660: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 1 13:41:34.674: INFO: Pod pod-with-poststart-http-hook still exists Feb 1 13:41:36.662: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 1 13:41:36.670: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:41:36.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9584" for this suite. Feb 1 13:41:58.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:41:58.922: INFO: namespace container-lifecycle-hook-9584 deletion completed in 22.245937639s • [SLOW TEST:56.530 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:41:58.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod 
test-webserver-f9350078-df10-4367-993a-e64f2aade327 in namespace container-probe-1442 Feb 1 13:42:09.091: INFO: Started pod test-webserver-f9350078-df10-4367-993a-e64f2aade327 in namespace container-probe-1442 STEP: checking the pod's current state and verifying that restartCount is present Feb 1 13:42:09.102: INFO: Initial restart count of pod test-webserver-f9350078-df10-4367-993a-e64f2aade327 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:46:09.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1442" for this suite. Feb 1 13:46:16.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:46:16.240: INFO: namespace container-probe-1442 deletion completed in 6.415126847s • [SLOW TEST:257.317 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:46:16.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-e4689d7a-457b-45a6-afd8-bc0a5d50f37b STEP: Creating a pod to test consume configMaps Feb 1 13:46:16.321: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a1a2071f-511d-4fbf-aece-afe9c207e512" in namespace "projected-6620" to be "success or failure" Feb 1 13:46:16.340: INFO: Pod "pod-projected-configmaps-a1a2071f-511d-4fbf-aece-afe9c207e512": Phase="Pending", Reason="", readiness=false. Elapsed: 18.111847ms Feb 1 13:46:18.350: INFO: Pod "pod-projected-configmaps-a1a2071f-511d-4fbf-aece-afe9c207e512": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028104449s Feb 1 13:46:20.367: INFO: Pod "pod-projected-configmaps-a1a2071f-511d-4fbf-aece-afe9c207e512": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045238493s Feb 1 13:46:22.389: INFO: Pod "pod-projected-configmaps-a1a2071f-511d-4fbf-aece-afe9c207e512": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067264504s Feb 1 13:46:24.402: INFO: Pod "pod-projected-configmaps-a1a2071f-511d-4fbf-aece-afe9c207e512": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080059083s Feb 1 13:46:26.430: INFO: Pod "pod-projected-configmaps-a1a2071f-511d-4fbf-aece-afe9c207e512": Phase="Pending", Reason="", readiness=false. Elapsed: 10.108734845s Feb 1 13:46:28.440: INFO: Pod "pod-projected-configmaps-a1a2071f-511d-4fbf-aece-afe9c207e512": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.118023288s STEP: Saw pod success Feb 1 13:46:28.440: INFO: Pod "pod-projected-configmaps-a1a2071f-511d-4fbf-aece-afe9c207e512" satisfied condition "success or failure" Feb 1 13:46:28.445: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-a1a2071f-511d-4fbf-aece-afe9c207e512 container projected-configmap-volume-test: STEP: delete the pod Feb 1 13:46:28.505: INFO: Waiting for pod pod-projected-configmaps-a1a2071f-511d-4fbf-aece-afe9c207e512 to disappear Feb 1 13:46:28.514: INFO: Pod pod-projected-configmaps-a1a2071f-511d-4fbf-aece-afe9c207e512 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:46:28.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6620" for this suite. Feb 1 13:46:34.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:46:34.810: INFO: namespace projected-6620 deletion completed in 6.285673137s • [SLOW TEST:18.569 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:46:34.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-4c64da22-f448-4d9e-aaef-a3e34a83c440 STEP: Creating a pod to test consume secrets Feb 1 13:46:34.962: INFO: Waiting up to 5m0s for pod "pod-secrets-3d6bae65-774f-4490-99b8-081266bec69c" in namespace "secrets-4093" to be "success or failure" Feb 1 13:46:35.042: INFO: Pod "pod-secrets-3d6bae65-774f-4490-99b8-081266bec69c": Phase="Pending", Reason="", readiness=false. Elapsed: 78.837651ms Feb 1 13:46:37.054: INFO: Pod "pod-secrets-3d6bae65-774f-4490-99b8-081266bec69c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09099839s Feb 1 13:46:39.071: INFO: Pod "pod-secrets-3d6bae65-774f-4490-99b8-081266bec69c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108135978s Feb 1 13:46:41.085: INFO: Pod "pod-secrets-3d6bae65-774f-4490-99b8-081266bec69c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122130049s Feb 1 13:46:43.101: INFO: Pod "pod-secrets-3d6bae65-774f-4490-99b8-081266bec69c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.138348221s Feb 1 13:46:45.113: INFO: Pod "pod-secrets-3d6bae65-774f-4490-99b8-081266bec69c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.150700191s STEP: Saw pod success Feb 1 13:46:45.114: INFO: Pod "pod-secrets-3d6bae65-774f-4490-99b8-081266bec69c" satisfied condition "success or failure" Feb 1 13:46:45.119: INFO: Trying to get logs from node iruya-node pod pod-secrets-3d6bae65-774f-4490-99b8-081266bec69c container secret-volume-test: STEP: delete the pod Feb 1 13:46:45.237: INFO: Waiting for pod pod-secrets-3d6bae65-774f-4490-99b8-081266bec69c to disappear Feb 1 13:46:45.245: INFO: Pod pod-secrets-3d6bae65-774f-4490-99b8-081266bec69c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:46:45.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4093" for this suite. Feb 1 13:46:51.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:46:51.461: INFO: namespace secrets-4093 deletion completed in 6.211730656s • [SLOW TEST:16.648 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:46:51.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace 
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Feb 1 13:46:51.589: INFO: Waiting up to 5m0s for pod "client-containers-92ed92e4-38c5-481e-985b-5b964ad8f097" in namespace "containers-718" to be "success or failure"
Feb 1 13:46:51.608: INFO: Pod "client-containers-92ed92e4-38c5-481e-985b-5b964ad8f097": Phase="Pending", Reason="", readiness=false. Elapsed: 18.446469ms
Feb 1 13:46:53.626: INFO: Pod "client-containers-92ed92e4-38c5-481e-985b-5b964ad8f097": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036224623s
Feb 1 13:46:55.661: INFO: Pod "client-containers-92ed92e4-38c5-481e-985b-5b964ad8f097": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071527579s
Feb 1 13:46:57.671: INFO: Pod "client-containers-92ed92e4-38c5-481e-985b-5b964ad8f097": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0811581s
Feb 1 13:46:59.709: INFO: Pod "client-containers-92ed92e4-38c5-481e-985b-5b964ad8f097": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119304502s
Feb 1 13:47:01.719: INFO: Pod "client-containers-92ed92e4-38c5-481e-985b-5b964ad8f097": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.129200911s
STEP: Saw pod success
Feb 1 13:47:01.719: INFO: Pod "client-containers-92ed92e4-38c5-481e-985b-5b964ad8f097" satisfied condition "success or failure"
Feb 1 13:47:01.724: INFO: Trying to get logs from node iruya-node pod client-containers-92ed92e4-38c5-481e-985b-5b964ad8f097 container test-container:
STEP: delete the pod
Feb 1 13:47:01.848: INFO: Waiting for pod client-containers-92ed92e4-38c5-481e-985b-5b964ad8f097 to disappear
Feb 1 13:47:01.922: INFO: Pod client-containers-92ed92e4-38c5-481e-985b-5b964ad8f097 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 13:47:01.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-718" for this suite.
Feb 1 13:47:07.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 13:47:08.178: INFO: namespace containers-718 deletion completed in 6.242010135s
• [SLOW TEST:16.716 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency
should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 13:47:08.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to
be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-4918 I0201 13:47:08.292934 8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4918, replica count: 1 I0201 13:47:09.344444 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0201 13:47:10.345059 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0201 13:47:11.345976 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0201 13:47:12.346666 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0201 13:47:13.347773 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0201 13:47:14.348585 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0201 13:47:15.349270 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0201 13:47:16.349907 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 1 13:47:16.498: INFO: Created: latency-svc-r8qfk Feb 1 13:47:16.535: INFO: Got endpoints: latency-svc-r8qfk [85.060318ms] Feb 1 13:47:16.674: INFO: Created: latency-svc-vkck9 Feb 1 13:47:16.702: INFO: Got endpoints: 
latency-svc-vkck9 [163.989459ms] Feb 1 13:47:16.746: INFO: Created: latency-svc-tzprx Feb 1 13:47:16.886: INFO: Got endpoints: latency-svc-tzprx [345.89073ms] Feb 1 13:47:16.901: INFO: Created: latency-svc-5nvgx Feb 1 13:47:16.914: INFO: Got endpoints: latency-svc-5nvgx [376.693101ms] Feb 1 13:47:16.948: INFO: Created: latency-svc-xd6rn Feb 1 13:47:16.953: INFO: Got endpoints: latency-svc-xd6rn [412.82257ms] Feb 1 13:47:17.080: INFO: Created: latency-svc-nbbcl Feb 1 13:47:17.088: INFO: Got endpoints: latency-svc-nbbcl [548.530482ms] Feb 1 13:47:17.138: INFO: Created: latency-svc-4n645 Feb 1 13:47:17.161: INFO: Got endpoints: latency-svc-4n645 [622.173243ms] Feb 1 13:47:17.236: INFO: Created: latency-svc-lhrc6 Feb 1 13:47:17.256: INFO: Got endpoints: latency-svc-lhrc6 [716.381712ms] Feb 1 13:47:17.300: INFO: Created: latency-svc-48hn2 Feb 1 13:47:17.322: INFO: Got endpoints: latency-svc-48hn2 [782.845079ms] Feb 1 13:47:17.400: INFO: Created: latency-svc-qr928 Feb 1 13:47:17.413: INFO: Got endpoints: latency-svc-qr928 [872.505263ms] Feb 1 13:47:17.458: INFO: Created: latency-svc-cpx7v Feb 1 13:47:17.596: INFO: Got endpoints: latency-svc-cpx7v [1.055977779s] Feb 1 13:47:17.599: INFO: Created: latency-svc-6tmms Feb 1 13:47:17.607: INFO: Got endpoints: latency-svc-6tmms [1.068263102s] Feb 1 13:47:17.663: INFO: Created: latency-svc-6p7z9 Feb 1 13:47:17.669: INFO: Got endpoints: latency-svc-6p7z9 [1.129510453s] Feb 1 13:47:17.761: INFO: Created: latency-svc-r7w9v Feb 1 13:47:17.787: INFO: Got endpoints: latency-svc-r7w9v [1.246614618s] Feb 1 13:47:17.832: INFO: Created: latency-svc-2v2d4 Feb 1 13:47:17.957: INFO: Got endpoints: latency-svc-2v2d4 [1.417301178s] Feb 1 13:47:17.978: INFO: Created: latency-svc-fb4ll Feb 1 13:47:17.994: INFO: Got endpoints: latency-svc-fb4ll [1.453890608s] Feb 1 13:47:18.048: INFO: Created: latency-svc-d6z6m Feb 1 13:47:18.135: INFO: Got endpoints: latency-svc-d6z6m [1.432012556s] Feb 1 13:47:18.141: INFO: Created: latency-svc-rb99l Feb 1 
13:47:18.147: INFO: Got endpoints: latency-svc-rb99l [1.260347036s] Feb 1 13:47:18.207: INFO: Created: latency-svc-p2svc Feb 1 13:47:18.329: INFO: Got endpoints: latency-svc-p2svc [1.415087684s] Feb 1 13:47:18.352: INFO: Created: latency-svc-q6vls Feb 1 13:47:18.394: INFO: Got endpoints: latency-svc-q6vls [1.440965434s] Feb 1 13:47:18.399: INFO: Created: latency-svc-r5rdq Feb 1 13:47:18.407: INFO: Got endpoints: latency-svc-r5rdq [1.318996285s] Feb 1 13:47:18.513: INFO: Created: latency-svc-sdqng Feb 1 13:47:18.557: INFO: Got endpoints: latency-svc-sdqng [1.395670908s] Feb 1 13:47:18.568: INFO: Created: latency-svc-flbdb Feb 1 13:47:18.587: INFO: Got endpoints: latency-svc-flbdb [1.331473453s] Feb 1 13:47:18.667: INFO: Created: latency-svc-t64lx Feb 1 13:47:18.679: INFO: Got endpoints: latency-svc-t64lx [1.356826826s] Feb 1 13:47:18.709: INFO: Created: latency-svc-h928g Feb 1 13:47:18.714: INFO: Got endpoints: latency-svc-h928g [1.301147795s] Feb 1 13:47:18.838: INFO: Created: latency-svc-pdsfz Feb 1 13:47:18.842: INFO: Got endpoints: latency-svc-pdsfz [1.245767141s] Feb 1 13:47:18.893: INFO: Created: latency-svc-9vkrm Feb 1 13:47:18.902: INFO: Got endpoints: latency-svc-9vkrm [1.294576386s] Feb 1 13:47:19.028: INFO: Created: latency-svc-lm2nf Feb 1 13:47:19.034: INFO: Got endpoints: latency-svc-lm2nf [1.36427976s] Feb 1 13:47:19.119: INFO: Created: latency-svc-l2vgl Feb 1 13:47:19.122: INFO: Got endpoints: latency-svc-l2vgl [1.335561184s] Feb 1 13:47:19.229: INFO: Created: latency-svc-wvmlh Feb 1 13:47:19.234: INFO: Got endpoints: latency-svc-wvmlh [1.276939244s] Feb 1 13:47:19.275: INFO: Created: latency-svc-vq4nn Feb 1 13:47:19.279: INFO: Got endpoints: latency-svc-vq4nn [1.284688903s] Feb 1 13:47:19.391: INFO: Created: latency-svc-vr8j4 Feb 1 13:47:19.467: INFO: Got endpoints: latency-svc-vr8j4 [1.331992812s] Feb 1 13:47:19.468: INFO: Created: latency-svc-476ww Feb 1 13:47:19.487: INFO: Got endpoints: latency-svc-476ww [1.340647033s] Feb 1 13:47:19.701: INFO: 
Created: latency-svc-klv2q Feb 1 13:47:19.709: INFO: Got endpoints: latency-svc-klv2q [1.379000177s] Feb 1 13:47:19.766: INFO: Created: latency-svc-fn4pr Feb 1 13:47:19.769: INFO: Got endpoints: latency-svc-fn4pr [1.37426503s] Feb 1 13:47:19.847: INFO: Created: latency-svc-g4z4k Feb 1 13:47:20.624: INFO: Got endpoints: latency-svc-g4z4k [2.217091849s] Feb 1 13:47:20.666: INFO: Created: latency-svc-l7zpl Feb 1 13:47:20.688: INFO: Got endpoints: latency-svc-l7zpl [2.130367696s] Feb 1 13:47:20.773: INFO: Created: latency-svc-dp54t Feb 1 13:47:20.780: INFO: Got endpoints: latency-svc-dp54t [2.192077531s] Feb 1 13:47:20.840: INFO: Created: latency-svc-t8n5v Feb 1 13:47:20.849: INFO: Got endpoints: latency-svc-t8n5v [2.169410761s] Feb 1 13:47:20.963: INFO: Created: latency-svc-r9vwl Feb 1 13:47:20.975: INFO: Got endpoints: latency-svc-r9vwl [2.261107136s] Feb 1 13:47:21.026: INFO: Created: latency-svc-wtjnl Feb 1 13:47:21.040: INFO: Got endpoints: latency-svc-wtjnl [2.197025355s] Feb 1 13:47:21.156: INFO: Created: latency-svc-844d7 Feb 1 13:47:21.164: INFO: Got endpoints: latency-svc-844d7 [2.261704627s] Feb 1 13:47:21.201: INFO: Created: latency-svc-h4zpm Feb 1 13:47:21.220: INFO: Got endpoints: latency-svc-h4zpm [2.185690151s] Feb 1 13:47:21.403: INFO: Created: latency-svc-95rgh Feb 1 13:47:21.419: INFO: Got endpoints: latency-svc-95rgh [2.295886746s] Feb 1 13:47:21.483: INFO: Created: latency-svc-9dldk Feb 1 13:47:21.562: INFO: Got endpoints: latency-svc-9dldk [2.326924839s] Feb 1 13:47:21.608: INFO: Created: latency-svc-kds2b Feb 1 13:47:21.610: INFO: Got endpoints: latency-svc-kds2b [2.330710528s] Feb 1 13:47:21.714: INFO: Created: latency-svc-dvx2q Feb 1 13:47:21.745: INFO: Got endpoints: latency-svc-dvx2q [2.277776146s] Feb 1 13:47:21.756: INFO: Created: latency-svc-dnlwc Feb 1 13:47:21.772: INFO: Got endpoints: latency-svc-dnlwc [2.283350857s] Feb 1 13:47:21.817: INFO: Created: latency-svc-8r4c2 Feb 1 13:47:21.881: INFO: Got endpoints: latency-svc-8r4c2 
[2.171760779s] Feb 1 13:47:21.898: INFO: Created: latency-svc-2qfck Feb 1 13:47:21.921: INFO: Got endpoints: latency-svc-2qfck [2.151877218s] Feb 1 13:47:21.966: INFO: Created: latency-svc-n55l7 Feb 1 13:47:21.972: INFO: Got endpoints: latency-svc-n55l7 [1.3475065s] Feb 1 13:47:22.047: INFO: Created: latency-svc-76qjp Feb 1 13:47:22.054: INFO: Got endpoints: latency-svc-76qjp [1.365137918s] Feb 1 13:47:22.142: INFO: Created: latency-svc-p8d8h Feb 1 13:47:22.190: INFO: Got endpoints: latency-svc-p8d8h [1.409848895s] Feb 1 13:47:22.225: INFO: Created: latency-svc-bfrdc Feb 1 13:47:22.237: INFO: Got endpoints: latency-svc-bfrdc [1.387558159s] Feb 1 13:47:22.280: INFO: Created: latency-svc-gwcw9 Feb 1 13:47:22.378: INFO: Got endpoints: latency-svc-gwcw9 [1.40224999s] Feb 1 13:47:22.392: INFO: Created: latency-svc-cqsbp Feb 1 13:47:22.395: INFO: Got endpoints: latency-svc-cqsbp [1.35527291s] Feb 1 13:47:22.469: INFO: Created: latency-svc-pr57j Feb 1 13:47:22.470: INFO: Got endpoints: latency-svc-pr57j [1.306117735s] Feb 1 13:47:22.562: INFO: Created: latency-svc-p5j58 Feb 1 13:47:22.599: INFO: Got endpoints: latency-svc-p5j58 [1.378549015s] Feb 1 13:47:22.609: INFO: Created: latency-svc-7whb9 Feb 1 13:47:22.716: INFO: Got endpoints: latency-svc-7whb9 [1.29705907s] Feb 1 13:47:22.717: INFO: Created: latency-svc-5wjmd Feb 1 13:47:22.723: INFO: Got endpoints: latency-svc-5wjmd [1.161180884s] Feb 1 13:47:22.776: INFO: Created: latency-svc-m69ch Feb 1 13:47:22.787: INFO: Got endpoints: latency-svc-m69ch [1.177345568s] Feb 1 13:47:22.932: INFO: Created: latency-svc-lsbj9 Feb 1 13:47:22.945: INFO: Got endpoints: latency-svc-lsbj9 [1.198246864s] Feb 1 13:47:23.008: INFO: Created: latency-svc-ptbcl Feb 1 13:47:23.009: INFO: Got endpoints: latency-svc-ptbcl [1.23715729s] Feb 1 13:47:23.140: INFO: Created: latency-svc-j7c5n Feb 1 13:47:23.148: INFO: Got endpoints: latency-svc-j7c5n [1.267364164s] Feb 1 13:47:23.206: INFO: Created: latency-svc-f42lq Feb 1 13:47:23.214: INFO: Got 
endpoints: latency-svc-f42lq [1.292170957s] Feb 1 13:47:23.354: INFO: Created: latency-svc-8c8df Feb 1 13:47:23.376: INFO: Got endpoints: latency-svc-8c8df [1.403104815s] Feb 1 13:47:23.562: INFO: Created: latency-svc-s6xtk Feb 1 13:47:23.572: INFO: Got endpoints: latency-svc-s6xtk [1.517677723s] Feb 1 13:47:23.656: INFO: Created: latency-svc-b4jxl Feb 1 13:47:23.740: INFO: Got endpoints: latency-svc-b4jxl [1.549610538s] Feb 1 13:47:23.769: INFO: Created: latency-svc-wmj8z Feb 1 13:47:23.989: INFO: Created: latency-svc-mjtvc Feb 1 13:47:23.990: INFO: Got endpoints: latency-svc-wmj8z [1.752299143s] Feb 1 13:47:24.003: INFO: Got endpoints: latency-svc-mjtvc [1.625424379s] Feb 1 13:47:24.075: INFO: Created: latency-svc-rr2n4 Feb 1 13:47:24.220: INFO: Got endpoints: latency-svc-rr2n4 [1.824988714s] Feb 1 13:47:24.241: INFO: Created: latency-svc-ckpmf Feb 1 13:47:24.255: INFO: Got endpoints: latency-svc-ckpmf [1.78492148s] Feb 1 13:47:24.424: INFO: Created: latency-svc-j996w Feb 1 13:47:24.428: INFO: Got endpoints: latency-svc-j996w [1.829126706s] Feb 1 13:47:24.533: INFO: Created: latency-svc-49pnv Feb 1 13:47:24.661: INFO: Got endpoints: latency-svc-49pnv [1.945081174s] Feb 1 13:47:24.666: INFO: Created: latency-svc-lwm4t Feb 1 13:47:24.680: INFO: Got endpoints: latency-svc-lwm4t [1.95660738s] Feb 1 13:47:24.787: INFO: Created: latency-svc-4wkq4 Feb 1 13:47:24.793: INFO: Got endpoints: latency-svc-4wkq4 [2.005446882s] Feb 1 13:47:24.827: INFO: Created: latency-svc-s4vdw Feb 1 13:47:24.832: INFO: Got endpoints: latency-svc-s4vdw [1.887279228s] Feb 1 13:47:24.883: INFO: Created: latency-svc-9cbtx Feb 1 13:47:24.999: INFO: Got endpoints: latency-svc-9cbtx [1.989793485s] Feb 1 13:47:25.048: INFO: Created: latency-svc-jsrk8 Feb 1 13:47:25.185: INFO: Created: latency-svc-4qkv6 Feb 1 13:47:25.197: INFO: Got endpoints: latency-svc-jsrk8 [2.048539653s] Feb 1 13:47:25.197: INFO: Got endpoints: latency-svc-4qkv6 [1.983339869s] Feb 1 13:47:25.256: INFO: Created: latency-svc-bhjnd 
Feb 1 13:47:25.326: INFO: Got endpoints: latency-svc-bhjnd [1.950495272s] Feb 1 13:47:25.334: INFO: Created: latency-svc-82nl8 Feb 1 13:47:25.353: INFO: Got endpoints: latency-svc-82nl8 [1.78122228s] Feb 1 13:47:25.388: INFO: Created: latency-svc-sfkfx Feb 1 13:47:25.399: INFO: Got endpoints: latency-svc-sfkfx [1.657943843s] Feb 1 13:47:25.507: INFO: Created: latency-svc-4dbpt Feb 1 13:47:25.526: INFO: Got endpoints: latency-svc-4dbpt [1.535107317s] Feb 1 13:47:25.571: INFO: Created: latency-svc-wjff7 Feb 1 13:47:25.653: INFO: Created: latency-svc-z5xls Feb 1 13:47:25.656: INFO: Got endpoints: latency-svc-wjff7 [1.652342552s] Feb 1 13:47:25.676: INFO: Got endpoints: latency-svc-z5xls [1.454855253s] Feb 1 13:47:25.742: INFO: Created: latency-svc-4zckg Feb 1 13:47:25.813: INFO: Got endpoints: latency-svc-4zckg [1.557486387s] Feb 1 13:47:25.818: INFO: Created: latency-svc-vf7nl Feb 1 13:47:25.822: INFO: Got endpoints: latency-svc-vf7nl [1.393727254s] Feb 1 13:47:25.868: INFO: Created: latency-svc-wx4kp Feb 1 13:47:25.876: INFO: Got endpoints: latency-svc-wx4kp [1.21472904s] Feb 1 13:47:25.970: INFO: Created: latency-svc-l7whf Feb 1 13:47:25.977: INFO: Got endpoints: latency-svc-l7whf [1.297215004s] Feb 1 13:47:26.024: INFO: Created: latency-svc-jwkhh Feb 1 13:47:26.031: INFO: Got endpoints: latency-svc-jwkhh [1.23780724s] Feb 1 13:47:26.126: INFO: Created: latency-svc-n7fbv Feb 1 13:47:26.195: INFO: Got endpoints: latency-svc-n7fbv [217.993684ms] Feb 1 13:47:26.207: INFO: Created: latency-svc-spkx7 Feb 1 13:47:26.311: INFO: Got endpoints: latency-svc-spkx7 [1.478700736s] Feb 1 13:47:26.351: INFO: Created: latency-svc-7h4r4 Feb 1 13:47:26.364: INFO: Got endpoints: latency-svc-7h4r4 [1.364538013s] Feb 1 13:47:26.392: INFO: Created: latency-svc-77cbp Feb 1 13:47:26.518: INFO: Got endpoints: latency-svc-77cbp [1.320611041s] Feb 1 13:47:26.523: INFO: Created: latency-svc-m86v9 Feb 1 13:47:26.539: INFO: Got endpoints: latency-svc-m86v9 [1.341266365s] Feb 1 13:47:26.605: 
INFO: Created: latency-svc-66vwv Feb 1 13:47:26.721: INFO: Got endpoints: latency-svc-66vwv [1.394209075s] Feb 1 13:47:26.758: INFO: Created: latency-svc-7mdkc Feb 1 13:47:26.765: INFO: Got endpoints: latency-svc-7mdkc [1.411070794s] Feb 1 13:47:26.821: INFO: Created: latency-svc-xn2pb Feb 1 13:47:26.917: INFO: Got endpoints: latency-svc-xn2pb [1.518052076s] Feb 1 13:47:26.968: INFO: Created: latency-svc-2zqqm Feb 1 13:47:26.988: INFO: Got endpoints: latency-svc-2zqqm [1.461930307s] Feb 1 13:47:27.159: INFO: Created: latency-svc-8rtcn Feb 1 13:47:27.166: INFO: Got endpoints: latency-svc-8rtcn [1.509976235s] Feb 1 13:47:27.299: INFO: Created: latency-svc-kg5sp Feb 1 13:47:27.304: INFO: Got endpoints: latency-svc-kg5sp [1.627837251s] Feb 1 13:47:27.375: INFO: Created: latency-svc-f8k6q Feb 1 13:47:27.386: INFO: Got endpoints: latency-svc-f8k6q [1.572474073s] Feb 1 13:47:27.544: INFO: Created: latency-svc-mngwr Feb 1 13:47:27.544: INFO: Got endpoints: latency-svc-mngwr [1.721896341s] Feb 1 13:47:27.593: INFO: Created: latency-svc-qnp4j Feb 1 13:47:27.601: INFO: Got endpoints: latency-svc-qnp4j [1.724725041s] Feb 1 13:47:27.730: INFO: Created: latency-svc-wgdw6 Feb 1 13:47:27.788: INFO: Got endpoints: latency-svc-wgdw6 [1.756645553s] Feb 1 13:47:27.795: INFO: Created: latency-svc-j588c Feb 1 13:47:27.799: INFO: Got endpoints: latency-svc-j588c [1.602912865s] Feb 1 13:47:27.953: INFO: Created: latency-svc-2kmcr Feb 1 13:47:27.954: INFO: Got endpoints: latency-svc-2kmcr [1.642253223s] Feb 1 13:47:28.010: INFO: Created: latency-svc-k7c5k Feb 1 13:47:28.025: INFO: Got endpoints: latency-svc-k7c5k [1.661136067s] Feb 1 13:47:28.198: INFO: Created: latency-svc-hhjfv Feb 1 13:47:28.214: INFO: Got endpoints: latency-svc-hhjfv [1.695193288s] Feb 1 13:47:28.265: INFO: Created: latency-svc-f95kd Feb 1 13:47:28.279: INFO: Got endpoints: latency-svc-f95kd [1.739759901s] Feb 1 13:47:28.398: INFO: Created: latency-svc-xxkkq Feb 1 13:47:28.407: INFO: Got endpoints: latency-svc-xxkkq 
[1.685001057s] Feb 1 13:47:28.464: INFO: Created: latency-svc-tpr5q Feb 1 13:47:28.467: INFO: Got endpoints: latency-svc-tpr5q [1.701395239s] Feb 1 13:47:28.575: INFO: Created: latency-svc-m9ldr Feb 1 13:47:28.583: INFO: Got endpoints: latency-svc-m9ldr [1.666077701s] Feb 1 13:47:28.634: INFO: Created: latency-svc-dp9ph Feb 1 13:47:28.643: INFO: Got endpoints: latency-svc-dp9ph [1.654839261s] Feb 1 13:47:28.765: INFO: Created: latency-svc-hppj2 Feb 1 13:47:28.771: INFO: Got endpoints: latency-svc-hppj2 [1.604283683s] Feb 1 13:47:28.828: INFO: Created: latency-svc-65r2x Feb 1 13:47:28.838: INFO: Got endpoints: latency-svc-65r2x [1.533761901s] Feb 1 13:47:29.093: INFO: Created: latency-svc-gzrnm Feb 1 13:47:29.108: INFO: Got endpoints: latency-svc-gzrnm [1.721747846s] Feb 1 13:47:29.237: INFO: Created: latency-svc-7p5gn Feb 1 13:47:29.242: INFO: Got endpoints: latency-svc-7p5gn [1.697178361s] Feb 1 13:47:29.287: INFO: Created: latency-svc-r7xsn Feb 1 13:47:29.395: INFO: Got endpoints: latency-svc-r7xsn [1.793286673s] Feb 1 13:47:29.398: INFO: Created: latency-svc-tsbh9 Feb 1 13:47:29.412: INFO: Got endpoints: latency-svc-tsbh9 [1.623890047s] Feb 1 13:47:29.463: INFO: Created: latency-svc-mwq7g Feb 1 13:47:29.473: INFO: Got endpoints: latency-svc-mwq7g [1.674170149s] Feb 1 13:47:29.620: INFO: Created: latency-svc-wh26l Feb 1 13:47:29.658: INFO: Got endpoints: latency-svc-wh26l [1.704167821s] Feb 1 13:47:29.678: INFO: Created: latency-svc-x7dg4 Feb 1 13:47:29.679: INFO: Got endpoints: latency-svc-x7dg4 [1.65343486s] Feb 1 13:47:29.848: INFO: Created: latency-svc-ngxnn Feb 1 13:47:29.855: INFO: Got endpoints: latency-svc-ngxnn [1.641637988s] Feb 1 13:47:30.071: INFO: Created: latency-svc-rpf8v Feb 1 13:47:30.084: INFO: Got endpoints: latency-svc-rpf8v [1.804424155s] Feb 1 13:47:30.161: INFO: Created: latency-svc-272jj Feb 1 13:47:30.339: INFO: Got endpoints: latency-svc-272jj [1.932249154s] Feb 1 13:47:30.345: INFO: Created: latency-svc-t7h98 Feb 1 13:47:30.384: INFO: 
Got endpoints: latency-svc-t7h98 [1.916641704s] Feb 1 13:47:30.529: INFO: Created: latency-svc-m4x77 Feb 1 13:47:30.566: INFO: Got endpoints: latency-svc-m4x77 [1.982150824s] Feb 1 13:47:30.577: INFO: Created: latency-svc-9jzth Feb 1 13:47:30.578: INFO: Got endpoints: latency-svc-9jzth [1.935157609s] Feb 1 13:47:30.691: INFO: Created: latency-svc-nwnql Feb 1 13:47:30.703: INFO: Got endpoints: latency-svc-nwnql [1.931567963s] Feb 1 13:47:30.765: INFO: Created: latency-svc-d5xps Feb 1 13:47:30.869: INFO: Got endpoints: latency-svc-d5xps [2.031263977s] Feb 1 13:47:30.873: INFO: Created: latency-svc-68t4l Feb 1 13:47:30.905: INFO: Got endpoints: latency-svc-68t4l [1.797143352s] Feb 1 13:47:31.103: INFO: Created: latency-svc-c7m2s Feb 1 13:47:31.109: INFO: Created: latency-svc-br57g Feb 1 13:47:31.147: INFO: Got endpoints: latency-svc-c7m2s [1.904455905s] Feb 1 13:47:31.148: INFO: Got endpoints: latency-svc-br57g [1.752019123s] Feb 1 13:47:31.274: INFO: Created: latency-svc-47qhb Feb 1 13:47:31.291: INFO: Got endpoints: latency-svc-47qhb [1.878158586s] Feb 1 13:47:31.333: INFO: Created: latency-svc-j4k6j Feb 1 13:47:31.351: INFO: Got endpoints: latency-svc-j4k6j [1.876935601s] Feb 1 13:47:31.439: INFO: Created: latency-svc-v6h54 Feb 1 13:47:31.449: INFO: Got endpoints: latency-svc-v6h54 [1.791371882s] Feb 1 13:47:31.527: INFO: Created: latency-svc-x5vzm Feb 1 13:47:31.527: INFO: Got endpoints: latency-svc-x5vzm [1.84817458s] Feb 1 13:47:31.632: INFO: Created: latency-svc-w298b Feb 1 13:47:31.638: INFO: Got endpoints: latency-svc-w298b [1.782131965s] Feb 1 13:47:31.684: INFO: Created: latency-svc-7q9qv Feb 1 13:47:31.702: INFO: Got endpoints: latency-svc-7q9qv [1.618146097s] Feb 1 13:47:31.722: INFO: Created: latency-svc-sg8hz Feb 1 13:47:31.730: INFO: Got endpoints: latency-svc-sg8hz [1.390987973s] Feb 1 13:47:31.836: INFO: Created: latency-svc-lppjt Feb 1 13:47:31.846: INFO: Got endpoints: latency-svc-lppjt [1.461114627s] Feb 1 13:47:31.889: INFO: Created: 
latency-svc-stc8t Feb 1 13:47:32.025: INFO: Created: latency-svc-5dqxz Feb 1 13:47:32.026: INFO: Got endpoints: latency-svc-stc8t [1.459519441s] Feb 1 13:47:32.034: INFO: Got endpoints: latency-svc-5dqxz [1.455869903s] Feb 1 13:47:32.076: INFO: Created: latency-svc-nrzwl Feb 1 13:47:32.084: INFO: Got endpoints: latency-svc-nrzwl [1.380943272s] Feb 1 13:47:32.218: INFO: Created: latency-svc-4fjhn Feb 1 13:47:32.265: INFO: Created: latency-svc-f9sfh Feb 1 13:47:32.265: INFO: Got endpoints: latency-svc-4fjhn [1.395858858s] Feb 1 13:47:32.287: INFO: Got endpoints: latency-svc-f9sfh [1.380951795s] Feb 1 13:47:32.410: INFO: Created: latency-svc-z47wb Feb 1 13:47:32.449: INFO: Got endpoints: latency-svc-z47wb [1.300638628s] Feb 1 13:47:32.449: INFO: Created: latency-svc-vpp9n Feb 1 13:47:32.466: INFO: Got endpoints: latency-svc-vpp9n [1.318585781s] Feb 1 13:47:32.584: INFO: Created: latency-svc-7bbhq Feb 1 13:47:32.616: INFO: Got endpoints: latency-svc-7bbhq [1.324486883s] Feb 1 13:47:32.656: INFO: Created: latency-svc-ckjdl Feb 1 13:47:32.748: INFO: Got endpoints: latency-svc-ckjdl [1.397396192s] Feb 1 13:47:32.749: INFO: Created: latency-svc-kmkts Feb 1 13:47:32.759: INFO: Got endpoints: latency-svc-kmkts [1.309109377s] Feb 1 13:47:32.805: INFO: Created: latency-svc-t4jpw Feb 1 13:47:32.808: INFO: Got endpoints: latency-svc-t4jpw [1.280191887s] Feb 1 13:47:32.908: INFO: Created: latency-svc-xs5xf Feb 1 13:47:32.917: INFO: Got endpoints: latency-svc-xs5xf [1.278482504s] Feb 1 13:47:32.962: INFO: Created: latency-svc-mk6k8 Feb 1 13:47:32.985: INFO: Got endpoints: latency-svc-mk6k8 [1.282676838s] Feb 1 13:47:33.138: INFO: Created: latency-svc-pkl6j Feb 1 13:47:33.146: INFO: Got endpoints: latency-svc-pkl6j [1.415473147s] Feb 1 13:47:33.212: INFO: Created: latency-svc-tftlt Feb 1 13:47:33.288: INFO: Got endpoints: latency-svc-tftlt [1.441937931s] Feb 1 13:47:33.313: INFO: Created: latency-svc-whr7k Feb 1 13:47:33.337: INFO: Got endpoints: latency-svc-whr7k [1.310295298s] 
Feb 1 13:47:33.388: INFO: Created: latency-svc-r4rs2 Feb 1 13:47:33.488: INFO: Got endpoints: latency-svc-r4rs2 [1.453752359s] Feb 1 13:47:33.506: INFO: Created: latency-svc-tp762 Feb 1 13:47:33.517: INFO: Got endpoints: latency-svc-tp762 [1.432204315s] Feb 1 13:47:33.566: INFO: Created: latency-svc-27t9f Feb 1 13:47:33.570: INFO: Got endpoints: latency-svc-27t9f [1.304618562s] Feb 1 13:47:33.670: INFO: Created: latency-svc-xdrhv Feb 1 13:47:33.676: INFO: Got endpoints: latency-svc-xdrhv [1.388975175s] Feb 1 13:47:33.720: INFO: Created: latency-svc-8cqbm Feb 1 13:47:33.748: INFO: Got endpoints: latency-svc-8cqbm [1.298651964s] Feb 1 13:47:33.826: INFO: Created: latency-svc-rlhmm Feb 1 13:47:33.870: INFO: Got endpoints: latency-svc-rlhmm [1.404065799s] Feb 1 13:47:33.924: INFO: Created: latency-svc-99j6j Feb 1 13:47:34.041: INFO: Got endpoints: latency-svc-99j6j [1.425393542s] Feb 1 13:47:34.104: INFO: Created: latency-svc-sq6kr Feb 1 13:47:34.268: INFO: Got endpoints: latency-svc-sq6kr [1.51861297s] Feb 1 13:47:34.289: INFO: Created: latency-svc-mnn5k Feb 1 13:47:34.344: INFO: Got endpoints: latency-svc-mnn5k [1.584995994s] Feb 1 13:47:34.360: INFO: Created: latency-svc-9h4lz Feb 1 13:47:34.485: INFO: Got endpoints: latency-svc-9h4lz [1.67680447s] Feb 1 13:47:34.571: INFO: Created: latency-svc-lbxlh Feb 1 13:47:34.688: INFO: Got endpoints: latency-svc-lbxlh [1.770875283s] Feb 1 13:47:34.698: INFO: Created: latency-svc-d768x Feb 1 13:47:34.724: INFO: Got endpoints: latency-svc-d768x [1.739073492s] Feb 1 13:47:34.916: INFO: Created: latency-svc-msp4c Feb 1 13:47:34.923: INFO: Got endpoints: latency-svc-msp4c [1.776106867s] Feb 1 13:47:35.007: INFO: Created: latency-svc-ljclq Feb 1 13:47:35.147: INFO: Got endpoints: latency-svc-ljclq [1.857971029s] Feb 1 13:47:35.219: INFO: Created: latency-svc-7rvlt Feb 1 13:47:35.410: INFO: Got endpoints: latency-svc-7rvlt [2.073249202s] Feb 1 13:47:35.426: INFO: Created: latency-svc-8w2vn Feb 1 13:47:35.470: INFO: Got endpoints: 
latency-svc-8w2vn [1.981149569s] Feb 1 13:47:35.641: INFO: Created: latency-svc-q4w2f Feb 1 13:47:35.650: INFO: Got endpoints: latency-svc-q4w2f [2.132561925s] Feb 1 13:47:35.721: INFO: Created: latency-svc-984vf Feb 1 13:47:35.737: INFO: Got endpoints: latency-svc-984vf [2.165844398s] Feb 1 13:47:35.883: INFO: Created: latency-svc-nx85c Feb 1 13:47:35.904: INFO: Got endpoints: latency-svc-nx85c [2.227769868s] Feb 1 13:47:36.108: INFO: Created: latency-svc-6z9hz Feb 1 13:47:36.151: INFO: Got endpoints: latency-svc-6z9hz [2.402123604s] Feb 1 13:47:36.156: INFO: Created: latency-svc-qf8gt Feb 1 13:47:36.165: INFO: Got endpoints: latency-svc-qf8gt [2.294958228s] Feb 1 13:47:36.304: INFO: Created: latency-svc-9vrhp Feb 1 13:47:36.373: INFO: Got endpoints: latency-svc-9vrhp [2.331088415s] Feb 1 13:47:36.374: INFO: Created: latency-svc-45qnc Feb 1 13:47:36.381: INFO: Got endpoints: latency-svc-45qnc [2.112289229s] Feb 1 13:47:36.618: INFO: Created: latency-svc-5zkxf Feb 1 13:47:36.670: INFO: Got endpoints: latency-svc-5zkxf [2.32553588s] Feb 1 13:47:36.674: INFO: Created: latency-svc-6r9mx Feb 1 13:47:36.696: INFO: Got endpoints: latency-svc-6r9mx [2.209439682s] Feb 1 13:47:36.827: INFO: Created: latency-svc-74jpg Feb 1 13:47:36.827: INFO: Got endpoints: latency-svc-74jpg [2.138872585s] Feb 1 13:47:36.886: INFO: Created: latency-svc-vgj96 Feb 1 13:47:36.996: INFO: Got endpoints: latency-svc-vgj96 [2.271384471s] Feb 1 13:47:37.005: INFO: Created: latency-svc-z2v8h Feb 1 13:47:37.009: INFO: Got endpoints: latency-svc-z2v8h [2.085520765s] Feb 1 13:47:37.053: INFO: Created: latency-svc-7z7ff Feb 1 13:47:37.067: INFO: Got endpoints: latency-svc-7z7ff [1.91965493s] Feb 1 13:47:37.221: INFO: Created: latency-svc-8qrwk Feb 1 13:47:37.222: INFO: Got endpoints: latency-svc-8qrwk [1.811299482s] Feb 1 13:47:37.267: INFO: Created: latency-svc-4hlk5 Feb 1 13:47:37.279: INFO: Got endpoints: latency-svc-4hlk5 [1.808968477s] Feb 1 13:47:37.392: INFO: Created: latency-svc-7d8qf Feb 1 
13:47:37.442: INFO: Got endpoints: latency-svc-7d8qf [1.791912652s] Feb 1 13:47:37.445: INFO: Created: latency-svc-rvwsd Feb 1 13:47:37.519: INFO: Got endpoints: latency-svc-rvwsd [1.781502464s] Feb 1 13:47:37.551: INFO: Created: latency-svc-8pc9s Feb 1 13:47:37.570: INFO: Got endpoints: latency-svc-8pc9s [1.665419325s] Feb 1 13:47:37.604: INFO: Created: latency-svc-wgxv8 Feb 1 13:47:37.608: INFO: Got endpoints: latency-svc-wgxv8 [1.4561677s] Feb 1 13:47:37.720: INFO: Created: latency-svc-7vfkl Feb 1 13:47:37.722: INFO: Got endpoints: latency-svc-7vfkl [1.556256082s] Feb 1 13:47:37.778: INFO: Created: latency-svc-k8w62 Feb 1 13:47:37.790: INFO: Got endpoints: latency-svc-k8w62 [1.415925627s] Feb 1 13:47:37.914: INFO: Created: latency-svc-gwbhj Feb 1 13:47:37.933: INFO: Got endpoints: latency-svc-gwbhj [1.551864603s] Feb 1 13:47:37.961: INFO: Created: latency-svc-nmh7r Feb 1 13:47:37.977: INFO: Got endpoints: latency-svc-nmh7r [1.306060777s] Feb 1 13:47:38.071: INFO: Created: latency-svc-ncqw6 Feb 1 13:47:38.076: INFO: Got endpoints: latency-svc-ncqw6 [1.380007841s] Feb 1 13:47:38.122: INFO: Created: latency-svc-5z6j8 Feb 1 13:47:38.156: INFO: Got endpoints: latency-svc-5z6j8 [1.327902621s] Feb 1 13:47:38.300: INFO: Created: latency-svc-62hrs Feb 1 13:47:38.326: INFO: Got endpoints: latency-svc-62hrs [1.328641272s] Feb 1 13:47:38.326: INFO: Latencies: [163.989459ms 217.993684ms 345.89073ms 376.693101ms 412.82257ms 548.530482ms 622.173243ms 716.381712ms 782.845079ms 872.505263ms 1.055977779s 1.068263102s 1.129510453s 1.161180884s 1.177345568s 1.198246864s 1.21472904s 1.23715729s 1.23780724s 1.245767141s 1.246614618s 1.260347036s 1.267364164s 1.276939244s 1.278482504s 1.280191887s 1.282676838s 1.284688903s 1.292170957s 1.294576386s 1.29705907s 1.297215004s 1.298651964s 1.300638628s 1.301147795s 1.304618562s 1.306060777s 1.306117735s 1.309109377s 1.310295298s 1.318585781s 1.318996285s 1.320611041s 1.324486883s 1.327902621s 1.328641272s 1.331473453s 1.331992812s 
1.335561184s 1.340647033s 1.341266365s 1.3475065s 1.35527291s 1.356826826s 1.36427976s 1.364538013s 1.365137918s 1.37426503s 1.378549015s 1.379000177s 1.380007841s 1.380943272s 1.380951795s 1.387558159s 1.388975175s 1.390987973s 1.393727254s 1.394209075s 1.395670908s 1.395858858s 1.397396192s 1.40224999s 1.403104815s 1.404065799s 1.409848895s 1.411070794s 1.415087684s 1.415473147s 1.415925627s 1.417301178s 1.425393542s 1.432012556s 1.432204315s 1.440965434s 1.441937931s 1.453752359s 1.453890608s 1.454855253s 1.455869903s 1.4561677s 1.459519441s 1.461114627s 1.461930307s 1.478700736s 1.509976235s 1.517677723s 1.518052076s 1.51861297s 1.533761901s 1.535107317s 1.549610538s 1.551864603s 1.556256082s 1.557486387s 1.572474073s 1.584995994s 1.602912865s 1.604283683s 1.618146097s 1.623890047s 1.625424379s 1.627837251s 1.641637988s 1.642253223s 1.652342552s 1.65343486s 1.654839261s 1.657943843s 1.661136067s 1.665419325s 1.666077701s 1.674170149s 1.67680447s 1.685001057s 1.695193288s 1.697178361s 1.701395239s 1.704167821s 1.721747846s 1.721896341s 1.724725041s 1.739073492s 1.739759901s 1.752019123s 1.752299143s 1.756645553s 1.770875283s 1.776106867s 1.78122228s 1.781502464s 1.782131965s 1.78492148s 1.791371882s 1.791912652s 1.793286673s 1.797143352s 1.804424155s 1.808968477s 1.811299482s 1.824988714s 1.829126706s 1.84817458s 1.857971029s 1.876935601s 1.878158586s 1.887279228s 1.904455905s 1.916641704s 1.91965493s 1.931567963s 1.932249154s 1.935157609s 1.945081174s 1.950495272s 1.95660738s 1.981149569s 1.982150824s 1.983339869s 1.989793485s 2.005446882s 2.031263977s 2.048539653s 2.073249202s 2.085520765s 2.112289229s 2.130367696s 2.132561925s 2.138872585s 2.151877218s 2.165844398s 2.169410761s 2.171760779s 2.185690151s 2.192077531s 2.197025355s 2.209439682s 2.217091849s 2.227769868s 2.261107136s 2.261704627s 2.271384471s 2.277776146s 2.283350857s 2.294958228s 2.295886746s 2.32553588s 2.326924839s 2.330710528s 2.331088415s 2.402123604s] Feb 1 13:47:38.327: INFO: 50 %ile: 
1.549610538s
Feb 1 13:47:38.327: INFO: 90 %ile: 2.169410761s
Feb 1 13:47:38.327: INFO: 99 %ile: 2.331088415s
Feb 1 13:47:38.327: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 13:47:38.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-4918" for this suite.
Feb 1 13:48:14.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 13:48:14.444: INFO: namespace svc-latency-4918 deletion completed in 36.110996841s
• [SLOW TEST:66.266 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 13:48:14.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 1 13:48:14.594: INFO: Waiting up to 5m0s for pod "downward-api-6bba37e2-e201-49ea-834f-35bba7159a79" in namespace "downward-api-9564" to be
"success or failure" Feb 1 13:48:14.600: INFO: Pod "downward-api-6bba37e2-e201-49ea-834f-35bba7159a79": Phase="Pending", Reason="", readiness=false. Elapsed: 6.573308ms Feb 1 13:48:16.616: INFO: Pod "downward-api-6bba37e2-e201-49ea-834f-35bba7159a79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022456084s Feb 1 13:48:18.668: INFO: Pod "downward-api-6bba37e2-e201-49ea-834f-35bba7159a79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073707672s Feb 1 13:48:20.715: INFO: Pod "downward-api-6bba37e2-e201-49ea-834f-35bba7159a79": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121166652s Feb 1 13:48:22.759: INFO: Pod "downward-api-6bba37e2-e201-49ea-834f-35bba7159a79": Phase="Pending", Reason="", readiness=false. Elapsed: 8.165147401s Feb 1 13:48:24.768: INFO: Pod "downward-api-6bba37e2-e201-49ea-834f-35bba7159a79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.17440021s STEP: Saw pod success Feb 1 13:48:24.769: INFO: Pod "downward-api-6bba37e2-e201-49ea-834f-35bba7159a79" satisfied condition "success or failure" Feb 1 13:48:24.773: INFO: Trying to get logs from node iruya-node pod downward-api-6bba37e2-e201-49ea-834f-35bba7159a79 container dapi-container: STEP: delete the pod Feb 1 13:48:25.010: INFO: Waiting for pod downward-api-6bba37e2-e201-49ea-834f-35bba7159a79 to disappear Feb 1 13:48:25.024: INFO: Pod downward-api-6bba37e2-e201-49ea-834f-35bba7159a79 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:48:25.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9564" for this suite. 
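The downward-api test above creates a pod whose `dapi-container` prints env vars populated from pod metadata. A minimal sketch of that pattern (names and image are illustrative, not taken from the log):

```yaml
# Illustrative only: the downward-API-as-env-vars pattern this test exercises.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container       # container name matches the log above
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
```

The test passes when the container exits successfully after printing the UID, which is why the log waits for phase "Succeeded" rather than "Running".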
Feb 1 13:48:31.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:48:31.250: INFO: namespace downward-api-9564 deletion completed in 6.216840934s • [SLOW TEST:16.803 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:48:31.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 1 13:48:42.138: INFO: Successfully updated pod "pod-update-5b9256ba-ff4a-4f1a-b856-1b5749d82e07" STEP: verifying the updated pod is in kubernetes Feb 1 13:48:42.214: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:48:42.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6036" for this suite. 
Feb 1 13:49:04.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:49:04.374: INFO: namespace pods-6036 deletion completed in 22.15491551s • [SLOW TEST:33.124 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:49:04.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:49:12.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6255" for this suite. 
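The Kubelet read-only test that follows schedules a busybox container with a read-only root filesystem and verifies writes fail. A minimal sketch of that spec (pod name and command are illustrative):

```yaml
# Illustrative only: read-only root filesystem enforced via securityContext.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    # The write to / is expected to fail when the rootfs is read-only.
    command: ["sh", "-c", "echo test > /file; sleep 240"]
    securityContext:
      readOnlyRootFilesystem: true
```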
Feb 1 13:49:58.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:49:58.769: INFO: namespace kubelet-test-6255 deletion completed in 46.145149076s • [SLOW TEST:54.393 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:49:58.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Feb 1 13:50:09.443: INFO: Successfully updated pod "annotationupdate7ee3be10-ef2b-4574-9c68-9f05e2728cef" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:50:11.572: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "projected-4347" for this suite. Feb 1 13:50:33.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:50:33.800: INFO: namespace projected-4347 deletion completed in 22.218832252s • [SLOW TEST:35.031 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:50:33.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 1 13:50:34.012: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb982a0c-69d5-4689-8ee7-c04faa409899" in namespace "projected-693" to be "success or failure" Feb 1 13:50:34.025: INFO: Pod "downwardapi-volume-bb982a0c-69d5-4689-8ee7-c04faa409899": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.693585ms Feb 1 13:50:36.039: INFO: Pod "downwardapi-volume-bb982a0c-69d5-4689-8ee7-c04faa409899": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027324061s Feb 1 13:50:38.052: INFO: Pod "downwardapi-volume-bb982a0c-69d5-4689-8ee7-c04faa409899": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039690345s Feb 1 13:50:40.062: INFO: Pod "downwardapi-volume-bb982a0c-69d5-4689-8ee7-c04faa409899": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049915542s Feb 1 13:50:42.075: INFO: Pod "downwardapi-volume-bb982a0c-69d5-4689-8ee7-c04faa409899": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062756081s Feb 1 13:50:44.086: INFO: Pod "downwardapi-volume-bb982a0c-69d5-4689-8ee7-c04faa409899": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073499601s STEP: Saw pod success Feb 1 13:50:44.086: INFO: Pod "downwardapi-volume-bb982a0c-69d5-4689-8ee7-c04faa409899" satisfied condition "success or failure" Feb 1 13:50:44.088: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-bb982a0c-69d5-4689-8ee7-c04faa409899 container client-container: STEP: delete the pod Feb 1 13:50:44.143: INFO: Waiting for pod downwardapi-volume-bb982a0c-69d5-4689-8ee7-c04faa409899 to disappear Feb 1 13:50:44.165: INFO: Pod downwardapi-volume-bb982a0c-69d5-4689-8ee7-c04faa409899 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:50:44.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-693" for this suite. 
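The projected downwardAPI memory-limit test above mounts the container's own resource limit as a file via a projected volume. A minimal sketch of that arrangement (pod name, limit value, and paths are illustrative):

```yaml
# Illustrative only: container memory limit surfaced through a projected downwardAPI volume.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name matches the log above
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"               # illustrative value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```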
Feb 1 13:50:50.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:50:50.434: INFO: namespace projected-693 deletion completed in 6.26253809s • [SLOW TEST:16.633 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:50:50.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-e8ea249c-f04f-4940-9380-43bc7d1a34d3 STEP: Creating a pod to test consume secrets Feb 1 13:50:50.652: INFO: Waiting up to 5m0s for pod "pod-secrets-4801709b-323a-476a-bf40-925e4d7e96de" in namespace "secrets-4830" to be "success or failure" Feb 1 13:50:50.661: INFO: Pod "pod-secrets-4801709b-323a-476a-bf40-925e4d7e96de": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.290865ms Feb 1 13:50:52.668: INFO: Pod "pod-secrets-4801709b-323a-476a-bf40-925e4d7e96de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016529948s Feb 1 13:50:54.681: INFO: Pod "pod-secrets-4801709b-323a-476a-bf40-925e4d7e96de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028646622s Feb 1 13:50:56.701: INFO: Pod "pod-secrets-4801709b-323a-476a-bf40-925e4d7e96de": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049341722s Feb 1 13:50:58.711: INFO: Pod "pod-secrets-4801709b-323a-476a-bf40-925e4d7e96de": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058807278s Feb 1 13:51:00.721: INFO: Pod "pod-secrets-4801709b-323a-476a-bf40-925e4d7e96de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068706279s STEP: Saw pod success Feb 1 13:51:00.721: INFO: Pod "pod-secrets-4801709b-323a-476a-bf40-925e4d7e96de" satisfied condition "success or failure" Feb 1 13:51:00.724: INFO: Trying to get logs from node iruya-node pod pod-secrets-4801709b-323a-476a-bf40-925e4d7e96de container secret-volume-test: STEP: delete the pod Feb 1 13:51:00.780: INFO: Waiting for pod pod-secrets-4801709b-323a-476a-bf40-925e4d7e96de to disappear Feb 1 13:51:00.820: INFO: Pod pod-secrets-4801709b-323a-476a-bf40-925e4d7e96de no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:51:00.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4830" for this suite. 
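The secrets-volume test above mounts a secret as a non-root user with both `defaultMode` and `fsGroup` set. A minimal sketch of that pattern (names, IDs, and the mode value are illustrative):

```yaml
# Illustrative only: secret volume consumed as non-root with defaultMode and fsGroup.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example          # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                  # non-root (illustrative UID)
    fsGroup: 1001                    # illustrative GID
  restartPolicy: Never
  containers:
  - name: secret-volume-test         # container name matches the log above
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example   # hypothetical secret name
      defaultMode: 0440                 # illustrative file mode
```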
Feb 1 13:51:06.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:51:07.029: INFO: namespace secrets-4830 deletion completed in 6.150667395s • [SLOW TEST:16.594 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:51:07.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Feb 1 13:51:07.139: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:51:20.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9800" for this suite. 
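The init-container test that follows creates a `restartPolicy: Never` pod and checks that its init containers run to completion before the app container starts. A minimal sketch (all names and commands are illustrative):

```yaml
# Illustrative only: init containers on a RestartNever pod.
apiVersion: v1
kind: Pod
metadata:
  name: init-container-example   # hypothetical name
spec:
  restartPolicy: Never
  initContainers:                # run sequentially, before the main container
  - name: init1
    image: busybox
    command: ["true"]
  - name: init2
    image: busybox
    command: ["true"]
  containers:
  - name: run1
    image: busybox
    command: ["true"]
```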
Feb 1 13:51:26.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:51:27.071: INFO: namespace init-container-9800 deletion completed in 6.15324357s • [SLOW TEST:20.042 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:51:27.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 1 13:51:27.189: INFO: Waiting up to 5m0s for pod "pod-fc1af5e1-83a0-49a0-9b1d-7841a3c95572" in namespace "emptydir-1841" to be "success or failure" Feb 1 13:51:27.196: INFO: Pod "pod-fc1af5e1-83a0-49a0-9b1d-7841a3c95572": Phase="Pending", Reason="", readiness=false. Elapsed: 6.64244ms Feb 1 13:51:29.209: INFO: Pod "pod-fc1af5e1-83a0-49a0-9b1d-7841a3c95572": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019929465s Feb 1 13:51:31.223: INFO: Pod "pod-fc1af5e1-83a0-49a0-9b1d-7841a3c95572": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.034201442s Feb 1 13:51:33.234: INFO: Pod "pod-fc1af5e1-83a0-49a0-9b1d-7841a3c95572": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044527325s Feb 1 13:51:35.242: INFO: Pod "pod-fc1af5e1-83a0-49a0-9b1d-7841a3c95572": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052504709s Feb 1 13:51:37.251: INFO: Pod "pod-fc1af5e1-83a0-49a0-9b1d-7841a3c95572": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061676812s STEP: Saw pod success Feb 1 13:51:37.251: INFO: Pod "pod-fc1af5e1-83a0-49a0-9b1d-7841a3c95572" satisfied condition "success or failure" Feb 1 13:51:37.255: INFO: Trying to get logs from node iruya-node pod pod-fc1af5e1-83a0-49a0-9b1d-7841a3c95572 container test-container: STEP: delete the pod Feb 1 13:51:37.398: INFO: Waiting for pod pod-fc1af5e1-83a0-49a0-9b1d-7841a3c95572 to disappear Feb 1 13:51:37.406: INFO: Pod pod-fc1af5e1-83a0-49a0-9b1d-7841a3c95572 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:51:37.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1841" for this suite. 
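The emptyDir test above writes a 0644 file to a default-medium (node disk) emptyDir as a non-root user. A minimal sketch of that setup (names, UID, and command are illustrative):

```yaml
# Illustrative only: emptyDir on the default medium, written by a non-root user.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-example        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container        # container name matches the log above
    image: busybox
    securityContext:
      runAsUser: 1000           # non-root (illustrative UID)
    command: ["sh", "-c", "echo data > /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                # default medium = node-local disk
```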
Feb 1 13:51:43.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:51:43.583: INFO: namespace emptydir-1841 deletion completed in 6.165920943s • [SLOW TEST:16.512 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:51:43.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Feb 1 13:51:43.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7033' Feb 1 13:51:45.971: INFO: stderr: "" Feb 1 13:51:45.971: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Feb 1 13:51:46.984: INFO: Selector matched 1 pods for map[app:redis] Feb 1 13:51:46.984: INFO: Found 0 / 1 Feb 1 13:51:47.983: INFO: Selector matched 1 pods for map[app:redis] Feb 1 13:51:47.984: INFO: Found 0 / 1 Feb 1 13:51:48.984: INFO: Selector matched 1 pods for map[app:redis] Feb 1 13:51:48.984: INFO: Found 0 / 1 Feb 1 13:51:49.982: INFO: Selector matched 1 pods for map[app:redis] Feb 1 13:51:49.982: INFO: Found 0 / 1 Feb 1 13:51:50.982: INFO: Selector matched 1 pods for map[app:redis] Feb 1 13:51:50.982: INFO: Found 0 / 1 Feb 1 13:51:51.981: INFO: Selector matched 1 pods for map[app:redis] Feb 1 13:51:51.981: INFO: Found 0 / 1 Feb 1 13:51:52.983: INFO: Selector matched 1 pods for map[app:redis] Feb 1 13:51:52.983: INFO: Found 0 / 1 Feb 1 13:51:53.986: INFO: Selector matched 1 pods for map[app:redis] Feb 1 13:51:53.986: INFO: Found 1 / 1 Feb 1 13:51:53.986: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Feb 1 13:51:53.992: INFO: Selector matched 1 pods for map[app:redis] Feb 1 13:51:53.992: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Feb 1 13:51:53.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-djf8f --namespace=kubectl-7033 -p {"metadata":{"annotations":{"x":"y"}}}' Feb 1 13:51:54.248: INFO: stderr: "" Feb 1 13:51:54.248: INFO: stdout: "pod/redis-master-djf8f patched\n" STEP: checking annotations Feb 1 13:51:54.257: INFO: Selector matched 1 pods for map[app:redis] Feb 1 13:51:54.257: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:51:54.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7033" for this suite. 
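The `kubectl patch` invocation in the log above applies a strategic-merge patch body of `{"metadata":{"annotations":{"x":"y"}}}`. Rendered as YAML, the same patch is just:

```yaml
# The annotation patch applied by the test, shown as YAML instead of inline JSON.
metadata:
  annotations:
    x: "y"
```

Strategic merge means only the listed annotation is added; existing metadata on the pod is left untouched, which is why the test can then simply re-read the pod and check the annotation is present.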
Feb 1 13:52:34.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:52:34.421: INFO: namespace kubectl-7033 deletion completed in 40.159837593s • [SLOW TEST:50.837 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:52:34.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-3937d1ad-d41b-494b-9ea5-f170e87b0e85 STEP: Creating a pod to test consume secrets Feb 1 13:52:34.577: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7eedd677-a1b4-4e8c-8ca5-3226bae180a5" in namespace "projected-5446" to be "success or failure" Feb 1 13:52:34.590: INFO: Pod "pod-projected-secrets-7eedd677-a1b4-4e8c-8ca5-3226bae180a5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.928138ms Feb 1 13:52:36.612: INFO: Pod "pod-projected-secrets-7eedd677-a1b4-4e8c-8ca5-3226bae180a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03423145s Feb 1 13:52:38.633: INFO: Pod "pod-projected-secrets-7eedd677-a1b4-4e8c-8ca5-3226bae180a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054883878s Feb 1 13:52:40.665: INFO: Pod "pod-projected-secrets-7eedd677-a1b4-4e8c-8ca5-3226bae180a5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086751859s Feb 1 13:52:42.671: INFO: Pod "pod-projected-secrets-7eedd677-a1b4-4e8c-8ca5-3226bae180a5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093342652s Feb 1 13:52:44.677: INFO: Pod "pod-projected-secrets-7eedd677-a1b4-4e8c-8ca5-3226bae180a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.099423993s STEP: Saw pod success Feb 1 13:52:44.677: INFO: Pod "pod-projected-secrets-7eedd677-a1b4-4e8c-8ca5-3226bae180a5" satisfied condition "success or failure" Feb 1 13:52:44.680: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-7eedd677-a1b4-4e8c-8ca5-3226bae180a5 container projected-secret-volume-test: STEP: delete the pod Feb 1 13:52:44.727: INFO: Waiting for pod pod-projected-secrets-7eedd677-a1b4-4e8c-8ca5-3226bae180a5 to disappear Feb 1 13:52:44.763: INFO: Pod pod-projected-secrets-7eedd677-a1b4-4e8c-8ca5-3226bae180a5 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:52:44.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5446" for this suite. 
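The projected-secret test above consumes a secret through a `projected` volume rather than a plain `secret` volume. A minimal sketch of the difference (names and paths are illustrative):

```yaml
# Illustrative only: secret consumed via a projected volume source.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test     # container name matches the log above
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-example   # hypothetical secret name
```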
Feb 1 13:52:50.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:52:51.033: INFO: namespace projected-5446 deletion completed in 6.259789262s • [SLOW TEST:16.611 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:52:51.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-ba9a337c-d3a7-400e-b9ee-8ee30c570589 STEP: Creating a pod to test consume configMaps Feb 1 13:52:51.208: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2b4d2d45-bb0d-41c0-bdc1-61f592831e00" in namespace "projected-8575" to be "success or failure" Feb 1 13:52:51.221: INFO: Pod "pod-projected-configmaps-2b4d2d45-bb0d-41c0-bdc1-61f592831e00": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.103186ms Feb 1 13:52:53.230: INFO: Pod "pod-projected-configmaps-2b4d2d45-bb0d-41c0-bdc1-61f592831e00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022123556s Feb 1 13:52:55.243: INFO: Pod "pod-projected-configmaps-2b4d2d45-bb0d-41c0-bdc1-61f592831e00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035305069s Feb 1 13:52:57.254: INFO: Pod "pod-projected-configmaps-2b4d2d45-bb0d-41c0-bdc1-61f592831e00": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046045007s Feb 1 13:52:59.264: INFO: Pod "pod-projected-configmaps-2b4d2d45-bb0d-41c0-bdc1-61f592831e00": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055835993s Feb 1 13:53:01.277: INFO: Pod "pod-projected-configmaps-2b4d2d45-bb0d-41c0-bdc1-61f592831e00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069676969s STEP: Saw pod success Feb 1 13:53:01.278: INFO: Pod "pod-projected-configmaps-2b4d2d45-bb0d-41c0-bdc1-61f592831e00" satisfied condition "success or failure" Feb 1 13:53:01.283: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-2b4d2d45-bb0d-41c0-bdc1-61f592831e00 container projected-configmap-volume-test: STEP: delete the pod Feb 1 13:53:01.438: INFO: Waiting for pod pod-projected-configmaps-2b4d2d45-bb0d-41c0-bdc1-61f592831e00 to disappear Feb 1 13:53:01.446: INFO: Pod pod-projected-configmaps-2b4d2d45-bb0d-41c0-bdc1-61f592831e00 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:53:01.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8575" for this suite. 
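The projected-configMap test above maps a configMap key to a custom path inside the volume ("with mappings") and reads it as non-root. A minimal sketch (names, key, path, and UID are illustrative):

```yaml
# Illustrative only: projected configMap with an items key-to-path mapping, read as non-root.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test    # container name matches the log above
    image: busybox
    securityContext:
      runAsUser: 1000                        # non-root (illustrative UID)
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-example  # hypothetical configMap name
          items:
          - key: data-2                      # illustrative key
            path: path/to/data-2             # remapped target path
```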
Feb 1 13:53:07.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:53:07.657: INFO: namespace projected-8575 deletion completed in 6.20527757s • [SLOW TEST:16.624 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:53:07.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-3807/secret-test-216bd2f4-4e58-4500-a759-b9f2f118996d STEP: Creating a pod to test consume secrets Feb 1 13:53:07.745: INFO: Waiting up to 5m0s for pod "pod-configmaps-775ee186-90db-4cc2-82bc-57228c7713c9" in namespace "secrets-3807" to be "success or failure" Feb 1 13:53:07.768: INFO: Pod "pod-configmaps-775ee186-90db-4cc2-82bc-57228c7713c9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 23.229459ms Feb 1 13:53:09.782: INFO: Pod "pod-configmaps-775ee186-90db-4cc2-82bc-57228c7713c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037197575s Feb 1 13:53:11.800: INFO: Pod "pod-configmaps-775ee186-90db-4cc2-82bc-57228c7713c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055053312s Feb 1 13:53:13.813: INFO: Pod "pod-configmaps-775ee186-90db-4cc2-82bc-57228c7713c9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067943197s Feb 1 13:53:15.820: INFO: Pod "pod-configmaps-775ee186-90db-4cc2-82bc-57228c7713c9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07552665s Feb 1 13:53:17.831: INFO: Pod "pod-configmaps-775ee186-90db-4cc2-82bc-57228c7713c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086316334s STEP: Saw pod success Feb 1 13:53:17.831: INFO: Pod "pod-configmaps-775ee186-90db-4cc2-82bc-57228c7713c9" satisfied condition "success or failure" Feb 1 13:53:17.835: INFO: Trying to get logs from node iruya-node pod pod-configmaps-775ee186-90db-4cc2-82bc-57228c7713c9 container env-test: STEP: delete the pod Feb 1 13:53:17.974: INFO: Waiting for pod pod-configmaps-775ee186-90db-4cc2-82bc-57228c7713c9 to disappear Feb 1 13:53:17.985: INFO: Pod pod-configmaps-775ee186-90db-4cc2-82bc-57228c7713c9 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:53:17.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3807" for this suite. 
Feb 1 13:53:24.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:53:24.158: INFO: namespace secrets-3807 deletion completed in 6.165190826s • [SLOW TEST:16.499 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:53:24.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 1 13:53:24.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-630' Feb 1 13:53:24.403: INFO: stderr: "kubectl run --generator=job/v1 is 
DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 1 13:53:24.403: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Feb 1 13:53:24.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-630' Feb 1 13:53:24.621: INFO: stderr: "" Feb 1 13:53:24.621: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:53:24.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-630" for this suite. Feb 1 13:53:46.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:53:46.843: INFO: namespace kubectl-630 deletion completed in 22.208327863s • [SLOW TEST:22.683 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: 
Creating a kubernetes client Feb 1 13:53:46.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Feb 1 13:53:58.174: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:53:59.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1894" for this suite. 
Feb 1 13:54:23.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:54:23.589: INFO: namespace replicaset-1894 deletion completed in 24.286567745s • [SLOW TEST:36.746 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:54:23.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-3a89945c-b801-483d-889e-f3c1c546147a in namespace container-probe-1122 Feb 1 13:54:33.828: INFO: Started pod liveness-3a89945c-b801-483d-889e-f3c1c546147a in namespace container-probe-1122 STEP: checking the pod's current state and verifying that restartCount is present Feb 1 13:54:33.833: INFO: Initial restart count of pod liveness-3a89945c-b801-483d-889e-f3c1c546147a is 0 Feb 1 13:54:56.046: INFO: Restart count of pod 
container-probe-1122/liveness-3a89945c-b801-483d-889e-f3c1c546147a is now 1 (22.211996728s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:54:56.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1122" for this suite. Feb 1 13:55:03.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:55:03.214: INFO: namespace container-probe-1122 deletion completed in 7.121850574s • [SLOW TEST:39.623 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:55:03.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-x9mc STEP: Creating a pod to 
test atomic-volume-subpath Feb 1 13:55:03.329: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-x9mc" in namespace "subpath-3755" to be "success or failure" Feb 1 13:55:03.345: INFO: Pod "pod-subpath-test-configmap-x9mc": Phase="Pending", Reason="", readiness=false. Elapsed: 15.618496ms Feb 1 13:55:05.361: INFO: Pod "pod-subpath-test-configmap-x9mc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031344611s Feb 1 13:55:07.370: INFO: Pod "pod-subpath-test-configmap-x9mc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040924861s Feb 1 13:55:09.384: INFO: Pod "pod-subpath-test-configmap-x9mc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055246454s Feb 1 13:55:11.395: INFO: Pod "pod-subpath-test-configmap-x9mc": Phase="Running", Reason="", readiness=true. Elapsed: 8.065903817s Feb 1 13:55:13.403: INFO: Pod "pod-subpath-test-configmap-x9mc": Phase="Running", Reason="", readiness=true. Elapsed: 10.073282841s Feb 1 13:55:15.423: INFO: Pod "pod-subpath-test-configmap-x9mc": Phase="Running", Reason="", readiness=true. Elapsed: 12.094212361s Feb 1 13:55:17.435: INFO: Pod "pod-subpath-test-configmap-x9mc": Phase="Running", Reason="", readiness=true. Elapsed: 14.105926493s Feb 1 13:55:19.449: INFO: Pod "pod-subpath-test-configmap-x9mc": Phase="Running", Reason="", readiness=true. Elapsed: 16.119377775s Feb 1 13:55:21.458: INFO: Pod "pod-subpath-test-configmap-x9mc": Phase="Running", Reason="", readiness=true. Elapsed: 18.129111748s Feb 1 13:55:23.472: INFO: Pod "pod-subpath-test-configmap-x9mc": Phase="Running", Reason="", readiness=true. Elapsed: 20.142389135s Feb 1 13:55:25.481: INFO: Pod "pod-subpath-test-configmap-x9mc": Phase="Running", Reason="", readiness=true. Elapsed: 22.152110664s Feb 1 13:55:27.488: INFO: Pod "pod-subpath-test-configmap-x9mc": Phase="Running", Reason="", readiness=true. Elapsed: 24.158815586s Feb 1 13:55:29.505: INFO: Pod "pod-subpath-test-configmap-x9mc": Phase="Running", Reason="", readiness=true. 
Elapsed: 26.175870174s Feb 1 13:55:31.534: INFO: Pod "pod-subpath-test-configmap-x9mc": Phase="Running", Reason="", readiness=true. Elapsed: 28.204287452s Feb 1 13:55:33.541: INFO: Pod "pod-subpath-test-configmap-x9mc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.211915576s STEP: Saw pod success Feb 1 13:55:33.541: INFO: Pod "pod-subpath-test-configmap-x9mc" satisfied condition "success or failure" Feb 1 13:55:33.547: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-x9mc container test-container-subpath-configmap-x9mc: STEP: delete the pod Feb 1 13:55:33.724: INFO: Waiting for pod pod-subpath-test-configmap-x9mc to disappear Feb 1 13:55:33.733: INFO: Pod pod-subpath-test-configmap-x9mc no longer exists STEP: Deleting pod pod-subpath-test-configmap-x9mc Feb 1 13:55:33.734: INFO: Deleting pod "pod-subpath-test-configmap-x9mc" in namespace "subpath-3755" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:55:33.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3755" for this suite. 
Feb 1 13:55:39.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:55:39.936: INFO: namespace subpath-3755 deletion completed in 6.195504368s • [SLOW TEST:36.721 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:55:39.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-faa8c8e7-c604-4c5a-83d3-d9ab8bddf04f STEP: Creating a pod to test consume secrets Feb 1 13:55:40.201: INFO: Waiting up to 5m0s for pod "pod-secrets-86f3a46c-49ce-4e15-9707-f627a351b803" in namespace "secrets-3437" to be "success or failure" Feb 1 13:55:40.206: INFO: Pod "pod-secrets-86f3a46c-49ce-4e15-9707-f627a351b803": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.582708ms Feb 1 13:55:42.217: INFO: Pod "pod-secrets-86f3a46c-49ce-4e15-9707-f627a351b803": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01582189s Feb 1 13:55:44.224: INFO: Pod "pod-secrets-86f3a46c-49ce-4e15-9707-f627a351b803": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02285915s Feb 1 13:55:46.235: INFO: Pod "pod-secrets-86f3a46c-49ce-4e15-9707-f627a351b803": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033879741s Feb 1 13:55:48.246: INFO: Pod "pod-secrets-86f3a46c-49ce-4e15-9707-f627a351b803": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045295253s Feb 1 13:55:50.256: INFO: Pod "pod-secrets-86f3a46c-49ce-4e15-9707-f627a351b803": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.055143302s STEP: Saw pod success Feb 1 13:55:50.256: INFO: Pod "pod-secrets-86f3a46c-49ce-4e15-9707-f627a351b803" satisfied condition "success or failure" Feb 1 13:55:50.262: INFO: Trying to get logs from node iruya-node pod pod-secrets-86f3a46c-49ce-4e15-9707-f627a351b803 container secret-volume-test: STEP: delete the pod Feb 1 13:55:50.353: INFO: Waiting for pod pod-secrets-86f3a46c-49ce-4e15-9707-f627a351b803 to disappear Feb 1 13:55:50.362: INFO: Pod pod-secrets-86f3a46c-49ce-4e15-9707-f627a351b803 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:55:50.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3437" for this suite. 
Feb 1 13:55:56.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:55:56.566: INFO: namespace secrets-3437 deletion completed in 6.194180421s • [SLOW TEST:16.630 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:55:56.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 1 13:55:56.795: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3820da0-aede-4579-900c-853a086637c8" in namespace "downward-api-4914" to be "success or failure" Feb 1 13:55:56.822: INFO: Pod "downwardapi-volume-f3820da0-aede-4579-900c-853a086637c8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.915316ms Feb 1 13:55:58.833: INFO: Pod "downwardapi-volume-f3820da0-aede-4579-900c-853a086637c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038024402s Feb 1 13:56:00.841: INFO: Pod "downwardapi-volume-f3820da0-aede-4579-900c-853a086637c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046057662s Feb 1 13:56:02.850: INFO: Pod "downwardapi-volume-f3820da0-aede-4579-900c-853a086637c8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055043951s Feb 1 13:56:04.871: INFO: Pod "downwardapi-volume-f3820da0-aede-4579-900c-853a086637c8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076057878s Feb 1 13:56:06.892: INFO: Pod "downwardapi-volume-f3820da0-aede-4579-900c-853a086637c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.096827311s STEP: Saw pod success Feb 1 13:56:06.893: INFO: Pod "downwardapi-volume-f3820da0-aede-4579-900c-853a086637c8" satisfied condition "success or failure" Feb 1 13:56:06.909: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f3820da0-aede-4579-900c-853a086637c8 container client-container: STEP: delete the pod Feb 1 13:56:06.984: INFO: Waiting for pod downwardapi-volume-f3820da0-aede-4579-900c-853a086637c8 to disappear Feb 1 13:56:06.997: INFO: Pod downwardapi-volume-f3820da0-aede-4579-900c-853a086637c8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:56:06.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4914" for this suite. 
Feb 1 13:56:13.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:56:13.234: INFO: namespace downward-api-4914 deletion completed in 6.226212862s • [SLOW TEST:16.666 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:56:13.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-319, will wait for the garbage collector to delete the pods Feb 1 13:56:25.457: INFO: Deleting Job.batch foo took: 7.326936ms Feb 1 13:56:25.757: INFO: Terminating Job.batch foo pods took: 300.470095ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:57:06.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-319" for this suite. 
Feb 1 13:57:12.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:57:12.748: INFO: namespace job-319 deletion completed in 6.172629086s • [SLOW TEST:59.513 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:57:12.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Feb 1 13:57:20.966: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-26d85ff7-2c3a-40f1-b528-1a9d558de4c4,GenerateName:,Namespace:events-9168,SelfLink:/api/v1/namespaces/events-9168/pods/send-events-26d85ff7-2c3a-40f1-b528-1a9d558de4c4,UID:03c0717f-60cf-44ca-ae79-847fcba2b548,ResourceVersion:22696818,Generation:0,CreationTimestamp:2020-02-01 13:57:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 
920169619,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c88tz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c88tz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-c88tz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cc1e90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cc1eb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:57:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:57:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:57:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:57:12 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-01 13:57:13 +0000 UTC,ContainerStatuses:[{p {nil 
ContainerStateRunning{StartedAt:2020-02-01 13:57:20 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://683be4afb0dfc864da1cf415e17a797380204b4c76a4f30f16562827829d552c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Feb 1 13:57:22.978: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Feb 1 13:57:25.013: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:57:25.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9168" for this suite. Feb 1 13:58:09.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:58:09.178: INFO: namespace events-9168 deletion completed in 44.144678816s • [SLOW TEST:56.428 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:58:09.179: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-ffc0990a-1fea-4597-b064-7ef91f35252d STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:58:21.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4386" for this suite. Feb 1 13:58:43.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:58:43.669: INFO: namespace configmap-4386 deletion completed in 22.179209848s • [SLOW TEST:34.491 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:58:43.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3412 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-3412 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3412 Feb 1 13:58:43.865: INFO: Found 0 stateful pods, waiting for 1 Feb 1 13:58:53.878: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Feb 1 13:58:53.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3412 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 1 13:58:54.694: INFO: stderr: "I0201 13:58:54.196887 1041 log.go:172] (0xc00098c2c0) (0xc0008c65a0) Create stream\nI0201 13:58:54.197247 1041 log.go:172] (0xc00098c2c0) (0xc0008c65a0) Stream added, broadcasting: 1\nI0201 13:58:54.228673 1041 log.go:172] (0xc00098c2c0) Reply frame received for 1\nI0201 13:58:54.228913 1041 log.go:172] (0xc00098c2c0) (0xc0008c0000) Create stream\nI0201 13:58:54.228963 1041 log.go:172] (0xc00098c2c0) (0xc0008c0000) Stream added, broadcasting: 3\nI0201 13:58:54.235748 1041 log.go:172] (0xc00098c2c0) Reply frame received for 3\nI0201 13:58:54.235869 1041 log.go:172] (0xc00098c2c0) (0xc0008c00a0) Create stream\nI0201 13:58:54.235900 1041 log.go:172] (0xc00098c2c0) (0xc0008c00a0) Stream added, broadcasting: 5\nI0201 13:58:54.239289 1041 log.go:172] (0xc00098c2c0) Reply frame received for 5\nI0201 
13:58:54.455749 1041 log.go:172] (0xc00098c2c0) Data frame received for 5\nI0201 13:58:54.455830 1041 log.go:172] (0xc0008c00a0) (5) Data frame handling\nI0201 13:58:54.455856 1041 log.go:172] (0xc0008c00a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0201 13:58:54.562831 1041 log.go:172] (0xc00098c2c0) Data frame received for 3\nI0201 13:58:54.562881 1041 log.go:172] (0xc0008c0000) (3) Data frame handling\nI0201 13:58:54.562921 1041 log.go:172] (0xc0008c0000) (3) Data frame sent\nI0201 13:58:54.683214 1041 log.go:172] (0xc00098c2c0) (0xc0008c0000) Stream removed, broadcasting: 3\nI0201 13:58:54.683323 1041 log.go:172] (0xc00098c2c0) Data frame received for 1\nI0201 13:58:54.683348 1041 log.go:172] (0xc0008c65a0) (1) Data frame handling\nI0201 13:58:54.683376 1041 log.go:172] (0xc0008c65a0) (1) Data frame sent\nI0201 13:58:54.683392 1041 log.go:172] (0xc00098c2c0) (0xc0008c65a0) Stream removed, broadcasting: 1\nI0201 13:58:54.683551 1041 log.go:172] (0xc00098c2c0) (0xc0008c00a0) Stream removed, broadcasting: 5\nI0201 13:58:54.683587 1041 log.go:172] (0xc00098c2c0) Go away received\nI0201 13:58:54.684063 1041 log.go:172] (0xc00098c2c0) (0xc0008c65a0) Stream removed, broadcasting: 1\nI0201 13:58:54.684073 1041 log.go:172] (0xc00098c2c0) (0xc0008c0000) Stream removed, broadcasting: 3\nI0201 13:58:54.684077 1041 log.go:172] (0xc00098c2c0) (0xc0008c00a0) Stream removed, broadcasting: 5\n" Feb 1 13:58:54.694: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 1 13:58:54.694: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 1 13:58:54.706: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 1 13:59:04.734: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 1 13:59:04.734: INFO: Waiting for statefulset status.replicas updated to 0 Feb 1 
13:59:04.772: INFO: POD NODE PHASE GRACE CONDITIONS Feb 1 13:59:04.772: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:58:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:58:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:58:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:58:43 +0000 UTC }] Feb 1 13:59:04.772: INFO: Feb 1 13:59:04.772: INFO: StatefulSet ss has not reached scale 3, at 1 Feb 1 13:59:06.153: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.977332917s Feb 1 13:59:07.327: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.596198692s Feb 1 13:59:08.339: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.422607153s Feb 1 13:59:09.351: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.410943977s Feb 1 13:59:10.601: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.398580972s Feb 1 13:59:11.725: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.148693357s Feb 1 13:59:13.030: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.024507663s Feb 1 13:59:14.138: INFO: Verifying statefulset ss doesn't scale past 3 for another 720.182306ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3412 Feb 1 13:59:15.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3412 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 13:59:15.840: INFO: stderr: "I0201 13:59:15.413080 1061 log.go:172] (0xc00078e6e0) (0xc000696a00) Create stream\nI0201 13:59:15.413181 1061 log.go:172] (0xc00078e6e0) (0xc000696a00) Stream added, broadcasting: 1\nI0201 13:59:15.423376 1061 log.go:172] 
(0xc00078e6e0) Reply frame received for 1\nI0201 13:59:15.423435 1061 log.go:172] (0xc00078e6e0) (0xc000696000) Create stream\nI0201 13:59:15.423447 1061 log.go:172] (0xc00078e6e0) (0xc000696000) Stream added, broadcasting: 3\nI0201 13:59:15.427119 1061 log.go:172] (0xc00078e6e0) Reply frame received for 3\nI0201 13:59:15.427143 1061 log.go:172] (0xc00078e6e0) (0xc0006960a0) Create stream\nI0201 13:59:15.427155 1061 log.go:172] (0xc00078e6e0) (0xc0006960a0) Stream added, broadcasting: 5\nI0201 13:59:15.429007 1061 log.go:172] (0xc00078e6e0) Reply frame received for 5\nI0201 13:59:15.528398 1061 log.go:172] (0xc00078e6e0) Data frame received for 3\nI0201 13:59:15.528503 1061 log.go:172] (0xc000696000) (3) Data frame handling\nI0201 13:59:15.528530 1061 log.go:172] (0xc000696000) (3) Data frame sent\nI0201 13:59:15.528598 1061 log.go:172] (0xc00078e6e0) Data frame received for 5\nI0201 13:59:15.528625 1061 log.go:172] (0xc0006960a0) (5) Data frame handling\nI0201 13:59:15.528654 1061 log.go:172] (0xc0006960a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0201 13:59:15.822346 1061 log.go:172] (0xc00078e6e0) (0xc000696000) Stream removed, broadcasting: 3\nI0201 13:59:15.822650 1061 log.go:172] (0xc00078e6e0) Data frame received for 1\nI0201 13:59:15.822680 1061 log.go:172] (0xc000696a00) (1) Data frame handling\nI0201 13:59:15.822707 1061 log.go:172] (0xc000696a00) (1) Data frame sent\nI0201 13:59:15.822890 1061 log.go:172] (0xc00078e6e0) (0xc000696a00) Stream removed, broadcasting: 1\nI0201 13:59:15.822985 1061 log.go:172] (0xc00078e6e0) (0xc0006960a0) Stream removed, broadcasting: 5\nI0201 13:59:15.823056 1061 log.go:172] (0xc00078e6e0) Go away received\nI0201 13:59:15.824083 1061 log.go:172] (0xc00078e6e0) (0xc000696a00) Stream removed, broadcasting: 1\nI0201 13:59:15.824101 1061 log.go:172] (0xc00078e6e0) (0xc000696000) Stream removed, broadcasting: 3\nI0201 13:59:15.824107 1061 log.go:172] (0xc00078e6e0) (0xc0006960a0) Stream removed, 
broadcasting: 5\n" Feb 1 13:59:15.841: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 1 13:59:15.841: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 1 13:59:15.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 13:59:16.381: INFO: stderr: "I0201 13:59:16.015401 1081 log.go:172] (0xc0006d2c60) (0xc0002e4aa0) Create stream\nI0201 13:59:16.015538 1081 log.go:172] (0xc0006d2c60) (0xc0002e4aa0) Stream added, broadcasting: 1\nI0201 13:59:16.020064 1081 log.go:172] (0xc0006d2c60) Reply frame received for 1\nI0201 13:59:16.020125 1081 log.go:172] (0xc0006d2c60) (0xc0008b2000) Create stream\nI0201 13:59:16.020160 1081 log.go:172] (0xc0006d2c60) (0xc0008b2000) Stream added, broadcasting: 3\nI0201 13:59:16.021320 1081 log.go:172] (0xc0006d2c60) Reply frame received for 3\nI0201 13:59:16.021431 1081 log.go:172] (0xc0006d2c60) (0xc00080e000) Create stream\nI0201 13:59:16.021450 1081 log.go:172] (0xc0006d2c60) (0xc00080e000) Stream added, broadcasting: 5\nI0201 13:59:16.022502 1081 log.go:172] (0xc0006d2c60) Reply frame received for 5\nI0201 13:59:16.197943 1081 log.go:172] (0xc0006d2c60) Data frame received for 5\nI0201 13:59:16.197982 1081 log.go:172] (0xc00080e000) (5) Data frame handling\nI0201 13:59:16.197999 1081 log.go:172] (0xc00080e000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0201 13:59:16.276761 1081 log.go:172] (0xc0006d2c60) Data frame received for 3\nI0201 13:59:16.276821 1081 log.go:172] (0xc0008b2000) (3) Data frame handling\nI0201 13:59:16.276843 1081 log.go:172] (0xc0008b2000) (3) Data frame sent\nI0201 13:59:16.276893 1081 log.go:172] (0xc0006d2c60) Data frame received for 5\nI0201 13:59:16.276917 1081 log.go:172] (0xc00080e000) (5) Data frame handling\nI0201 13:59:16.276940 
1081 log.go:172] (0xc00080e000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0201 13:59:16.374453 1081 log.go:172] (0xc0006d2c60) Data frame received for 1\nI0201 13:59:16.374568 1081 log.go:172] (0xc0002e4aa0) (1) Data frame handling\nI0201 13:59:16.374603 1081 log.go:172] (0xc0002e4aa0) (1) Data frame sent\nI0201 13:59:16.374894 1081 log.go:172] (0xc0006d2c60) (0xc00080e000) Stream removed, broadcasting: 5\nI0201 13:59:16.374980 1081 log.go:172] (0xc0006d2c60) (0xc0008b2000) Stream removed, broadcasting: 3\nI0201 13:59:16.375037 1081 log.go:172] (0xc0006d2c60) (0xc0002e4aa0) Stream removed, broadcasting: 1\nI0201 13:59:16.375053 1081 log.go:172] (0xc0006d2c60) Go away received\nI0201 13:59:16.375728 1081 log.go:172] (0xc0006d2c60) (0xc0002e4aa0) Stream removed, broadcasting: 1\nI0201 13:59:16.375745 1081 log.go:172] (0xc0006d2c60) (0xc0008b2000) Stream removed, broadcasting: 3\nI0201 13:59:16.375755 1081 log.go:172] (0xc0006d2c60) (0xc00080e000) Stream removed, broadcasting: 5\n" Feb 1 13:59:16.382: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 1 13:59:16.382: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 1 13:59:16.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3412 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 13:59:17.013: INFO: stderr: "I0201 13:59:16.619585 1099 log.go:172] (0xc0009840b0) (0xc000a5c5a0) Create stream\nI0201 13:59:16.620084 1099 log.go:172] (0xc0009840b0) (0xc000a5c5a0) Stream added, broadcasting: 1\nI0201 13:59:16.703736 1099 log.go:172] (0xc0009840b0) Reply frame received for 1\nI0201 13:59:16.703901 1099 log.go:172] (0xc0009840b0) (0xc000a5c6e0) Create stream\nI0201 13:59:16.703920 1099 log.go:172] (0xc0009840b0) (0xc000a5c6e0) Stream added, broadcasting: 3\nI0201 13:59:16.706021 
1099 log.go:172] (0xc0009840b0) Reply frame received for 3\nI0201 13:59:16.706078 1099 log.go:172] (0xc0009840b0) (0xc0009a4000) Create stream\nI0201 13:59:16.706109 1099 log.go:172] (0xc0009840b0) (0xc0009a4000) Stream added, broadcasting: 5\nI0201 13:59:16.715897 1099 log.go:172] (0xc0009840b0) Reply frame received for 5\nI0201 13:59:16.845301 1099 log.go:172] (0xc0009840b0) Data frame received for 5\nI0201 13:59:16.845575 1099 log.go:172] (0xc0009a4000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\nI0201 13:59:16.845695 1099 log.go:172] (0xc0009840b0) Data frame received for 3\nI0201 13:59:16.845814 1099 log.go:172] (0xc000a5c6e0) (3) Data frame handling\nI0201 13:59:16.845874 1099 log.go:172] (0xc000a5c6e0) (3) Data frame sent\nI0201 13:59:16.845930 1099 log.go:172] (0xc0009a4000) (5) Data frame sent\nI0201 13:59:16.845970 1099 log.go:172] (0xc0009840b0) Data frame received for 5\nI0201 13:59:16.845988 1099 log.go:172] (0xc0009a4000) (5) Data frame handling\nI0201 13:59:16.846007 1099 log.go:172] (0xc0009a4000) (5) Data frame sent\n+ true\nI0201 13:59:16.996726 1099 log.go:172] (0xc0009840b0) (0xc000a5c6e0) Stream removed, broadcasting: 3\nI0201 13:59:16.996949 1099 log.go:172] (0xc0009840b0) Data frame received for 1\nI0201 13:59:16.996992 1099 log.go:172] (0xc000a5c5a0) (1) Data frame handling\nI0201 13:59:16.997054 1099 log.go:172] (0xc000a5c5a0) (1) Data frame sent\nI0201 13:59:16.997111 1099 log.go:172] (0xc0009840b0) (0xc0009a4000) Stream removed, broadcasting: 5\nI0201 13:59:16.997162 1099 log.go:172] (0xc0009840b0) (0xc000a5c5a0) Stream removed, broadcasting: 1\nI0201 13:59:16.997196 1099 log.go:172] (0xc0009840b0) Go away received\nI0201 13:59:16.998100 1099 log.go:172] (0xc0009840b0) (0xc000a5c5a0) Stream removed, broadcasting: 1\nI0201 13:59:16.998124 1099 log.go:172] (0xc0009840b0) (0xc000a5c6e0) Stream removed, broadcasting: 3\nI0201 13:59:16.998142 1099 
log.go:172] (0xc0009840b0) (0xc0009a4000) Stream removed, broadcasting: 5\n" Feb 1 13:59:17.014: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 1 13:59:17.014: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 1 13:59:17.027: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 1 13:59:17.027: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 1 13:59:17.027: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Feb 1 13:59:17.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3412 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 1 13:59:17.413: INFO: stderr: "I0201 13:59:17.167904 1119 log.go:172] (0xc00013adc0) (0xc0009b0640) Create stream\nI0201 13:59:17.168013 1119 log.go:172] (0xc00013adc0) (0xc0009b0640) Stream added, broadcasting: 1\nI0201 13:59:17.172965 1119 log.go:172] (0xc00013adc0) Reply frame received for 1\nI0201 13:59:17.173065 1119 log.go:172] (0xc00013adc0) (0xc0009b06e0) Create stream\nI0201 13:59:17.173081 1119 log.go:172] (0xc00013adc0) (0xc0009b06e0) Stream added, broadcasting: 3\nI0201 13:59:17.175698 1119 log.go:172] (0xc00013adc0) Reply frame received for 3\nI0201 13:59:17.175730 1119 log.go:172] (0xc00013adc0) (0xc000864000) Create stream\nI0201 13:59:17.175740 1119 log.go:172] (0xc00013adc0) (0xc000864000) Stream added, broadcasting: 5\nI0201 13:59:17.177744 1119 log.go:172] (0xc00013adc0) Reply frame received for 5\nI0201 13:59:17.279239 1119 log.go:172] (0xc00013adc0) Data frame received for 5\nI0201 13:59:17.279298 1119 log.go:172] (0xc000864000) (5) Data frame handling\nI0201 13:59:17.279314 1119 log.go:172] (0xc000864000) (5) Data frame sent\n+ mv -v 
/usr/share/nginx/html/index.html /tmp/\nI0201 13:59:17.279334 1119 log.go:172] (0xc00013adc0) Data frame received for 3\nI0201 13:59:17.279339 1119 log.go:172] (0xc0009b06e0) (3) Data frame handling\nI0201 13:59:17.279351 1119 log.go:172] (0xc0009b06e0) (3) Data frame sent\nI0201 13:59:17.403895 1119 log.go:172] (0xc00013adc0) (0xc0009b06e0) Stream removed, broadcasting: 3\nI0201 13:59:17.404062 1119 log.go:172] (0xc00013adc0) Data frame received for 1\nI0201 13:59:17.404085 1119 log.go:172] (0xc00013adc0) (0xc000864000) Stream removed, broadcasting: 5\nI0201 13:59:17.404119 1119 log.go:172] (0xc0009b0640) (1) Data frame handling\nI0201 13:59:17.404134 1119 log.go:172] (0xc0009b0640) (1) Data frame sent\nI0201 13:59:17.404145 1119 log.go:172] (0xc00013adc0) (0xc0009b0640) Stream removed, broadcasting: 1\nI0201 13:59:17.404171 1119 log.go:172] (0xc00013adc0) Go away received\nI0201 13:59:17.404790 1119 log.go:172] (0xc00013adc0) (0xc0009b0640) Stream removed, broadcasting: 1\nI0201 13:59:17.404809 1119 log.go:172] (0xc00013adc0) (0xc0009b06e0) Stream removed, broadcasting: 3\nI0201 13:59:17.404820 1119 log.go:172] (0xc00013adc0) (0xc000864000) Stream removed, broadcasting: 5\n" Feb 1 13:59:17.414: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 1 13:59:17.414: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 1 13:59:17.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3412 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 1 13:59:17.765: INFO: stderr: "I0201 13:59:17.598578 1140 log.go:172] (0xc00042e2c0) (0xc0009606e0) Create stream\nI0201 13:59:17.598676 1140 log.go:172] (0xc00042e2c0) (0xc0009606e0) Stream added, broadcasting: 1\nI0201 13:59:17.602496 1140 log.go:172] (0xc00042e2c0) Reply frame received for 1\nI0201 13:59:17.602572 1140 log.go:172] (0xc00042e2c0) 
(0xc0005941e0) Create stream\nI0201 13:59:17.602591 1140 log.go:172] (0xc00042e2c0) (0xc0005941e0) Stream added, broadcasting: 3\nI0201 13:59:17.603409 1140 log.go:172] (0xc00042e2c0) Reply frame received for 3\nI0201 13:59:17.603430 1140 log.go:172] (0xc00042e2c0) (0xc000960780) Create stream\nI0201 13:59:17.603436 1140 log.go:172] (0xc00042e2c0) (0xc000960780) Stream added, broadcasting: 5\nI0201 13:59:17.604310 1140 log.go:172] (0xc00042e2c0) Reply frame received for 5\nI0201 13:59:17.671443 1140 log.go:172] (0xc00042e2c0) Data frame received for 5\nI0201 13:59:17.671522 1140 log.go:172] (0xc000960780) (5) Data frame handling\nI0201 13:59:17.671539 1140 log.go:172] (0xc000960780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0201 13:59:17.693550 1140 log.go:172] (0xc00042e2c0) Data frame received for 3\nI0201 13:59:17.693577 1140 log.go:172] (0xc0005941e0) (3) Data frame handling\nI0201 13:59:17.693594 1140 log.go:172] (0xc0005941e0) (3) Data frame sent\nI0201 13:59:17.757542 1140 log.go:172] (0xc00042e2c0) (0xc0005941e0) Stream removed, broadcasting: 3\nI0201 13:59:17.757634 1140 log.go:172] (0xc00042e2c0) Data frame received for 1\nI0201 13:59:17.757658 1140 log.go:172] (0xc0009606e0) (1) Data frame handling\nI0201 13:59:17.757668 1140 log.go:172] (0xc0009606e0) (1) Data frame sent\nI0201 13:59:17.757676 1140 log.go:172] (0xc00042e2c0) (0xc0009606e0) Stream removed, broadcasting: 1\nI0201 13:59:17.757713 1140 log.go:172] (0xc00042e2c0) (0xc000960780) Stream removed, broadcasting: 5\nI0201 13:59:17.757776 1140 log.go:172] (0xc00042e2c0) Go away received\nI0201 13:59:17.758098 1140 log.go:172] (0xc00042e2c0) (0xc0009606e0) Stream removed, broadcasting: 1\nI0201 13:59:17.758106 1140 log.go:172] (0xc00042e2c0) (0xc0005941e0) Stream removed, broadcasting: 3\nI0201 13:59:17.758110 1140 log.go:172] (0xc00042e2c0) (0xc000960780) Stream removed, broadcasting: 5\n" Feb 1 13:59:17.766: INFO: stdout: "'/usr/share/nginx/html/index.html' -> 
'/tmp/index.html'\n" Feb 1 13:59:17.766: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 1 13:59:17.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3412 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 1 13:59:18.369: INFO: stderr: "I0201 13:59:17.983415 1160 log.go:172] (0xc000954630) (0xc000658be0) Create stream\nI0201 13:59:17.983752 1160 log.go:172] (0xc000954630) (0xc000658be0) Stream added, broadcasting: 1\nI0201 13:59:17.998036 1160 log.go:172] (0xc000954630) Reply frame received for 1\nI0201 13:59:17.998153 1160 log.go:172] (0xc000954630) (0xc0009e6000) Create stream\nI0201 13:59:17.998177 1160 log.go:172] (0xc000954630) (0xc0009e6000) Stream added, broadcasting: 3\nI0201 13:59:18.000016 1160 log.go:172] (0xc000954630) Reply frame received for 3\nI0201 13:59:18.000043 1160 log.go:172] (0xc000954630) (0xc0007d2000) Create stream\nI0201 13:59:18.000053 1160 log.go:172] (0xc000954630) (0xc0007d2000) Stream added, broadcasting: 5\nI0201 13:59:18.001333 1160 log.go:172] (0xc000954630) Reply frame received for 5\nI0201 13:59:18.137104 1160 log.go:172] (0xc000954630) Data frame received for 5\nI0201 13:59:18.137205 1160 log.go:172] (0xc0007d2000) (5) Data frame handling\nI0201 13:59:18.137275 1160 log.go:172] (0xc0007d2000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0201 13:59:18.165808 1160 log.go:172] (0xc000954630) Data frame received for 3\nI0201 13:59:18.166147 1160 log.go:172] (0xc0009e6000) (3) Data frame handling\nI0201 13:59:18.166194 1160 log.go:172] (0xc0009e6000) (3) Data frame sent\nI0201 13:59:18.344778 1160 log.go:172] (0xc000954630) Data frame received for 1\nI0201 13:59:18.344909 1160 log.go:172] (0xc000658be0) (1) Data frame handling\nI0201 13:59:18.344932 1160 log.go:172] (0xc000658be0) (1) Data frame sent\nI0201 13:59:18.347655 1160 log.go:172] 
(0xc000954630) (0xc000658be0) Stream removed, broadcasting: 1\nI0201 13:59:18.349138 1160 log.go:172] (0xc000954630) (0xc0009e6000) Stream removed, broadcasting: 3\nI0201 13:59:18.349457 1160 log.go:172] (0xc000954630) (0xc0007d2000) Stream removed, broadcasting: 5\nI0201 13:59:18.349613 1160 log.go:172] (0xc000954630) Go away received\nI0201 13:59:18.349813 1160 log.go:172] (0xc000954630) (0xc000658be0) Stream removed, broadcasting: 1\nI0201 13:59:18.349859 1160 log.go:172] (0xc000954630) (0xc0009e6000) Stream removed, broadcasting: 3\nI0201 13:59:18.349875 1160 log.go:172] (0xc000954630) (0xc0007d2000) Stream removed, broadcasting: 5\n" Feb 1 13:59:18.370: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 1 13:59:18.370: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 1 13:59:18.370: INFO: Waiting for statefulset status.replicas updated to 0 Feb 1 13:59:18.414: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Feb 1 13:59:28.432: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 1 13:59:28.432: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 1 13:59:28.432: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 1 13:59:28.466: INFO: POD NODE PHASE GRACE CONDITIONS Feb 1 13:59:28.466: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:58:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:58:43 +0000 UTC }] Feb 1 13:59:28.466: INFO: ss-1 
iruya-server-sfge57q7djm7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC }] Feb 1 13:59:28.466: INFO: ss-2 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC }] Feb 1 13:59:28.466: INFO: Feb 1 13:59:28.466: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 1 13:59:30.142: INFO: POD NODE PHASE GRACE CONDITIONS Feb 1 13:59:30.142: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:58:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:58:43 +0000 UTC }] Feb 1 13:59:30.143: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:17 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC }] Feb 1 13:59:30.143: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC }] Feb 1 13:59:30.143: INFO: Feb 1 13:59:30.143: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 1 13:59:31.155: INFO: POD NODE PHASE GRACE CONDITIONS Feb 1 13:59:31.155: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:58:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:58:43 +0000 UTC }] Feb 1 13:59:31.155: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC }] Feb 1 13:59:31.155: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC }] Feb 1 13:59:31.155: INFO: Feb 1 13:59:31.155: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 1 13:59:32.606: INFO: POD NODE PHASE GRACE CONDITIONS Feb 1 13:59:32.606: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:58:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:58:43 +0000 UTC }] Feb 1 13:59:32.607: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC }] Feb 1 13:59:32.607: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC }] Feb 1 13:59:32.607: INFO: Feb 1 13:59:32.607: INFO: StatefulSet ss has not reached 
scale 0, at 3 Feb 1 13:59:33.624: INFO: POD NODE PHASE GRACE CONDITIONS Feb 1 13:59:33.625: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:58:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:58:43 +0000 UTC }] Feb 1 13:59:33.625: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC }] Feb 1 13:59:33.625: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC }] Feb 1 13:59:33.625: INFO: Feb 1 13:59:33.625: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 1 13:59:34.649: INFO: POD NODE PHASE GRACE CONDITIONS Feb 1 13:59:34.649: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:58:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:58:43 +0000 UTC }] Feb 1 13:59:34.650: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC }] Feb 1 13:59:34.650: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC }] Feb 1 13:59:34.650: INFO: Feb 1 13:59:34.650: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 1 13:59:35.662: INFO: POD NODE PHASE GRACE CONDITIONS Feb 1 13:59:35.662: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:58:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:58:43 +0000 UTC }] Feb 1 13:59:35.663: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2020-02-01 13:59:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC }] Feb 1 13:59:35.663: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC }] Feb 1 13:59:35.663: INFO: Feb 1 13:59:35.663: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 1 13:59:36.682: INFO: POD NODE PHASE GRACE CONDITIONS Feb 1 13:59:36.682: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:58:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:58:43 +0000 UTC }] Feb 1 13:59:36.682: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 
+0000 UTC 2020-02-01 13:59:04 +0000 UTC }] Feb 1 13:59:36.682: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC }] Feb 1 13:59:36.682: INFO: Feb 1 13:59:36.682: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 1 13:59:37.698: INFO: POD NODE PHASE GRACE CONDITIONS Feb 1 13:59:37.698: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 13:59:04 +0000 UTC }] Feb 1 13:59:37.698: INFO: Feb 1 13:59:37.698: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-3412 Feb 1 13:59:38.724: INFO: Scaling statefulset ss to 0 Feb 1 13:59:38.740: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Feb 1 13:59:38.743: INFO: Deleting all statefulset in ns statefulset-3412 Feb 1 13:59:38.746: INFO: Scaling statefulset ss to 0 Feb 1 13:59:38.757: INFO: Waiting for statefulset status.replicas updated to 0 Feb 1 13:59:38.760: INFO: Deleting statefulset ss [AfterEach] [sig-apps] 
StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 13:59:38.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3412" for this suite. Feb 1 13:59:44.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 13:59:44.953: INFO: namespace statefulset-3412 deletion completed in 6.153922734s • [SLOW TEST:61.283 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 13:59:44.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-9020265f-5daa-4fc5-9745-50fe8d49ff3b STEP: Creating configMap with name cm-test-opt-upd-a86a75eb-4f4a-4550-8d4a-f3d14102bf82 STEP: Creating the pod STEP: Deleting configmap 
cm-test-opt-del-9020265f-5daa-4fc5-9745-50fe8d49ff3b STEP: Updating configmap cm-test-opt-upd-a86a75eb-4f4a-4550-8d4a-f3d14102bf82 STEP: Creating configMap with name cm-test-opt-create-18dbcf3c-44ff-46ac-b162-d3a2461a6a3f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 14:01:17.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1624" for this suite. Feb 1 14:01:41.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 14:01:41.992: INFO: namespace projected-1624 deletion completed in 24.239357413s • [SLOW TEST:117.038 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 14:01:41.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 1 14:01:42.179: INFO: Waiting up to 5m0s for pod "pod-af96f690-2c75-4cc3-baaf-160de2530d84" in namespace "emptydir-5982" to be "success or failure" Feb 1 14:01:42.210: INFO: Pod "pod-af96f690-2c75-4cc3-baaf-160de2530d84": Phase="Pending", Reason="", readiness=false. Elapsed: 30.401265ms Feb 1 14:01:44.219: INFO: Pod "pod-af96f690-2c75-4cc3-baaf-160de2530d84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039266234s Feb 1 14:01:46.277: INFO: Pod "pod-af96f690-2c75-4cc3-baaf-160de2530d84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09790357s Feb 1 14:01:48.285: INFO: Pod "pod-af96f690-2c75-4cc3-baaf-160de2530d84": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106042383s Feb 1 14:01:50.334: INFO: Pod "pod-af96f690-2c75-4cc3-baaf-160de2530d84": Phase="Pending", Reason="", readiness=false. Elapsed: 8.154834216s Feb 1 14:01:52.342: INFO: Pod "pod-af96f690-2c75-4cc3-baaf-160de2530d84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.162474208s STEP: Saw pod success Feb 1 14:01:52.342: INFO: Pod "pod-af96f690-2c75-4cc3-baaf-160de2530d84" satisfied condition "success or failure" Feb 1 14:01:52.347: INFO: Trying to get logs from node iruya-node pod pod-af96f690-2c75-4cc3-baaf-160de2530d84 container test-container: STEP: delete the pod Feb 1 14:01:52.676: INFO: Waiting for pod pod-af96f690-2c75-4cc3-baaf-160de2530d84 to disappear Feb 1 14:01:52.710: INFO: Pod pod-af96f690-2c75-4cc3-baaf-160de2530d84 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 14:01:52.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5982" for this suite. 
Feb 1 14:01:58.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 14:01:58.982: INFO: namespace emptydir-5982 deletion completed in 6.260291765s • [SLOW TEST:16.987 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 14:01:58.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 14:02:07.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5136" for this suite. 
Feb 1 14:02:59.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 14:02:59.396: INFO: namespace kubelet-test-5136 deletion completed in 52.181269578s • [SLOW TEST:60.414 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 14:02:59.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4480.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4480.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 1 14:03:13.783: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-4480/dns-test-631af07e-c974-4cb4-b59b-63de113bada7: the server could not find the requested resource (get pods dns-test-631af07e-c974-4cb4-b59b-63de113bada7) Feb 1 14:03:13.793: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-4480/dns-test-631af07e-c974-4cb4-b59b-63de113bada7: the server could not find the requested resource (get pods dns-test-631af07e-c974-4cb4-b59b-63de113bada7) Feb 1 14:03:13.804: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4480/dns-test-631af07e-c974-4cb4-b59b-63de113bada7: the server could not find the requested resource (get pods 
dns-test-631af07e-c974-4cb4-b59b-63de113bada7) Feb 1 14:03:13.818: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4480/dns-test-631af07e-c974-4cb4-b59b-63de113bada7: the server could not find the requested resource (get pods dns-test-631af07e-c974-4cb4-b59b-63de113bada7) Feb 1 14:03:13.827: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-4480/dns-test-631af07e-c974-4cb4-b59b-63de113bada7: the server could not find the requested resource (get pods dns-test-631af07e-c974-4cb4-b59b-63de113bada7) Feb 1 14:03:13.832: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-4480/dns-test-631af07e-c974-4cb4-b59b-63de113bada7: the server could not find the requested resource (get pods dns-test-631af07e-c974-4cb4-b59b-63de113bada7) Feb 1 14:03:13.838: INFO: Unable to read jessie_udp@PodARecord from pod dns-4480/dns-test-631af07e-c974-4cb4-b59b-63de113bada7: the server could not find the requested resource (get pods dns-test-631af07e-c974-4cb4-b59b-63de113bada7) Feb 1 14:03:13.843: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4480/dns-test-631af07e-c974-4cb4-b59b-63de113bada7: the server could not find the requested resource (get pods dns-test-631af07e-c974-4cb4-b59b-63de113bada7) Feb 1 14:03:13.843: INFO: Lookups using dns-4480/dns-test-631af07e-c974-4cb4-b59b-63de113bada7 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord] Feb 1 14:03:18.928: INFO: DNS probes using dns-4480/dns-test-631af07e-c974-4cb4-b59b-63de113bada7 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 14:03:19.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "dns-4480" for this suite. Feb 1 14:03:25.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 14:03:25.358: INFO: namespace dns-4480 deletion completed in 6.225931831s • [SLOW TEST:25.961 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 14:03:25.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-1dde00e3-51ef-4b2a-bfd0-6c08ad4d65fc [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 14:03:25.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4762" for this suite. 
Feb 1 14:03:31.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 14:03:31.669: INFO: namespace configmap-4762 deletion completed in 6.171149192s • [SLOW TEST:6.310 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 14:03:31.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 14:03:31.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2701" for this suite. 
Feb 1 14:03:55.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 14:03:56.101: INFO: namespace pods-2701 deletion completed in 24.161875981s • [SLOW TEST:24.432 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 14:03:56.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Feb 1 14:03:56.183: INFO: Waiting up to 5m0s for pod "var-expansion-3600ea28-229f-47a9-a598-c3bf8efd3806" in namespace "var-expansion-2978" to be "success or failure" Feb 1 14:03:56.250: INFO: Pod "var-expansion-3600ea28-229f-47a9-a598-c3bf8efd3806": Phase="Pending", Reason="", readiness=false. Elapsed: 67.168625ms Feb 1 14:03:58.260: INFO: Pod "var-expansion-3600ea28-229f-47a9-a598-c3bf8efd3806": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.077610522s Feb 1 14:04:00.277: INFO: Pod "var-expansion-3600ea28-229f-47a9-a598-c3bf8efd3806": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094243574s Feb 1 14:04:02.285: INFO: Pod "var-expansion-3600ea28-229f-47a9-a598-c3bf8efd3806": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102418847s Feb 1 14:04:04.294: INFO: Pod "var-expansion-3600ea28-229f-47a9-a598-c3bf8efd3806": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111020952s Feb 1 14:04:06.309: INFO: Pod "var-expansion-3600ea28-229f-47a9-a598-c3bf8efd3806": Phase="Pending", Reason="", readiness=false. Elapsed: 10.125718527s Feb 1 14:04:08.321: INFO: Pod "var-expansion-3600ea28-229f-47a9-a598-c3bf8efd3806": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.137811423s STEP: Saw pod success Feb 1 14:04:08.321: INFO: Pod "var-expansion-3600ea28-229f-47a9-a598-c3bf8efd3806" satisfied condition "success or failure" Feb 1 14:04:08.327: INFO: Trying to get logs from node iruya-node pod var-expansion-3600ea28-229f-47a9-a598-c3bf8efd3806 container dapi-container: STEP: delete the pod Feb 1 14:04:08.568: INFO: Waiting for pod var-expansion-3600ea28-229f-47a9-a598-c3bf8efd3806 to disappear Feb 1 14:04:08.574: INFO: Pod var-expansion-3600ea28-229f-47a9-a598-c3bf8efd3806 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 14:04:08.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2978" for this suite. 
Feb 1 14:04:14.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 14:04:14.726: INFO: namespace var-expansion-2978 deletion completed in 6.142063401s • [SLOW TEST:18.625 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 14:04:14.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-b4b2c24d-0bce-4865-8ecf-f368a010ca25 STEP: Creating secret with name secret-projected-all-test-volume-ea06ce2e-417e-40de-ba9f-c8ac7c20c43a STEP: Creating a pod to test Check all projections for projected volume plugin Feb 1 14:04:14.880: INFO: Waiting up to 5m0s for pod "projected-volume-0d0f60ee-1211-4401-ad21-81e11a5e54c8" in namespace "projected-5519" to be "success or failure" Feb 1 14:04:14.900: INFO: Pod "projected-volume-0d0f60ee-1211-4401-ad21-81e11a5e54c8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.514502ms Feb 1 14:04:16.920: INFO: Pod "projected-volume-0d0f60ee-1211-4401-ad21-81e11a5e54c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040611153s Feb 1 14:04:18.933: INFO: Pod "projected-volume-0d0f60ee-1211-4401-ad21-81e11a5e54c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053524153s Feb 1 14:04:20.949: INFO: Pod "projected-volume-0d0f60ee-1211-4401-ad21-81e11a5e54c8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069450698s Feb 1 14:04:22.960: INFO: Pod "projected-volume-0d0f60ee-1211-4401-ad21-81e11a5e54c8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080277039s Feb 1 14:04:25.121: INFO: Pod "projected-volume-0d0f60ee-1211-4401-ad21-81e11a5e54c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.24105799s STEP: Saw pod success Feb 1 14:04:25.121: INFO: Pod "projected-volume-0d0f60ee-1211-4401-ad21-81e11a5e54c8" satisfied condition "success or failure" Feb 1 14:04:25.126: INFO: Trying to get logs from node iruya-node pod projected-volume-0d0f60ee-1211-4401-ad21-81e11a5e54c8 container projected-all-volume-test: STEP: delete the pod Feb 1 14:04:25.181: INFO: Waiting for pod projected-volume-0d0f60ee-1211-4401-ad21-81e11a5e54c8 to disappear Feb 1 14:04:25.358: INFO: Pod projected-volume-0d0f60ee-1211-4401-ad21-81e11a5e54c8 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 14:04:25.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5519" for this suite. 
Feb 1 14:04:31.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 14:04:31.579: INFO: namespace projected-5519 deletion completed in 6.205702066s • [SLOW TEST:16.853 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 14:04:31.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 1 14:04:31.680: INFO: Waiting up to 5m0s for pod "downwardapi-volume-048ae062-f4e8-44ae-ac98-c40dcd3d934d" in namespace "downward-api-2113" to be "success or failure" Feb 1 14:04:31.767: INFO: Pod "downwardapi-volume-048ae062-f4e8-44ae-ac98-c40dcd3d934d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 87.528527ms Feb 1 14:04:33.782: INFO: Pod "downwardapi-volume-048ae062-f4e8-44ae-ac98-c40dcd3d934d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102501883s Feb 1 14:04:35.801: INFO: Pod "downwardapi-volume-048ae062-f4e8-44ae-ac98-c40dcd3d934d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121427221s Feb 1 14:04:37.812: INFO: Pod "downwardapi-volume-048ae062-f4e8-44ae-ac98-c40dcd3d934d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132079943s Feb 1 14:04:39.826: INFO: Pod "downwardapi-volume-048ae062-f4e8-44ae-ac98-c40dcd3d934d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.145907853s Feb 1 14:04:41.843: INFO: Pod "downwardapi-volume-048ae062-f4e8-44ae-ac98-c40dcd3d934d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.163280938s STEP: Saw pod success Feb 1 14:04:41.843: INFO: Pod "downwardapi-volume-048ae062-f4e8-44ae-ac98-c40dcd3d934d" satisfied condition "success or failure" Feb 1 14:04:41.849: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-048ae062-f4e8-44ae-ac98-c40dcd3d934d container client-container: STEP: delete the pod Feb 1 14:04:41.939: INFO: Waiting for pod downwardapi-volume-048ae062-f4e8-44ae-ac98-c40dcd3d934d to disappear Feb 1 14:04:41.946: INFO: Pod downwardapi-volume-048ae062-f4e8-44ae-ac98-c40dcd3d934d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 14:04:41.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2113" for this suite. 
Feb 1 14:04:47.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:04:48.084: INFO: namespace downward-api-2113 deletion completed in 6.12940762s
• [SLOW TEST:16.503 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:04:48.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-4e8183d2-a850-4a35-80d8-3f62d86a0d6d
STEP: Creating a pod to test consume configMaps
Feb 1 14:04:48.161: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dfacc26b-03ae-4f56-8a0a-453d14f6d5ff" in namespace "projected-2627" to be "success or failure"
Feb 1 14:04:48.207: INFO: Pod "pod-projected-configmaps-dfacc26b-03ae-4f56-8a0a-453d14f6d5ff": Phase="Pending", Reason="", readiness=false. Elapsed: 45.541882ms
Feb 1 14:04:50.220: INFO: Pod "pod-projected-configmaps-dfacc26b-03ae-4f56-8a0a-453d14f6d5ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059064813s
Feb 1 14:04:52.228: INFO: Pod "pod-projected-configmaps-dfacc26b-03ae-4f56-8a0a-453d14f6d5ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066492436s
Feb 1 14:04:54.238: INFO: Pod "pod-projected-configmaps-dfacc26b-03ae-4f56-8a0a-453d14f6d5ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076656613s
Feb 1 14:04:56.289: INFO: Pod "pod-projected-configmaps-dfacc26b-03ae-4f56-8a0a-453d14f6d5ff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.127710203s
Feb 1 14:04:58.304: INFO: Pod "pod-projected-configmaps-dfacc26b-03ae-4f56-8a0a-453d14f6d5ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.143051806s
STEP: Saw pod success
Feb 1 14:04:58.304: INFO: Pod "pod-projected-configmaps-dfacc26b-03ae-4f56-8a0a-453d14f6d5ff" satisfied condition "success or failure"
Feb 1 14:04:58.315: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-dfacc26b-03ae-4f56-8a0a-453d14f6d5ff container projected-configmap-volume-test:
STEP: delete the pod
Feb 1 14:04:58.428: INFO: Waiting for pod pod-projected-configmaps-dfacc26b-03ae-4f56-8a0a-453d14f6d5ff to disappear
Feb 1 14:04:58.453: INFO: Pod pod-projected-configmaps-dfacc26b-03ae-4f56-8a0a-453d14f6d5ff no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:04:58.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2627" for this suite.
Feb 1 14:05:04.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:05:04.717: INFO: namespace projected-2627 deletion completed in 6.254341989s
• [SLOW TEST:16.633 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:05:04.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Feb 1 14:05:14.854: INFO: Pod pod-hostip-93283d42-6ec9-4010-a766-a8468fff6278 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:05:14.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8787" for this suite.
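Editor's note: the specs above repeatedly log "Waiting up to 5m0s for pod ... to be 'success or failure'" and then poll roughly every two seconds until the pod's phase settles. A minimal sketch of such a wait loop is below; `wait_for_phase`, its injectable `get_phase` callback, and the injectable `sleep` are my own illustration, not the framework's actual API.

```python
import time

def wait_for_phase(get_phase, want=("Succeeded", "Failed"),
                   timeout=300.0, interval=2.0, sleep=time.sleep):
    """Poll get_phase() until it returns a phase in `want` or timeout expires.

    Models the fixed-interval polling visible in the log; get_phase and
    sleep are parameters so the loop can be driven in tests without a
    cluster or real delays.
    """
    waited = 0.0
    while True:
        phase = get_phase()
        if phase in want:
            return phase
        if waited >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)
        waited += interval
```

With a fake `get_phase` yielding Pending, Pending, Running, Succeeded, the loop returns "Succeeded" on the fourth poll, much like the ten-second Pending-to-Succeeded runs in this log.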
Feb 1 14:05:54.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:05:55.077: INFO: namespace pods-8787 deletion completed in 40.209300062s
• [SLOW TEST:50.360 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:05:55.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0201 14:06:05.884968 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 1 14:06:05.885: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:06:05.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-266" for this suite.
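Editor's note: the garbage-collector spec above deletes a ReplicationController without orphaning and then waits for its pods to be collected. The behavior being verified is cascading deletion via owner references. A deliberately simplified toy model (my own, not the real controller) captures the idea: deleting an owner dooms everything whose ownership chain reaches it.

```python
def cascade_delete(objects, owners, name):
    """Toy model of non-orphaning (cascading) deletion.

    objects: list of object names; owners: child -> owner mapping.
    Deleting `name` also deletes every object whose owner chain leads
    to it, mirroring what the e2e spec asserts for the rc's pods.
    """
    doomed = {name}
    changed = True
    while changed:  # propagate until the doomed set stops growing
        changed = False
        for child, owner in owners.items():
            if owner in doomed and child not in doomed:
                doomed.add(child)
                changed = True
    return [obj for obj in objects if obj not in doomed]
```

In the real system the kube-controller-manager's garbage collector walks `metadata.ownerReferences`; this sketch only shows the reachability rule the test depends on.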
Feb 1 14:06:11.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:06:12.106: INFO: namespace gc-266 deletion completed in 6.213427437s
• [SLOW TEST:17.028 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:06:12.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 1 14:06:12.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:06:20.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6212" for this suite.
Feb 1 14:07:04.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:07:04.533: INFO: namespace pods-6212 deletion completed in 44.201156866s
• [SLOW TEST:52.427 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:07:04.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 1 14:07:04.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 1 14:07:04.748: INFO: stderr: ""
Feb 1 14:07:04.748: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:07:04.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-857" for this suite.
Feb 1 14:07:10.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:07:10.952: INFO: namespace kubectl-857 deletion completed in 6.196075378s
• [SLOW TEST:6.415 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl version
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:07:10.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 1 14:07:11.119: INFO: Waiting up to 5m0s for pod "downwardapi-volume-084e4805-0868-447f-9cf6-93646354b866" in namespace "downward-api-1880" to be "success or failure"
Feb 1 14:07:11.129: INFO: Pod "downwardapi-volume-084e4805-0868-447f-9cf6-93646354b866": Phase="Pending", Reason="", readiness=false. Elapsed: 9.749845ms
Feb 1 14:07:13.138: INFO: Pod "downwardapi-volume-084e4805-0868-447f-9cf6-93646354b866": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019451178s
Feb 1 14:07:15.144: INFO: Pod "downwardapi-volume-084e4805-0868-447f-9cf6-93646354b866": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025028217s
Feb 1 14:07:17.152: INFO: Pod "downwardapi-volume-084e4805-0868-447f-9cf6-93646354b866": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033406342s
Feb 1 14:07:19.163: INFO: Pod "downwardapi-volume-084e4805-0868-447f-9cf6-93646354b866": Phase="Running", Reason="", readiness=true. Elapsed: 8.04415433s
Feb 1 14:07:21.171: INFO: Pod "downwardapi-volume-084e4805-0868-447f-9cf6-93646354b866": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.051776418s
STEP: Saw pod success
Feb 1 14:07:21.171: INFO: Pod "downwardapi-volume-084e4805-0868-447f-9cf6-93646354b866" satisfied condition "success or failure"
Feb 1 14:07:21.174: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-084e4805-0868-447f-9cf6-93646354b866 container client-container:
STEP: delete the pod
Feb 1 14:07:21.523: INFO: Waiting for pod downwardapi-volume-084e4805-0868-447f-9cf6-93646354b866 to disappear
Feb 1 14:07:21.562: INFO: Pod downwardapi-volume-084e4805-0868-447f-9cf6-93646354b866 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:07:21.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1880" for this suite.
Feb 1 14:07:27.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:07:27.868: INFO: namespace downward-api-1880 deletion completed in 6.292863327s
• [SLOW TEST:16.916 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:07:27.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Feb 1 14:07:28.000: INFO: Waiting up to 5m0s for pod "client-containers-7f6049d2-4ef7-4f96-8ade-d72d14b2a6c8" in namespace "containers-1721" to be "success or failure"
Feb 1 14:07:28.012: INFO: Pod "client-containers-7f6049d2-4ef7-4f96-8ade-d72d14b2a6c8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.062653ms
Feb 1 14:07:30.021: INFO: Pod "client-containers-7f6049d2-4ef7-4f96-8ade-d72d14b2a6c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020911801s
Feb 1 14:07:32.036: INFO: Pod "client-containers-7f6049d2-4ef7-4f96-8ade-d72d14b2a6c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03621461s
Feb 1 14:07:34.046: INFO: Pod "client-containers-7f6049d2-4ef7-4f96-8ade-d72d14b2a6c8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045935026s
Feb 1 14:07:36.061: INFO: Pod "client-containers-7f6049d2-4ef7-4f96-8ade-d72d14b2a6c8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060841284s
Feb 1 14:07:38.076: INFO: Pod "client-containers-7f6049d2-4ef7-4f96-8ade-d72d14b2a6c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075457044s
STEP: Saw pod success
Feb 1 14:07:38.076: INFO: Pod "client-containers-7f6049d2-4ef7-4f96-8ade-d72d14b2a6c8" satisfied condition "success or failure"
Feb 1 14:07:38.080: INFO: Trying to get logs from node iruya-node pod client-containers-7f6049d2-4ef7-4f96-8ade-d72d14b2a6c8 container test-container:
STEP: delete the pod
Feb 1 14:07:38.155: INFO: Waiting for pod client-containers-7f6049d2-4ef7-4f96-8ade-d72d14b2a6c8 to disappear
Feb 1 14:07:38.208: INFO: Pod client-containers-7f6049d2-4ef7-4f96-8ade-d72d14b2a6c8 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:07:38.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1721" for this suite.
Feb 1 14:07:44.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:07:44.383: INFO: namespace containers-1721 deletion completed in 6.168200248s
• [SLOW TEST:16.515 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:07:44.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 1 14:07:53.108: INFO: Successfully updated pod "labelsupdate9d1f6cf7-5229-42d1-ba5e-41d8f0a49335"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:07:57.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7224" for this suite.
Feb 1 14:08:19.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:08:19.490: INFO: namespace projected-7224 deletion completed in 22.234389315s
• [SLOW TEST:35.106 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:08:19.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-9ad41a58-5b37-4ddb-b58e-9d27078524be
STEP: Creating secret with name s-test-opt-upd-2a76ff22-e109-4cf1-bd3c-8d5c49286500
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-9ad41a58-5b37-4ddb-b58e-9d27078524be
STEP: Updating secret s-test-opt-upd-2a76ff22-e109-4cf1-bd3c-8d5c49286500
STEP: Creating secret with name s-test-opt-create-c929f7cd-2c27-4e27-8eea-d83d0100149b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:08:34.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6411" for this suite.
Feb 1 14:09:14.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:09:14.185: INFO: namespace projected-6411 deletion completed in 40.131525021s
• [SLOW TEST:54.695 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:09:14.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 1 14:09:14.340: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33b72d77-48bc-4051-a0fb-1c588e408157" in namespace "downward-api-2331" to be "success or failure"
Feb 1 14:09:14.369: INFO: Pod "downwardapi-volume-33b72d77-48bc-4051-a0fb-1c588e408157": Phase="Pending", Reason="", readiness=false. Elapsed: 29.024654ms
Feb 1 14:09:16.378: INFO: Pod "downwardapi-volume-33b72d77-48bc-4051-a0fb-1c588e408157": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037450334s
Feb 1 14:09:18.393: INFO: Pod "downwardapi-volume-33b72d77-48bc-4051-a0fb-1c588e408157": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053121809s
Feb 1 14:09:20.401: INFO: Pod "downwardapi-volume-33b72d77-48bc-4051-a0fb-1c588e408157": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061105958s
Feb 1 14:09:22.405: INFO: Pod "downwardapi-volume-33b72d77-48bc-4051-a0fb-1c588e408157": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064993056s
Feb 1 14:09:24.435: INFO: Pod "downwardapi-volume-33b72d77-48bc-4051-a0fb-1c588e408157": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.095203464s
STEP: Saw pod success
Feb 1 14:09:24.436: INFO: Pod "downwardapi-volume-33b72d77-48bc-4051-a0fb-1c588e408157" satisfied condition "success or failure"
Feb 1 14:09:24.441: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-33b72d77-48bc-4051-a0fb-1c588e408157 container client-container:
STEP: delete the pod
Feb 1 14:09:24.580: INFO: Waiting for pod downwardapi-volume-33b72d77-48bc-4051-a0fb-1c588e408157 to disappear
Feb 1 14:09:24.602: INFO: Pod downwardapi-volume-33b72d77-48bc-4051-a0fb-1c588e408157 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:09:24.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2331" for this suite.
Feb 1 14:09:30.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:09:30.772: INFO: namespace downward-api-2331 deletion completed in 6.156409034s
• [SLOW TEST:16.586 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:09:30.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Geting the pod
STEP: Reading file content from the nginx-container
Feb 1 14:09:42.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-0662b147-62b1-4549-ac20-536f44540b8f -c busybox-main-container --namespace=emptydir-2304 -- cat /usr/share/volumeshare/shareddata.txt'
Feb 1 14:09:45.270: INFO: stderr: "I0201 14:09:44.989554 1194 log.go:172] (0xc00010e0b0) (0xc000620960) Create stream\nI0201 14:09:44.989666 1194 log.go:172] (0xc00010e0b0) (0xc000620960) Stream added, broadcasting: 1\nI0201 14:09:45.004129 1194 log.go:172] (0xc00010e0b0) Reply frame received for 1\nI0201 14:09:45.004251 1194 log.go:172] (0xc00010e0b0) (0xc000620a00) Create stream\nI0201 14:09:45.004266 1194 log.go:172] (0xc00010e0b0) (0xc000620a00) Stream added, broadcasting: 3\nI0201 14:09:45.006180 1194 log.go:172] (0xc00010e0b0) Reply frame received for 3\nI0201 14:09:45.006227 1194 log.go:172] (0xc00010e0b0) (0xc0007200a0) Create stream\nI0201 14:09:45.006238 1194 log.go:172] (0xc00010e0b0) (0xc0007200a0) Stream added, broadcasting: 5\nI0201 14:09:45.007909 1194 log.go:172] (0xc00010e0b0) Reply frame received for 5\nI0201 14:09:45.125161 1194 log.go:172] (0xc00010e0b0) Data frame received for 3\nI0201 14:09:45.125225 1194 log.go:172] (0xc000620a00) (3) Data frame handling\nI0201 14:09:45.125255 1194 log.go:172] (0xc000620a00) (3) Data frame sent\nI0201 14:09:45.253951 1194 log.go:172] (0xc00010e0b0) Data frame received for 1\nI0201 14:09:45.254049 1194 log.go:172] (0xc00010e0b0) (0xc000620a00) Stream removed, broadcasting: 3\nI0201 14:09:45.254145 1194 log.go:172] (0xc000620960) (1) Data frame handling\nI0201 14:09:45.254195 1194 log.go:172] (0xc000620960) (1) Data frame sent\nI0201 14:09:45.254234 1194 log.go:172] (0xc00010e0b0) (0xc000620960) Stream removed, broadcasting: 1\nI0201 14:09:45.254772 1194 log.go:172] (0xc00010e0b0) (0xc0007200a0) Stream removed, broadcasting: 5\nI0201 14:09:45.255230 1194 log.go:172] (0xc00010e0b0) (0xc000620960) Stream removed, broadcasting: 1\nI0201 14:09:45.255331 1194 log.go:172] (0xc00010e0b0) (0xc000620a00) Stream removed, broadcasting: 3\nI0201 14:09:45.255350 1194 log.go:172] (0xc00010e0b0) (0xc0007200a0) Stream removed, broadcasting: 5\nI0201 14:09:45.255464 1194 log.go:172] (0xc00010e0b0) Go away received\n"
Feb 1 14:09:45.271: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:09:45.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2304" for this suite.
Feb 1 14:09:51.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:09:51.452: INFO: namespace emptydir-2304 deletion completed in 6.169499636s
• [SLOW TEST:20.680 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:09:51.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0201 14:10:32.165091 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 1 14:10:32.165: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:10:32.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4336" for this suite.
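Editor's note: this garbage-collector spec is the counterpart of the earlier non-orphaning one: the rc is deleted with orphaning delete options, and the suite then waits 30 seconds to confirm the pods survive. A toy model (my own sketch, not the real controller): the owner is removed and its dependents are kept, with their link to the deleted owner dropped.

```python
def orphan_delete(objects, owners, name):
    """Toy model of orphaning deletion.

    objects: list of object names; owners: child -> owner mapping.
    Only `name` is removed; its dependents remain, with the reference
    to the deleted owner cleared, as the spec above verifies.
    """
    remaining = [obj for obj in objects if obj != name]
    new_owners = {c: o for c, o in owners.items() if o != name}
    return remaining, new_owners
```

In the real API this corresponds to an orphaning delete policy on the owner; the sketch shows only the keep-the-children rule the test asserts.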
Feb 1 14:10:44.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:10:44.697: INFO: namespace gc-4336 deletion completed in 12.5257687s

• [SLOW TEST:53.244 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:10:44.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 1 14:11:00.347: INFO: Waiting up to 5m0s for pod "client-envvars-e7ab15c9-458e-47ba-8da9-2f75936a0007" in namespace "pods-655" to be "success or failure"
Feb 1 14:11:00.361: INFO: Pod "client-envvars-e7ab15c9-458e-47ba-8da9-2f75936a0007": Phase="Pending", Reason="", readiness=false. Elapsed: 13.205683ms
Feb 1 14:11:02.371: INFO: Pod "client-envvars-e7ab15c9-458e-47ba-8da9-2f75936a0007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023343434s
Feb 1 14:11:04.383: INFO: Pod "client-envvars-e7ab15c9-458e-47ba-8da9-2f75936a0007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035009911s
Feb 1 14:11:06.397: INFO: Pod "client-envvars-e7ab15c9-458e-47ba-8da9-2f75936a0007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049658932s
Feb 1 14:11:08.407: INFO: Pod "client-envvars-e7ab15c9-458e-47ba-8da9-2f75936a0007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059267724s
Feb 1 14:11:10.452: INFO: Pod "client-envvars-e7ab15c9-458e-47ba-8da9-2f75936a0007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.104379923s
STEP: Saw pod success
Feb 1 14:11:10.452: INFO: Pod "client-envvars-e7ab15c9-458e-47ba-8da9-2f75936a0007" satisfied condition "success or failure"
Feb 1 14:11:10.461: INFO: Trying to get logs from node iruya-node pod client-envvars-e7ab15c9-458e-47ba-8da9-2f75936a0007 container env3cont:
STEP: delete the pod
Feb 1 14:11:10.919: INFO: Waiting for pod client-envvars-e7ab15c9-458e-47ba-8da9-2f75936a0007 to disappear
Feb 1 14:11:10.928: INFO: Pod client-envvars-e7ab15c9-458e-47ba-8da9-2f75936a0007 no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:11:10.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-655" for this suite.
Feb 1 14:12:03.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:12:03.186: INFO: namespace pods-655 deletion completed in 52.250871471s

• [SLOW TEST:78.488 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:12:03.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-dbb97f54-2c85-4915-b5d6-7eecef90ef4f
STEP: Creating a pod to test consume secrets
Feb 1 14:12:03.353: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2b24042c-4b4a-4583-a7ee-e3c8b3c7f757" in namespace "projected-1841" to be "success or failure"
Feb 1 14:12:03.365: INFO: Pod "pod-projected-secrets-2b24042c-4b4a-4583-a7ee-e3c8b3c7f757": Phase="Pending", Reason="", readiness=false. Elapsed: 12.105924ms
Feb 1 14:12:05.379: INFO: Pod "pod-projected-secrets-2b24042c-4b4a-4583-a7ee-e3c8b3c7f757": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025949161s
Feb 1 14:12:07.387: INFO: Pod "pod-projected-secrets-2b24042c-4b4a-4583-a7ee-e3c8b3c7f757": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03352808s
Feb 1 14:12:09.397: INFO: Pod "pod-projected-secrets-2b24042c-4b4a-4583-a7ee-e3c8b3c7f757": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044197715s
Feb 1 14:12:11.413: INFO: Pod "pod-projected-secrets-2b24042c-4b4a-4583-a7ee-e3c8b3c7f757": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059614517s
Feb 1 14:12:13.426: INFO: Pod "pod-projected-secrets-2b24042c-4b4a-4583-a7ee-e3c8b3c7f757": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072684126s
STEP: Saw pod success
Feb 1 14:12:13.426: INFO: Pod "pod-projected-secrets-2b24042c-4b4a-4583-a7ee-e3c8b3c7f757" satisfied condition "success or failure"
Feb 1 14:12:13.432: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-2b24042c-4b4a-4583-a7ee-e3c8b3c7f757 container projected-secret-volume-test:
STEP: delete the pod
Feb 1 14:12:13.795: INFO: Waiting for pod pod-projected-secrets-2b24042c-4b4a-4583-a7ee-e3c8b3c7f757 to disappear
Feb 1 14:12:13.810: INFO: Pod pod-projected-secrets-2b24042c-4b4a-4583-a7ee-e3c8b3c7f757 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:12:13.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1841" for this suite.
Feb 1 14:12:19.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:12:19.971: INFO: namespace projected-1841 deletion completed in 6.148161793s

• [SLOW TEST:16.783 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:12:19.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 1 14:12:20.063: INFO: Waiting up to 5m0s for pod "downwardapi-volume-680ef0d8-3cb3-4f51-8f43-d2bd3864d56c" in namespace "downward-api-182" to be "success or failure"
Feb 1 14:12:20.071: INFO: Pod "downwardapi-volume-680ef0d8-3cb3-4f51-8f43-d2bd3864d56c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.636967ms
Feb 1 14:12:22.090: INFO: Pod "downwardapi-volume-680ef0d8-3cb3-4f51-8f43-d2bd3864d56c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026527148s
Feb 1 14:12:24.109: INFO: Pod "downwardapi-volume-680ef0d8-3cb3-4f51-8f43-d2bd3864d56c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045670644s
Feb 1 14:12:26.119: INFO: Pod "downwardapi-volume-680ef0d8-3cb3-4f51-8f43-d2bd3864d56c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055463359s
Feb 1 14:12:28.130: INFO: Pod "downwardapi-volume-680ef0d8-3cb3-4f51-8f43-d2bd3864d56c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067296529s
Feb 1 14:12:30.140: INFO: Pod "downwardapi-volume-680ef0d8-3cb3-4f51-8f43-d2bd3864d56c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.077377209s
STEP: Saw pod success
Feb 1 14:12:30.141: INFO: Pod "downwardapi-volume-680ef0d8-3cb3-4f51-8f43-d2bd3864d56c" satisfied condition "success or failure"
Feb 1 14:12:30.188: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-680ef0d8-3cb3-4f51-8f43-d2bd3864d56c container client-container:
STEP: delete the pod
Feb 1 14:12:30.244: INFO: Waiting for pod downwardapi-volume-680ef0d8-3cb3-4f51-8f43-d2bd3864d56c to disappear
Feb 1 14:12:30.248: INFO: Pod downwardapi-volume-680ef0d8-3cb3-4f51-8f43-d2bd3864d56c no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:12:30.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-182" for this suite.
Feb 1 14:12:36.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:12:37.028: INFO: namespace downward-api-182 deletion completed in 6.775037127s

• [SLOW TEST:17.057 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:12:37.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 1 14:12:37.117: INFO: Waiting up to 5m0s for pod "downwardapi-volume-91327044-2898-4ab7-82a9-16c134a162d8" in namespace "projected-2886" to be "success or failure"
Feb 1 14:12:37.224: INFO: Pod "downwardapi-volume-91327044-2898-4ab7-82a9-16c134a162d8": Phase="Pending", Reason="", readiness=false. Elapsed: 106.812937ms
Feb 1 14:12:39.231: INFO: Pod "downwardapi-volume-91327044-2898-4ab7-82a9-16c134a162d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113537862s
Feb 1 14:12:41.242: INFO: Pod "downwardapi-volume-91327044-2898-4ab7-82a9-16c134a162d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125158577s
Feb 1 14:12:43.251: INFO: Pod "downwardapi-volume-91327044-2898-4ab7-82a9-16c134a162d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134042072s
Feb 1 14:12:45.261: INFO: Pod "downwardapi-volume-91327044-2898-4ab7-82a9-16c134a162d8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.143952085s
Feb 1 14:12:47.278: INFO: Pod "downwardapi-volume-91327044-2898-4ab7-82a9-16c134a162d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.160291191s
STEP: Saw pod success
Feb 1 14:12:47.278: INFO: Pod "downwardapi-volume-91327044-2898-4ab7-82a9-16c134a162d8" satisfied condition "success or failure"
Feb 1 14:12:47.281: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-91327044-2898-4ab7-82a9-16c134a162d8 container client-container:
STEP: delete the pod
Feb 1 14:12:47.407: INFO: Waiting for pod downwardapi-volume-91327044-2898-4ab7-82a9-16c134a162d8 to disappear
Feb 1 14:12:47.507: INFO: Pod downwardapi-volume-91327044-2898-4ab7-82a9-16c134a162d8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:12:47.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2886" for this suite.
Feb 1 14:12:53.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:12:53.703: INFO: namespace projected-2886 deletion completed in 6.184865148s

• [SLOW TEST:16.675 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:12:53.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 1 14:13:03.076: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:13:03.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-216" for this suite.
Feb 1 14:13:09.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:13:09.414: INFO: namespace container-runtime-216 deletion completed in 6.180477732s

• [SLOW TEST:15.710 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:13:09.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:13:14.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6294" for this suite.
Feb 1 14:13:21.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:13:21.202: INFO: namespace watch-6294 deletion completed in 6.216859545s

• [SLOW TEST:11.787 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:13:21.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 1 14:13:21.351: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 1 14:13:21.366: INFO: Number of nodes with available pods: 0
Feb 1 14:13:21.366: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 1 14:13:21.461: INFO: Number of nodes with available pods: 0
Feb 1 14:13:21.461: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:22.472: INFO: Number of nodes with available pods: 0
Feb 1 14:13:22.473: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:23.478: INFO: Number of nodes with available pods: 0
Feb 1 14:13:23.478: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:24.476: INFO: Number of nodes with available pods: 0
Feb 1 14:13:24.476: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:25.470: INFO: Number of nodes with available pods: 0
Feb 1 14:13:25.470: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:26.475: INFO: Number of nodes with available pods: 0
Feb 1 14:13:26.475: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:27.557: INFO: Number of nodes with available pods: 0
Feb 1 14:13:27.557: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:28.485: INFO: Number of nodes with available pods: 0
Feb 1 14:13:28.485: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:29.474: INFO: Number of nodes with available pods: 1
Feb 1 14:13:29.474: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 1 14:13:29.519: INFO: Number of nodes with available pods: 1
Feb 1 14:13:29.519: INFO: Number of running nodes: 0, number of available pods: 1
Feb 1 14:13:30.533: INFO: Number of nodes with available pods: 0
Feb 1 14:13:30.533: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 1 14:13:30.609: INFO: Number of nodes with available pods: 0
Feb 1 14:13:30.610: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:31.619: INFO: Number of nodes with available pods: 0
Feb 1 14:13:31.620: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:32.620: INFO: Number of nodes with available pods: 0
Feb 1 14:13:32.620: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:33.621: INFO: Number of nodes with available pods: 0
Feb 1 14:13:33.621: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:34.627: INFO: Number of nodes with available pods: 0
Feb 1 14:13:34.627: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:35.620: INFO: Number of nodes with available pods: 0
Feb 1 14:13:35.620: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:36.629: INFO: Number of nodes with available pods: 0
Feb 1 14:13:36.629: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:37.624: INFO: Number of nodes with available pods: 0
Feb 1 14:13:37.624: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:38.617: INFO: Number of nodes with available pods: 0
Feb 1 14:13:38.617: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:39.620: INFO: Number of nodes with available pods: 0
Feb 1 14:13:39.621: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:40.619: INFO: Number of nodes with available pods: 0
Feb 1 14:13:40.619: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:41.624: INFO: Number of nodes with available pods: 0
Feb 1 14:13:41.624: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:42.634: INFO: Number of nodes with available pods: 0
Feb 1 14:13:42.634: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:43.800: INFO: Number of nodes with available pods: 0
Feb 1 14:13:43.800: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:44.620: INFO: Number of nodes with available pods: 0
Feb 1 14:13:44.620: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:45.632: INFO: Number of nodes with available pods: 0
Feb 1 14:13:45.632: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:46.622: INFO: Number of nodes with available pods: 0
Feb 1 14:13:46.622: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:47.624: INFO: Number of nodes with available pods: 0
Feb 1 14:13:47.624: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:48.630: INFO: Number of nodes with available pods: 0
Feb 1 14:13:48.631: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:49.650: INFO: Number of nodes with available pods: 0
Feb 1 14:13:49.650: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:50.622: INFO: Number of nodes with available pods: 0
Feb 1 14:13:50.622: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:51.622: INFO: Number of nodes with available pods: 0
Feb 1 14:13:51.623: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:52.620: INFO: Number of nodes with available pods: 0
Feb 1 14:13:52.620: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:53.652: INFO: Number of nodes with available pods: 0
Feb 1 14:13:53.652: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:54.624: INFO: Number of nodes with available pods: 0
Feb 1 14:13:54.624: INFO: Node iruya-node is running more than one daemon pod
Feb 1 14:13:55.624: INFO: Number of nodes with available pods: 1
Feb 1 14:13:55.624: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3489, will wait for the garbage collector to delete the pods
Feb 1 14:13:55.721: INFO: Deleting DaemonSet.extensions daemon-set took: 18.497347ms
Feb 1 14:13:56.021: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.673134ms
Feb 1 14:14:06.629: INFO: Number of nodes with available pods: 0
Feb 1 14:14:06.629: INFO: Number of running nodes: 0, number of available pods: 0
Feb 1 14:14:06.636: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3489/daemonsets","resourceVersion":"22699370"},"items":null}
Feb 1 14:14:06.640: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3489/pods","resourceVersion":"22699370"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:14:06.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3489" for this suite.
Feb 1 14:14:12.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:14:12.960: INFO: namespace daemonsets-3489 deletion completed in 6.219506032s

• [SLOW TEST:51.758 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:14:12.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0201 14:14:15.957452 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 1 14:14:15.957: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:14:15.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7404" for this suite.
Feb 1 14:14:24.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:14:24.176: INFO: namespace gc-7404 deletion completed in 8.191051173s

• [SLOW TEST:11.215 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:14:24.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 1 14:14:24.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-848'
Feb 1 14:14:24.404: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 1 14:14:24.404: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: rolling-update to same image controller
Feb 1 14:14:24.440: INFO: scanned /root for discovery docs:
Feb 1 14:14:24.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-848'
Feb 1 14:14:45.829: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 1 14:14:45.830: INFO: stdout: "Created e2e-test-nginx-rc-895c7691cefd086f72bac38856333824\nScaling up e2e-test-nginx-rc-895c7691cefd086f72bac38856333824 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-895c7691cefd086f72bac38856333824 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-895c7691cefd086f72bac38856333824 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Feb 1 14:14:45.846: INFO: stdout: "Created e2e-test-nginx-rc-895c7691cefd086f72bac38856333824\nScaling up e2e-test-nginx-rc-895c7691cefd086f72bac38856333824 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-895c7691cefd086f72bac38856333824 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-895c7691cefd086f72bac38856333824 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb 1 14:14:45.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-848' Feb 1 14:14:46.033: INFO: stderr: "" Feb 1 14:14:46.033: INFO: stdout: "e2e-test-nginx-rc-895c7691cefd086f72bac38856333824-knvbs e2e-test-nginx-rc-l4p7v " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Feb 1 14:14:51.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-848' Feb 1 14:14:51.623: INFO: stderr: "" Feb 1 14:14:51.623: INFO: stdout: "e2e-test-nginx-rc-895c7691cefd086f72bac38856333824-knvbs e2e-test-nginx-rc-l4p7v " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Feb 1 14:14:56.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-848' Feb 1 14:14:56.792: INFO: stderr: "" Feb 1 14:14:56.793: INFO: stdout: "e2e-test-nginx-rc-895c7691cefd086f72bac38856333824-knvbs e2e-test-nginx-rc-l4p7v " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Feb 1 14:15:01.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-848' Feb 1 14:15:01.975: INFO: stderr: "" Feb 1 14:15:01.975: INFO: stdout: "e2e-test-nginx-rc-895c7691cefd086f72bac38856333824-knvbs e2e-test-nginx-rc-l4p7v " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Feb 1 14:15:06.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-848' Feb 1 14:15:07.152: INFO: stderr: "" Feb 1 14:15:07.153: 
INFO: stdout: "e2e-test-nginx-rc-895c7691cefd086f72bac38856333824-knvbs e2e-test-nginx-rc-l4p7v " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Feb 1 14:15:12.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-848' Feb 1 14:15:12.350: INFO: stderr: "" Feb 1 14:15:12.351: INFO: stdout: "e2e-test-nginx-rc-895c7691cefd086f72bac38856333824-knvbs e2e-test-nginx-rc-l4p7v " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Feb 1 14:15:17.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-848' Feb 1 14:15:17.551: INFO: stderr: "" Feb 1 14:15:17.551: INFO: stdout: "e2e-test-nginx-rc-895c7691cefd086f72bac38856333824-knvbs e2e-test-nginx-rc-l4p7v " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Feb 1 14:15:22.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-848' Feb 1 14:15:22.705: INFO: stderr: "" Feb 1 14:15:22.705: INFO: stdout: "e2e-test-nginx-rc-895c7691cefd086f72bac38856333824-knvbs e2e-test-nginx-rc-l4p7v " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Feb 1 14:15:27.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-848' Feb 1 14:15:27.901: INFO: stderr: "" Feb 1 14:15:27.901: INFO: stdout: "e2e-test-nginx-rc-895c7691cefd086f72bac38856333824-knvbs e2e-test-nginx-rc-l4p7v " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Feb 1 14:15:32.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template 
--template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-848' Feb 1 14:15:33.055: INFO: stderr: "" Feb 1 14:15:33.055: INFO: stdout: "e2e-test-nginx-rc-895c7691cefd086f72bac38856333824-knvbs e2e-test-nginx-rc-l4p7v " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Feb 1 14:15:38.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-848' Feb 1 14:15:38.224: INFO: stderr: "" Feb 1 14:15:38.224: INFO: stdout: "e2e-test-nginx-rc-895c7691cefd086f72bac38856333824-knvbs e2e-test-nginx-rc-l4p7v " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Feb 1 14:15:43.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-848' Feb 1 14:15:43.447: INFO: stderr: "" Feb 1 14:15:43.447: INFO: stdout: "e2e-test-nginx-rc-895c7691cefd086f72bac38856333824-knvbs e2e-test-nginx-rc-l4p7v " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Feb 1 14:15:48.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-848' Feb 1 14:15:48.648: INFO: stderr: "" Feb 1 14:15:48.648: INFO: stdout: "e2e-test-nginx-rc-895c7691cefd086f72bac38856333824-knvbs e2e-test-nginx-rc-l4p7v " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Feb 1 14:15:53.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-848' Feb 1 14:15:53.918: INFO: stderr: "" Feb 1 14:15:53.919: INFO: stdout: "e2e-test-nginx-rc-895c7691cefd086f72bac38856333824-knvbs " Feb 1 14:15:53.919: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-895c7691cefd086f72bac38856333824-knvbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-848' Feb 1 14:15:54.075: INFO: stderr: "" Feb 1 14:15:54.075: INFO: stdout: "true" Feb 1 14:15:54.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-895c7691cefd086f72bac38856333824-knvbs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-848' Feb 1 14:15:54.174: INFO: stderr: "" Feb 1 14:15:54.175: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Feb 1 14:15:54.175: INFO: e2e-test-nginx-rc-895c7691cefd086f72bac38856333824-knvbs is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Feb 1 14:15:54.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-848' Feb 1 14:15:54.289: INFO: stderr: "" Feb 1 14:15:54.289: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 14:15:54.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-848" for this suite. 
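The rolling-update test above waits by re-running `kubectl get pods -o template --template={{range .items}}{{.metadata.name}} {{end}}` every five seconds until only one replica is reported. A minimal sketch of that polling logic, with hypothetical helper names and a `get_pods` callable standing in for the kubectl invocation (so no cluster is needed):

```python
import time

def parse_pod_names(stdout: str) -> list:
    # The go-template `{{range .items}}{{.metadata.name}} {{end}}` emits a
    # space-separated list of pod names with a trailing space; split() copes
    # with both the separators and the trailing whitespace.
    return stdout.split()

def wait_for_replicas(get_pods, expected: int,
                      timeout: float = 300.0, interval: float = 5.0) -> list:
    """Poll `get_pods` (a callable returning the template stdout) until it
    reports exactly `expected` pods, mimicking the e2e wait loop."""
    deadline = time.monotonic() + timeout
    while True:
        pods = parse_pod_names(get_pods())
        if len(pods) == expected:
            return pods
        if time.monotonic() >= deadline:
            raise TimeoutError(f"expected {expected} pods, last saw {len(pods)}")
        time.sleep(interval)
```

In the log, the loop observes two pods (the old and renamed controllers' pods) for roughly a minute before the old one is reaped and the count drops to one.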
Feb 1 14:16:16.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:16:16.547: INFO: namespace kubectl-848 deletion completed in 22.17700606s
• [SLOW TEST:112.370 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:16:16.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:16:24.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-743" for this suite.
Feb 1 14:16:30.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:16:31.015: INFO: namespace kubelet-test-743 deletion completed in 6.222266887s
• [SLOW TEST:14.468 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:16:31.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 1 14:16:31.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1
--image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-3846' Feb 1 14:16:31.352: INFO: stderr: "" Feb 1 14:16:31.352: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Feb 1 14:16:41.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-3846 -o json' Feb 1 14:16:41.554: INFO: stderr: "" Feb 1 14:16:41.554: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-02-01T14:16:31Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-3846\",\n \"resourceVersion\": \"22699742\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-3846/pods/e2e-test-nginx-pod\",\n \"uid\": \"73072c1a-3956-4335-83c8-430634413371\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-jq64j\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n 
\"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-jq64j\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-jq64j\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-01T14:16:31Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-01T14:16:38Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-01T14:16:38Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-01T14:16:31Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://d7d76f78ea9818b8c15b80c705606c19adac055c528de3c41051e1c0f7283153\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-02-01T14:16:37Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.3.65\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.1\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-02-01T14:16:31Z\"\n }\n}\n" STEP: replace the image in the pod Feb 1 14:16:41.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3846' Feb 1 14:16:41.912: INFO: stderr: "" Feb 1 14:16:41.912: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Feb 1 14:16:41.953: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3846' Feb 1 14:16:49.463: INFO: stderr: "" Feb 1 14:16:49.463: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 14:16:49.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3846" for this suite. Feb 1 14:16:55.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 14:16:55.679: INFO: namespace kubectl-3846 deletion completed in 6.175385872s • [SLOW TEST:24.663 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 14:16:55.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 
STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Feb 1 14:16:56.001: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6719,SelfLink:/api/v1/namespaces/watch-6719/configmaps/e2e-watch-test-label-changed,UID:9008638f-5609-489a-8d24-b19d992335c3,ResourceVersion:22699791,Generation:0,CreationTimestamp:2020-02-01 14:16:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 1 14:16:56.001: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6719,SelfLink:/api/v1/namespaces/watch-6719/configmaps/e2e-watch-test-label-changed,UID:9008638f-5609-489a-8d24-b19d992335c3,ResourceVersion:22699792,Generation:0,CreationTimestamp:2020-02-01 14:16:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Feb 1 14:16:56.001: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6719,SelfLink:/api/v1/namespaces/watch-6719/configmaps/e2e-watch-test-label-changed,UID:9008638f-5609-489a-8d24-b19d992335c3,ResourceVersion:22699793,Generation:0,CreationTimestamp:2020-02-01 14:16:55 
+0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Feb 1 14:17:06.167: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6719,SelfLink:/api/v1/namespaces/watch-6719/configmaps/e2e-watch-test-label-changed,UID:9008638f-5609-489a-8d24-b19d992335c3,ResourceVersion:22699808,Generation:0,CreationTimestamp:2020-02-01 14:16:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 1 14:17:06.168: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6719,SelfLink:/api/v1/namespaces/watch-6719/configmaps/e2e-watch-test-label-changed,UID:9008638f-5609-489a-8d24-b19d992335c3,ResourceVersion:22699809,Generation:0,CreationTimestamp:2020-02-01 14:16:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Feb 1 14:17:06.168: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6719,SelfLink:/api/v1/namespaces/watch-6719/configmaps/e2e-watch-test-label-changed,UID:9008638f-5609-489a-8d24-b19d992335c3,ResourceVersion:22699810,Generation:0,CreationTimestamp:2020-02-01 14:16:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 14:17:06.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6719" for this suite. 
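The Watchers test above exercises label-selector watch semantics: when the configmap's label is changed away from the selector the watch reports DELETED, and when the label is restored it reports ADDED again. Those semantics can be sketched as a small event translator (a simplified model for illustration, not the actual apiserver or client-go implementation; all names here are made up):

```python
def selector_events(selector, changes):
    """Translate raw object updates into the events a label-selector watch
    would deliver: an object entering the selector yields ADDED, changing
    while it matches yields MODIFIED, and leaving the selector (including
    actual deletion, modeled as labels=None) yields DELETED."""
    inside = set()  # names currently matching the selector
    for name, labels, version in changes:
        matches = labels is not None and all(
            labels.get(key) == value for key, value in selector.items()
        )
        if matches and name not in inside:
            inside.add(name)
            yield ("ADDED", name, version)
        elif matches:
            yield ("MODIFIED", name, version)
        elif name in inside:
            inside.discard(name)
            yield ("DELETED", name, version)
        # updates to non-matching objects produce no event at all
```

This mirrors the log's sequence: create and modify produce ADDED then MODIFIED, the relabel produces DELETED, the modification while unlabeled produces nothing, and restoring the label produces a fresh ADDED.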
Feb 1 14:17:12.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:17:12.364: INFO: namespace watch-6719 deletion completed in 6.174883779s
• [SLOW TEST:16.685 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:17:12.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-e8132f9a-ecb3-413c-b89e-98c3abe1df8d
STEP: Creating a pod to test consume configMaps
Feb 1 14:17:12.540: INFO: Waiting up to 5m0s for pod "pod-configmaps-296d7f53-816d-446f-b682-806a0d31b556" in namespace "configmap-5848" to be "success or failure"
Feb 1 14:17:12.631: INFO: Pod "pod-configmaps-296d7f53-816d-446f-b682-806a0d31b556": Phase="Pending", Reason="", readiness=false. Elapsed: 91.267377ms
Feb 1 14:17:14.642: INFO: Pod "pod-configmaps-296d7f53-816d-446f-b682-806a0d31b556": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102522669s
Feb 1 14:17:16.664: INFO: Pod "pod-configmaps-296d7f53-816d-446f-b682-806a0d31b556": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123699709s
Feb 1 14:17:18.681: INFO: Pod "pod-configmaps-296d7f53-816d-446f-b682-806a0d31b556": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14115343s
Feb 1 14:17:20.689: INFO: Pod "pod-configmaps-296d7f53-816d-446f-b682-806a0d31b556": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.148894966s
STEP: Saw pod success
Feb 1 14:17:20.689: INFO: Pod "pod-configmaps-296d7f53-816d-446f-b682-806a0d31b556" satisfied condition "success or failure"
Feb 1 14:17:20.693: INFO: Trying to get logs from node iruya-node pod pod-configmaps-296d7f53-816d-446f-b682-806a0d31b556 container configmap-volume-test:
STEP: delete the pod
Feb 1 14:17:20.777: INFO: Waiting for pod pod-configmaps-296d7f53-816d-446f-b682-806a0d31b556 to disappear
Feb 1 14:17:20.800: INFO: Pod pod-configmaps-296d7f53-816d-446f-b682-806a0d31b556 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:17:20.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5848" for this suite.
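The ConfigMap volume test above waits for its pod "to be \"success or failure\"", polling the pod's phase every couple of seconds until it reaches a terminal state. A rough sketch of that wait, with an illustrative `get_phase` callable stubbed in place of the API call:

```python
import time

def wait_for_terminal_phase(get_phase, timeout: float = 300.0,
                            interval: float = 2.0) -> str:
    """Poll `get_phase` until the pod reports a terminal phase, mirroring
    the e2e framework's "success or failure" condition. Helper names are
    hypothetical; the real framework uses the Kubernetes client instead."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase  # terminal: the test then checks which one it got
        if time.monotonic() - start >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        time.sleep(interval)
```

The log shows exactly this pattern: several "Pending" samples with growing Elapsed values, then a final "Succeeded" sample followed by "Saw pod success".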
Feb 1 14:17:26.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 14:17:27.107: INFO: namespace configmap-5848 deletion completed in 6.292623078s • [SLOW TEST:14.740 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 14:17:27.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: 
Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 14:18:19.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-254" for this suite. Feb 1 14:18:25.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 14:18:26.013: INFO: namespace container-runtime-254 deletion completed in 6.175701782s • [SLOW TEST:58.906 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client
Feb 1 14:18:26.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 1 14:18:26.089: INFO: Creating ReplicaSet my-hostname-basic-269e7316-6b1f-4815-9654-6619203ac0fc
Feb 1 14:18:26.108: INFO: Pod name my-hostname-basic-269e7316-6b1f-4815-9654-6619203ac0fc: Found 0 pods out of 1
Feb 1 14:18:31.118: INFO: Pod name my-hostname-basic-269e7316-6b1f-4815-9654-6619203ac0fc: Found 1 pods out of 1
Feb 1 14:18:31.118: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-269e7316-6b1f-4815-9654-6619203ac0fc" is running
Feb 1 14:18:35.134: INFO: Pod "my-hostname-basic-269e7316-6b1f-4815-9654-6619203ac0fc-wm2z4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-01 14:18:26 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-01 14:18:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-269e7316-6b1f-4815-9654-6619203ac0fc]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-01 14:18:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-269e7316-6b1f-4815-9654-6619203ac0fc]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-01 14:18:26 +0000 UTC Reason: Message:}])
Feb 1 14:18:35.134: INFO: Trying to dial the pod
Feb 1 14:18:40.196: INFO: Controller my-hostname-basic-269e7316-6b1f-4815-9654-6619203ac0fc: Got expected result from replica 1 [my-hostname-basic-269e7316-6b1f-4815-9654-6619203ac0fc-wm2z4]: "my-hostname-basic-269e7316-6b1f-4815-9654-6619203ac0fc-wm2z4", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:18:40.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5331" for this suite.
Feb 1 14:18:46.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:18:46.395: INFO: namespace replicaset-5331 deletion completed in 6.189951531s
• [SLOW TEST:20.382 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:18:46.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 1 14:18:46.561: INFO: Waiting up to 5m0s for pod "pod-2b29f519-ca78-4bd1-a06d-d8881e471602" in namespace "emptydir-9260" to be "success or failure"
Feb 1 14:18:46.568: INFO: Pod "pod-2b29f519-ca78-4bd1-a06d-d8881e471602": Phase="Pending", Reason="", readiness=false. Elapsed: 6.73653ms
Feb 1 14:18:48.585: INFO: Pod "pod-2b29f519-ca78-4bd1-a06d-d8881e471602": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023507734s
Feb 1 14:18:50.605: INFO: Pod "pod-2b29f519-ca78-4bd1-a06d-d8881e471602": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043860236s
Feb 1 14:18:52.617: INFO: Pod "pod-2b29f519-ca78-4bd1-a06d-d8881e471602": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055164301s
Feb 1 14:18:54.626: INFO: Pod "pod-2b29f519-ca78-4bd1-a06d-d8881e471602": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064041887s
Feb 1 14:18:56.649: INFO: Pod "pod-2b29f519-ca78-4bd1-a06d-d8881e471602": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.087722831s
STEP: Saw pod success
Feb 1 14:18:56.650: INFO: Pod "pod-2b29f519-ca78-4bd1-a06d-d8881e471602" satisfied condition "success or failure"
Feb 1 14:18:56.657: INFO: Trying to get logs from node iruya-node pod pod-2b29f519-ca78-4bd1-a06d-d8881e471602 container test-container:
STEP: delete the pod
Feb 1 14:18:56.762: INFO: Waiting for pod pod-2b29f519-ca78-4bd1-a06d-d8881e471602 to disappear
Feb 1 14:18:56.768: INFO: Pod pod-2b29f519-ca78-4bd1-a06d-d8881e471602 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:18:56.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9260" for this suite.
Feb 1 14:19:02.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:19:02.976: INFO: namespace emptydir-9260 deletion completed in 6.200246727s
• [SLOW TEST:16.579 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:19:02.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Feb 1 14:19:03.096: INFO: Waiting up to 5m0s for pod "client-containers-2df4db39-0b44-46fa-b4e2-3422a8a235ee" in namespace "containers-4466" to be "success or failure"
Feb 1 14:19:03.105: INFO: Pod "client-containers-2df4db39-0b44-46fa-b4e2-3422a8a235ee": Phase="Pending", Reason="", readiness=false. Elapsed: 8.128084ms
Feb 1 14:19:05.113: INFO: Pod "client-containers-2df4db39-0b44-46fa-b4e2-3422a8a235ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01631012s
Feb 1 14:19:07.119: INFO: Pod "client-containers-2df4db39-0b44-46fa-b4e2-3422a8a235ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022713655s
Feb 1 14:19:09.133: INFO: Pod "client-containers-2df4db39-0b44-46fa-b4e2-3422a8a235ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036910393s
Feb 1 14:19:11.145: INFO: Pod "client-containers-2df4db39-0b44-46fa-b4e2-3422a8a235ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048600474s
STEP: Saw pod success
Feb 1 14:19:11.145: INFO: Pod "client-containers-2df4db39-0b44-46fa-b4e2-3422a8a235ee" satisfied condition "success or failure"
Feb 1 14:19:11.148: INFO: Trying to get logs from node iruya-node pod client-containers-2df4db39-0b44-46fa-b4e2-3422a8a235ee container test-container:
STEP: delete the pod
Feb 1 14:19:11.277: INFO: Waiting for pod client-containers-2df4db39-0b44-46fa-b4e2-3422a8a235ee to disappear
Feb 1 14:19:11.282: INFO: Pod client-containers-2df4db39-0b44-46fa-b4e2-3422a8a235ee no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:19:11.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4466" for this suite.
Feb 1 14:19:17.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:19:17.535: INFO: namespace containers-4466 deletion completed in 6.246685448s
• [SLOW TEST:14.559 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:19:17.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 1 14:19:23.891: INFO: 0 pods remaining
Feb 1 14:19:23.891: INFO: 0 pods has nil DeletionTimestamp
Feb 1 14:19:23.891: INFO:
STEP: Gathering metrics
W0201 14:19:24.723505 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 1 14:19:24.723: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:19:24.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8576" for this suite.
Feb 1 14:19:32.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:19:33.119: INFO: namespace gc-8576 deletion completed in 8.391601046s
• [SLOW TEST:15.580 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:19:33.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-7438/configmap-test-bc06257b-cc12-467b-80a1-979aaa0ccf52
STEP: Creating a pod to test consume configMaps
Feb 1 14:19:33.209: INFO: Waiting up to 5m0s for pod "pod-configmaps-05de527f-263d-4e42-a7f5-799c99572ea1" in namespace "configmap-7438" to be "success or failure"
Feb 1 14:19:33.329: INFO: Pod "pod-configmaps-05de527f-263d-4e42-a7f5-799c99572ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 119.72166ms
Feb 1 14:19:35.337: INFO: Pod "pod-configmaps-05de527f-263d-4e42-a7f5-799c99572ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127876003s
Feb 1 14:19:37.353: INFO: Pod "pod-configmaps-05de527f-263d-4e42-a7f5-799c99572ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143795419s
Feb 1 14:19:39.363: INFO: Pod "pod-configmaps-05de527f-263d-4e42-a7f5-799c99572ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.153502345s
Feb 1 14:19:41.375: INFO: Pod "pod-configmaps-05de527f-263d-4e42-a7f5-799c99572ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.165167769s
Feb 1 14:19:43.385: INFO: Pod "pod-configmaps-05de527f-263d-4e42-a7f5-799c99572ea1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.175801784s
STEP: Saw pod success
Feb 1 14:19:43.385: INFO: Pod "pod-configmaps-05de527f-263d-4e42-a7f5-799c99572ea1" satisfied condition "success or failure"
Feb 1 14:19:43.392: INFO: Trying to get logs from node iruya-node pod pod-configmaps-05de527f-263d-4e42-a7f5-799c99572ea1 container env-test:
STEP: delete the pod
Feb 1 14:19:43.454: INFO: Waiting for pod pod-configmaps-05de527f-263d-4e42-a7f5-799c99572ea1 to disappear
Feb 1 14:19:43.461: INFO: Pod pod-configmaps-05de527f-263d-4e42-a7f5-799c99572ea1 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:19:43.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7438" for this suite.
Feb 1 14:19:49.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:19:49.711: INFO: namespace configmap-7438 deletion completed in 6.223840377s
• [SLOW TEST:16.592 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:19:49.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 1 14:20:00.404: INFO: Successfully updated pod "annotationupdate123a83b0-0eac-459a-b902-c717e016969b"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:20:02.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7230" for this suite.
Feb 1 14:20:25.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:20:25.751: INFO: namespace downward-api-7230 deletion completed in 23.264605751s
• [SLOW TEST:36.036 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose
should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:20:25.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 1 14:20:25.901: INFO: namespace kubectl-5438
Feb 1 14:20:25.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5438'
Feb 1 14:20:28.165: INFO: stderr: ""
Feb 1 14:20:28.165: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 1 14:20:29.178: INFO: Selector matched 1 pods for map[app:redis]
Feb 1 14:20:29.178: INFO: Found 0 / 1
Feb 1 14:20:30.178: INFO: Selector matched 1 pods for map[app:redis]
Feb 1 14:20:30.178: INFO: Found 0 / 1
Feb 1 14:20:31.184: INFO: Selector matched 1 pods for map[app:redis]
Feb 1 14:20:31.185: INFO: Found 0 / 1
Feb 1 14:20:32.207: INFO: Selector matched 1 pods for map[app:redis]
Feb 1 14:20:32.207: INFO: Found 0 / 1
Feb 1 14:20:33.176: INFO: Selector matched 1 pods for map[app:redis]
Feb 1 14:20:33.176: INFO: Found 0 / 1
Feb 1 14:20:34.541: INFO: Selector matched 1 pods for map[app:redis]
Feb 1 14:20:34.541: INFO: Found 0 / 1
Feb 1 14:20:35.176: INFO: Selector matched 1 pods for map[app:redis]
Feb 1 14:20:35.177: INFO: Found 0 / 1
Feb 1 14:20:36.177: INFO: Selector matched 1 pods for map[app:redis]
Feb 1 14:20:36.177: INFO: Found 1 / 1
Feb 1 14:20:36.177: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Feb 1 14:20:36.182: INFO: Selector matched 1 pods for map[app:redis]
Feb 1 14:20:36.182: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Feb 1 14:20:36.182: INFO: wait on redis-master startup in kubectl-5438
Feb 1 14:20:36.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-887sg redis-master --namespace=kubectl-5438'
Feb 1 14:20:36.394: INFO: stderr: ""
Feb 1 14:20:36.394: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 Feb 14:20:34.711 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Feb 14:20:34.711 # Server started, Redis version 3.2.12\n1:M 01 Feb 14:20:34.711 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 Feb 14:20:34.711 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb 1 14:20:36.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5438'
Feb 1 14:20:36.606: INFO: stderr: ""
Feb 1 14:20:36.606: INFO: stdout: "service/rm2 exposed\n"
Feb 1 14:20:36.618: INFO: Service rm2 in namespace kubectl-5438 found.
STEP: exposing service
Feb 1 14:20:38.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5438'
Feb 1 14:20:38.872: INFO: stderr: ""
Feb 1 14:20:38.873: INFO: stdout: "service/rm3 exposed\n"
Feb 1 14:20:38.947: INFO: Service rm3 in namespace kubectl-5438 found.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:20:40.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5438" for this suite.
Feb 1 14:21:03.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:21:03.125: INFO: namespace kubectl-5438 deletion completed in 22.163295784s
• [SLOW TEST:37.373 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl expose
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:21:03.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 1 14:21:03.234: INFO: Waiting up to 5m0s for pod "pod-85405d76-ae3f-4402-b7fd-ae6c68bd8b3d" in namespace "emptydir-8789" to be "success or failure"
Feb 1 14:21:03.239: INFO: Pod "pod-85405d76-ae3f-4402-b7fd-ae6c68bd8b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.604952ms
Feb 1 14:21:05.254: INFO: Pod "pod-85405d76-ae3f-4402-b7fd-ae6c68bd8b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019640066s
Feb 1 14:21:07.262: INFO: Pod "pod-85405d76-ae3f-4402-b7fd-ae6c68bd8b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027767471s
Feb 1 14:21:09.279: INFO: Pod "pod-85405d76-ae3f-4402-b7fd-ae6c68bd8b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043962542s
Feb 1 14:21:11.554: INFO: Pod "pod-85405d76-ae3f-4402-b7fd-ae6c68bd8b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.319027494s
Feb 1 14:21:13.564: INFO: Pod "pod-85405d76-ae3f-4402-b7fd-ae6c68bd8b3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.328966868s
STEP: Saw pod success
Feb 1 14:21:13.564: INFO: Pod "pod-85405d76-ae3f-4402-b7fd-ae6c68bd8b3d" satisfied condition "success or failure"
Feb 1 14:21:13.571: INFO: Trying to get logs from node iruya-node pod pod-85405d76-ae3f-4402-b7fd-ae6c68bd8b3d container test-container:
STEP: delete the pod
Feb 1 14:21:13.863: INFO: Waiting for pod pod-85405d76-ae3f-4402-b7fd-ae6c68bd8b3d to disappear
Feb 1 14:21:13.876: INFO: Pod pod-85405d76-ae3f-4402-b7fd-ae6c68bd8b3d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:21:13.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8789" for this suite.
Feb 1 14:21:19.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:21:20.051: INFO: namespace emptydir-8789 deletion completed in 6.163335072s
• [SLOW TEST:16.926 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:21:20.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 1 14:21:20.139: INFO: Waiting up to 5m0s for pod "pod-528c0f6a-30cf-47ca-9ccd-2ff5111e2d09" in namespace "emptydir-847" to be "success or failure"
Feb 1 14:21:20.148: INFO: Pod "pod-528c0f6a-30cf-47ca-9ccd-2ff5111e2d09": Phase="Pending", Reason="", readiness=false. Elapsed: 8.538378ms
Feb 1 14:21:22.168: INFO: Pod "pod-528c0f6a-30cf-47ca-9ccd-2ff5111e2d09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029177844s
Feb 1 14:21:24.179: INFO: Pod "pod-528c0f6a-30cf-47ca-9ccd-2ff5111e2d09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039944745s
Feb 1 14:21:26.187: INFO: Pod "pod-528c0f6a-30cf-47ca-9ccd-2ff5111e2d09": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047533216s
Feb 1 14:21:28.198: INFO: Pod "pod-528c0f6a-30cf-47ca-9ccd-2ff5111e2d09": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058781383s
Feb 1 14:21:30.209: INFO: Pod "pod-528c0f6a-30cf-47ca-9ccd-2ff5111e2d09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069704869s
STEP: Saw pod success
Feb 1 14:21:30.209: INFO: Pod "pod-528c0f6a-30cf-47ca-9ccd-2ff5111e2d09" satisfied condition "success or failure"
Feb 1 14:21:30.214: INFO: Trying to get logs from node iruya-node pod pod-528c0f6a-30cf-47ca-9ccd-2ff5111e2d09 container test-container:
STEP: delete the pod
Feb 1 14:21:30.306: INFO: Waiting for pod pod-528c0f6a-30cf-47ca-9ccd-2ff5111e2d09 to disappear
Feb 1 14:21:30.326: INFO: Pod pod-528c0f6a-30cf-47ca-9ccd-2ff5111e2d09 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:21:30.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-847" for this suite.
Feb 1 14:21:36.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:21:36.541: INFO: namespace emptydir-847 deletion completed in 6.180165754s
• [SLOW TEST:16.488 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-node] Downward API
should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:21:36.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 1 14:21:36.634: INFO: Waiting up to 5m0s for pod "downward-api-745f59f3-3d20-49d4-ae26-d0da16127d18" in namespace "downward-api-7482" to be "success or failure"
Feb 1 14:21:36.643: INFO: Pod "downward-api-745f59f3-3d20-49d4-ae26-d0da16127d18": Phase="Pending", Reason="", readiness=false. Elapsed: 8.500935ms
Feb 1 14:21:38.666: INFO: Pod "downward-api-745f59f3-3d20-49d4-ae26-d0da16127d18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032065152s
Feb 1 14:21:40.673: INFO: Pod "downward-api-745f59f3-3d20-49d4-ae26-d0da16127d18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039107391s
Feb 1 14:21:42.682: INFO: Pod "downward-api-745f59f3-3d20-49d4-ae26-d0da16127d18": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04785819s
Feb 1 14:21:44.690: INFO: Pod "downward-api-745f59f3-3d20-49d4-ae26-d0da16127d18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.056333986s
STEP: Saw pod success
Feb 1 14:21:44.691: INFO: Pod "downward-api-745f59f3-3d20-49d4-ae26-d0da16127d18" satisfied condition "success or failure"
Feb 1 14:21:44.694: INFO: Trying to get logs from node iruya-node pod downward-api-745f59f3-3d20-49d4-ae26-d0da16127d18 container dapi-container:
STEP: delete the pod
Feb 1 14:21:44.763: INFO: Waiting for pod downward-api-745f59f3-3d20-49d4-ae26-d0da16127d18 to disappear
Feb 1 14:21:44.774: INFO: Pod downward-api-745f59f3-3d20-49d4-ae26-d0da16127d18 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:21:44.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7482" for this suite.
Feb 1 14:21:50.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:21:50.997: INFO: namespace downward-api-7482 deletion completed in 6.214550838s
• [SLOW TEST:14.455 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:21:50.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 1 14:21:51.905: INFO: Pod name wrapped-volume-race-46bc5978-c77d-4b7b-8ba1-c8d1ad60218f: Found 0 pods out of 5
Feb 1 14:21:56.927: INFO: Pod name wrapped-volume-race-46bc5978-c77d-4b7b-8ba1-c8d1ad60218f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-46bc5978-c77d-4b7b-8ba1-c8d1ad60218f in namespace emptydir-wrapper-1213, will wait for the garbage collector to delete the pods
Feb 1 14:22:27.030: INFO: Deleting ReplicationController wrapped-volume-race-46bc5978-c77d-4b7b-8ba1-c8d1ad60218f took: 10.507612ms
Feb 1 14:22:27.431: INFO: Terminating ReplicationController wrapped-volume-race-46bc5978-c77d-4b7b-8ba1-c8d1ad60218f pods took: 400.852096ms
STEP: Creating RC which spawns configmap-volume pods
Feb 1 14:23:17.103: INFO: Pod name wrapped-volume-race-9ca49a46-84bf-4771-b2df-e582987a9799: Found 0 pods out of 5
Feb 1 14:23:22.125: INFO: Pod name wrapped-volume-race-9ca49a46-84bf-4771-b2df-e582987a9799: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-9ca49a46-84bf-4771-b2df-e582987a9799 in namespace emptydir-wrapper-1213, will wait for the garbage collector to delete the pods
Feb 1 14:23:58.325: INFO: Deleting ReplicationController wrapped-volume-race-9ca49a46-84bf-4771-b2df-e582987a9799 took: 70.434635ms
Feb 1 14:23:58.626: INFO: Terminating ReplicationController wrapped-volume-race-9ca49a46-84bf-4771-b2df-e582987a9799 pods took: 301.166547ms
STEP: Creating RC which spawns configmap-volume pods
Feb 1 14:24:47.280: INFO: Pod name wrapped-volume-race-d26ad848-fc34-44c4-96c1-b12fa38cfe18: Found 0 pods out of 5
Feb 1 14:24:52.291: INFO: Pod name wrapped-volume-race-d26ad848-fc34-44c4-96c1-b12fa38cfe18: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d26ad848-fc34-44c4-96c1-b12fa38cfe18 in namespace emptydir-wrapper-1213, will wait for the garbage collector to delete the pods
Feb 1 14:25:28.425: INFO: Deleting ReplicationController wrapped-volume-race-d26ad848-fc34-44c4-96c1-b12fa38cfe18 took: 12.85352ms
Feb 1 14:25:28.825: INFO: Terminating ReplicationController wrapped-volume-race-d26ad848-fc34-44c4-96c1-b12fa38cfe18 pods took: 400.771785ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:26:17.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1213" for this suite.
Feb 1 14:26:27.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:26:28.036: INFO: namespace emptydir-wrapper-1213 deletion completed in 10.166476028s
• [SLOW TEST:277.038 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-node] Downward API
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:26:28.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 1 14:26:28.184: INFO: Waiting up to 5m0s for pod "downward-api-fdc0e889-ea6c-4cc0-802f-5f148eb852ee" in namespace "downward-api-6282" to be "success or failure"
Feb 1 14:26:28.197: INFO: Pod "downward-api-fdc0e889-ea6c-4cc0-802f-5f148eb852ee": Phase="Pending", Reason="", readiness=false.
Elapsed: 13.413909ms Feb 1 14:26:30.206: INFO: Pod "downward-api-fdc0e889-ea6c-4cc0-802f-5f148eb852ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022639409s Feb 1 14:26:32.225: INFO: Pod "downward-api-fdc0e889-ea6c-4cc0-802f-5f148eb852ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041044713s Feb 1 14:26:34.236: INFO: Pod "downward-api-fdc0e889-ea6c-4cc0-802f-5f148eb852ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051851674s Feb 1 14:26:36.257: INFO: Pod "downward-api-fdc0e889-ea6c-4cc0-802f-5f148eb852ee": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073344166s Feb 1 14:26:38.266: INFO: Pod "downward-api-fdc0e889-ea6c-4cc0-802f-5f148eb852ee": Phase="Pending", Reason="", readiness=false. Elapsed: 10.0822177s Feb 1 14:26:40.275: INFO: Pod "downward-api-fdc0e889-ea6c-4cc0-802f-5f148eb852ee": Phase="Pending", Reason="", readiness=false. Elapsed: 12.091694905s Feb 1 14:26:42.290: INFO: Pod "downward-api-fdc0e889-ea6c-4cc0-802f-5f148eb852ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.106589839s STEP: Saw pod success Feb 1 14:26:42.291: INFO: Pod "downward-api-fdc0e889-ea6c-4cc0-802f-5f148eb852ee" satisfied condition "success or failure" Feb 1 14:26:42.294: INFO: Trying to get logs from node iruya-node pod downward-api-fdc0e889-ea6c-4cc0-802f-5f148eb852ee container dapi-container: STEP: delete the pod Feb 1 14:26:42.428: INFO: Waiting for pod downward-api-fdc0e889-ea6c-4cc0-802f-5f148eb852ee to disappear Feb 1 14:26:42.440: INFO: Pod downward-api-fdc0e889-ea6c-4cc0-802f-5f148eb852ee no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 14:26:42.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6282" for this suite. 
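[Editor's note] The two Downward API cases above exercise pods that expose container resource values as environment variables via `resourceFieldRef`. A minimal sketch of such a manifest follows; the pod name, image, and command are illustrative assumptions, not values taken from this run. When the container declares no limits, the reported values fall back to node allocatable, which is what the "default limits.cpu/memory from node allocatable" case checks.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo        # illustrative name, not from the run above
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29          # assumed image for illustration
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.memory
```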
Feb 1 14:26:48.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 14:26:48.666: INFO: namespace downward-api-6282 deletion completed in 6.219638075s • [SLOW TEST:20.630 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 14:26:48.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-txnm STEP: Creating a pod to test atomic-volume-subpath Feb 1 14:26:48.833: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-txnm" in namespace "subpath-8402" to be "success or failure" Feb 1 14:26:48.839: INFO: Pod "pod-subpath-test-downwardapi-txnm": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.395776ms Feb 1 14:26:50.855: INFO: Pod "pod-subpath-test-downwardapi-txnm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022444076s Feb 1 14:26:52.872: INFO: Pod "pod-subpath-test-downwardapi-txnm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038661526s Feb 1 14:26:54.891: INFO: Pod "pod-subpath-test-downwardapi-txnm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05780355s Feb 1 14:26:56.898: INFO: Pod "pod-subpath-test-downwardapi-txnm": Phase="Running", Reason="", readiness=true. Elapsed: 8.064546298s Feb 1 14:26:58.914: INFO: Pod "pod-subpath-test-downwardapi-txnm": Phase="Running", Reason="", readiness=true. Elapsed: 10.081255094s Feb 1 14:27:00.925: INFO: Pod "pod-subpath-test-downwardapi-txnm": Phase="Running", Reason="", readiness=true. Elapsed: 12.092120691s Feb 1 14:27:02.936: INFO: Pod "pod-subpath-test-downwardapi-txnm": Phase="Running", Reason="", readiness=true. Elapsed: 14.102747085s Feb 1 14:27:04.970: INFO: Pod "pod-subpath-test-downwardapi-txnm": Phase="Running", Reason="", readiness=true. Elapsed: 16.136837317s Feb 1 14:27:06.978: INFO: Pod "pod-subpath-test-downwardapi-txnm": Phase="Running", Reason="", readiness=true. Elapsed: 18.144791436s Feb 1 14:27:08.990: INFO: Pod "pod-subpath-test-downwardapi-txnm": Phase="Running", Reason="", readiness=true. Elapsed: 20.15716904s Feb 1 14:27:10.999: INFO: Pod "pod-subpath-test-downwardapi-txnm": Phase="Running", Reason="", readiness=true. Elapsed: 22.165676214s Feb 1 14:27:13.016: INFO: Pod "pod-subpath-test-downwardapi-txnm": Phase="Running", Reason="", readiness=true. Elapsed: 24.182542968s Feb 1 14:27:15.032: INFO: Pod "pod-subpath-test-downwardapi-txnm": Phase="Running", Reason="", readiness=true. Elapsed: 26.198560747s Feb 1 14:27:17.051: INFO: Pod "pod-subpath-test-downwardapi-txnm": Phase="Running", Reason="", readiness=true. 
Elapsed: 28.218324352s Feb 1 14:27:19.058: INFO: Pod "pod-subpath-test-downwardapi-txnm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.225453579s STEP: Saw pod success Feb 1 14:27:19.059: INFO: Pod "pod-subpath-test-downwardapi-txnm" satisfied condition "success or failure" Feb 1 14:27:19.064: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-txnm container test-container-subpath-downwardapi-txnm: STEP: delete the pod Feb 1 14:27:19.142: INFO: Waiting for pod pod-subpath-test-downwardapi-txnm to disappear Feb 1 14:27:19.216: INFO: Pod pod-subpath-test-downwardapi-txnm no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-txnm Feb 1 14:27:19.217: INFO: Deleting pod "pod-subpath-test-downwardapi-txnm" in namespace "subpath-8402" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 14:27:19.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8402" for this suite. 
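[Editor's note] The subpath test above mounts a Downward API volume through a `subPath`, verifying that the atomically-updated projected file stays readable at the sub-mounted path. A hedged sketch of the shape of such a pod (names and image are assumptions, not the exact spec used by the framework):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: subpath-downward-demo    # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
  containers:
  - name: test-container
    image: busybox:1.29          # assumed image
    command: ["cat", "/vol/podname"]
    volumeMounts:
    - name: downward
      mountPath: /vol/podname
      subPath: podname           # mount a single file out of the volume
```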
Feb 1 14:27:25.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 14:27:25.399: INFO: namespace subpath-8402 deletion completed in 6.172210745s • [SLOW TEST:36.731 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 14:27:25.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Feb 1 14:27:25.557: INFO: Waiting up to 5m0s for pod "pod-7116fff6-02d6-4dd4-8a9a-3b3f0c0aee7d" in namespace "emptydir-7115" to be "success or failure" Feb 1 14:27:25.564: INFO: Pod "pod-7116fff6-02d6-4dd4-8a9a-3b3f0c0aee7d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.549776ms Feb 1 14:27:27.572: INFO: Pod "pod-7116fff6-02d6-4dd4-8a9a-3b3f0c0aee7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014237396s Feb 1 14:27:29.580: INFO: Pod "pod-7116fff6-02d6-4dd4-8a9a-3b3f0c0aee7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022323859s Feb 1 14:27:31.592: INFO: Pod "pod-7116fff6-02d6-4dd4-8a9a-3b3f0c0aee7d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034884865s Feb 1 14:27:33.624: INFO: Pod "pod-7116fff6-02d6-4dd4-8a9a-3b3f0c0aee7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06672439s STEP: Saw pod success Feb 1 14:27:33.624: INFO: Pod "pod-7116fff6-02d6-4dd4-8a9a-3b3f0c0aee7d" satisfied condition "success or failure" Feb 1 14:27:33.629: INFO: Trying to get logs from node iruya-node pod pod-7116fff6-02d6-4dd4-8a9a-3b3f0c0aee7d container test-container: STEP: delete the pod Feb 1 14:27:33.764: INFO: Waiting for pod pod-7116fff6-02d6-4dd4-8a9a-3b3f0c0aee7d to disappear Feb 1 14:27:33.772: INFO: Pod pod-7116fff6-02d6-4dd4-8a9a-3b3f0c0aee7d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 14:27:33.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7115" for this suite. 
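[Editor's note] The emptyDir case above checks the permissions of a default-medium volume mount. Roughly, the pod under test looks like the following (an illustrative sketch; name, image, and command are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo       # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: scratch
    emptyDir: {}                 # default medium: the node's backing storage
  containers:
  - name: test-container
    image: busybox:1.29          # assumed image
    command: ["sh", "-c", "ls -ld /scratch"]   # the test asserts mode 0777 here
    volumeMounts:
    - name: scratch
      mountPath: /scratch
```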
Feb 1 14:27:40.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 14:27:40.145: INFO: namespace emptydir-7115 deletion completed in 6.337884702s • [SLOW TEST:14.745 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 14:27:40.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Feb 1 14:27:40.211: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix022742339/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 14:27:40.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9003" for this suite. 
Feb 1 14:27:46.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 14:27:46.467: INFO: namespace kubectl-9003 deletion completed in 6.180389115s • [SLOW TEST:6.322 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 14:27:46.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 1 14:27:46.572: INFO: Creating deployment "test-recreate-deployment" Feb 1 14:27:46.586: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Feb 1 14:27:46.600: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Feb 1 14:27:48.618: INFO: Waiting deployment "test-recreate-deployment" to complete Feb 1 14:27:48.622: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716164066, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716164066, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716164066, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716164066, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 1 14:27:50.631: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716164066, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716164066, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716164066, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716164066, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 1 14:27:52.636: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716164066, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716164066, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716164066, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716164066, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 1 14:27:54.632: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Feb 1 14:27:54.651: INFO: Updating deployment test-recreate-deployment Feb 1 14:27:54.651: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 1 14:27:55.018: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-2629,SelfLink:/apis/apps/v1/namespaces/deployment-2629/deployments/test-recreate-deployment,UID:aca7ca4a-0842-405e-bad2-c0a143959c2d,ResourceVersion:22702095,Generation:2,CreationTimestamp:2020-02-01 14:27:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-01 14:27:54 +0000 UTC 2020-02-01 14:27:54 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-01 14:27:54 +0000 UTC 2020-02-01 14:27:46 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Feb 1 14:27:55.085: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-2629,SelfLink:/apis/apps/v1/namespaces/deployment-2629/replicasets/test-recreate-deployment-5c8c9cc69d,UID:45ed65bd-e156-4eba-a2dd-0b7e286250ce,ResourceVersion:22702094,Generation:1,CreationTimestamp:2020-02-01 14:27:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment aca7ca4a-0842-405e-bad2-c0a143959c2d 0xc002fa8527 0xc002fa8528}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 1 14:27:55.085: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Feb 1 14:27:55.086: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-2629,SelfLink:/apis/apps/v1/namespaces/deployment-2629/replicasets/test-recreate-deployment-6df85df6b9,UID:de82b100-c469-41a9-8f41-4ebf94f78139,ResourceVersion:22702083,Generation:2,CreationTimestamp:2020-02-01 14:27:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment aca7ca4a-0842-405e-bad2-c0a143959c2d 0xc002fa85f7 0xc002fa85f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 1 14:27:55.103: INFO: Pod "test-recreate-deployment-5c8c9cc69d-6hfqp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-6hfqp,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-2629,SelfLink:/api/v1/namespaces/deployment-2629/pods/test-recreate-deployment-5c8c9cc69d-6hfqp,UID:925d203a-b2c8-4812-8a9a-0bc1e2e308e5,ResourceVersion:22702096,Generation:0,CreationTimestamp:2020-02-01 14:27:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 45ed65bd-e156-4eba-a2dd-0b7e286250ce 0xc002bc6bc7 0xc002bc6bc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmr6l {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmr6l,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kmr6l true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002bc6c40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002bc6c60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 14:27:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 14:27:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 14:27:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 14:27:54 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-01 14:27:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 14:27:55.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2629" for this suite. 
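[Editor's note] The struct dumps above come from a Deployment with `strategy.type: Recreate`, where all old pods are scaled down before any new pods start (hence the `MinimumReplicasUnavailable` condition during rollout). A sketch of such a Deployment, using the images visible in the dumps; the name and labels are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-demo            # illustrative name
spec:
  replicas: 1
  strategy:
    type: Recreate               # terminate old pods before creating new ones
  selector:
    matchLabels:
      app: recreate-demo
  template:
    metadata:
      labels:
        app: recreate-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine   # the new-revision image seen in this run
```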
Feb 1 14:28:01.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 14:28:01.237: INFO: namespace deployment-2629 deletion completed in 6.126240765s • [SLOW TEST:14.769 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 14:28:01.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Feb 1 14:28:01.639: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7730" to be "success or failure" Feb 1 14:28:01.801: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 162.195591ms Feb 1 14:28:03.810: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17160411s Feb 1 14:28:05.817: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.178440489s
Feb 1 14:28:07.829: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.190157312s
Feb 1 14:28:09.843: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.204045186s
Feb 1 14:28:11.862: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.222793594s
Feb 1 14:28:13.883: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.244454554s
Feb 1 14:28:15.896: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.256843809s
STEP: Saw pod success
Feb 1 14:28:15.896: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 1 14:28:15.901: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1:
STEP: delete the pod
Feb 1 14:28:15.971: INFO: Waiting for pod pod-host-path-test to disappear
Feb 1 14:28:16.008: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:28:16.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-7730" for this suite.
Feb 1 14:28:22.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:28:22.169: INFO: namespace hostpath-7730 deletion completed in 6.150001352s
• [SLOW TEST:20.931 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:28:22.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3021
[It] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb 1 14:28:22.348: INFO: Found 0 stateful pods, waiting for 3
Feb 1 14:28:32.486: INFO: Found 2 stateful pods, waiting for 3
Feb 1 14:28:42.361: INFO: Waiting for pod ss2-0 to
enter Running - Ready=true, currently Running - Ready=true Feb 1 14:28:42.361: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 1 14:28:42.361: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 1 14:28:52.358: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 1 14:28:52.358: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 1 14:28:52.358: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Feb 1 14:28:52.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3021 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 1 14:28:52.933: INFO: stderr: "I0201 14:28:52.604801 1767 log.go:172] (0xc000966420) (0xc0005b2960) Create stream\nI0201 14:28:52.605515 1767 log.go:172] (0xc000966420) (0xc0005b2960) Stream added, broadcasting: 1\nI0201 14:28:52.614264 1767 log.go:172] (0xc000966420) Reply frame received for 1\nI0201 14:28:52.614315 1767 log.go:172] (0xc000966420) (0xc000750000) Create stream\nI0201 14:28:52.614331 1767 log.go:172] (0xc000966420) (0xc000750000) Stream added, broadcasting: 3\nI0201 14:28:52.616225 1767 log.go:172] (0xc000966420) Reply frame received for 3\nI0201 14:28:52.616244 1767 log.go:172] (0xc000966420) (0xc0007500a0) Create stream\nI0201 14:28:52.616252 1767 log.go:172] (0xc000966420) (0xc0007500a0) Stream added, broadcasting: 5\nI0201 14:28:52.618479 1767 log.go:172] (0xc000966420) Reply frame received for 5\nI0201 14:28:52.762253 1767 log.go:172] (0xc000966420) Data frame received for 5\nI0201 14:28:52.762292 1767 log.go:172] (0xc0007500a0) (5) Data frame handling\nI0201 14:28:52.762308 1767 log.go:172] (0xc0007500a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0201 14:28:52.841476 1767 log.go:172] (0xc000966420) Data frame 
received for 3\nI0201 14:28:52.841511 1767 log.go:172] (0xc000750000) (3) Data frame handling\nI0201 14:28:52.841534 1767 log.go:172] (0xc000750000) (3) Data frame sent\nI0201 14:28:52.925885 1767 log.go:172] (0xc000966420) Data frame received for 1\nI0201 14:28:52.925974 1767 log.go:172] (0xc0005b2960) (1) Data frame handling\nI0201 14:28:52.925989 1767 log.go:172] (0xc0005b2960) (1) Data frame sent\nI0201 14:28:52.926078 1767 log.go:172] (0xc000966420) (0xc000750000) Stream removed, broadcasting: 3\nI0201 14:28:52.926135 1767 log.go:172] (0xc000966420) (0xc0005b2960) Stream removed, broadcasting: 1\nI0201 14:28:52.927018 1767 log.go:172] (0xc000966420) (0xc0007500a0) Stream removed, broadcasting: 5\nI0201 14:28:52.927049 1767 log.go:172] (0xc000966420) (0xc0005b2960) Stream removed, broadcasting: 1\nI0201 14:28:52.927057 1767 log.go:172] (0xc000966420) (0xc000750000) Stream removed, broadcasting: 3\nI0201 14:28:52.927105 1767 log.go:172] (0xc000966420) (0xc0007500a0) Stream removed, broadcasting: 5\n" Feb 1 14:28:52.933: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 1 14:28:52.933: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Feb 1 14:29:02.992: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Feb 1 14:29:13.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3021 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 14:29:13.498: INFO: stderr: "I0201 14:29:13.298172 1786 log.go:172] (0xc0009c2420) (0xc0005d2b40) Create stream\nI0201 14:29:13.298345 1786 log.go:172] (0xc0009c2420) (0xc0005d2b40) Stream added, broadcasting: 1\nI0201 14:29:13.302114 1786 log.go:172] (0xc0009c2420) 
Reply frame received for 1\nI0201 14:29:13.305515 1786 log.go:172] (0xc0009c2420) (0xc0009d0000) Create stream\nI0201 14:29:13.305686 1786 log.go:172] (0xc0009c2420) (0xc0009d0000) Stream added, broadcasting: 3\nI0201 14:29:13.310751 1786 log.go:172] (0xc0009c2420) Reply frame received for 3\nI0201 14:29:13.310988 1786 log.go:172] (0xc0009c2420) (0xc0005ba000) Create stream\nI0201 14:29:13.311081 1786 log.go:172] (0xc0009c2420) (0xc0005ba000) Stream added, broadcasting: 5\nI0201 14:29:13.315573 1786 log.go:172] (0xc0009c2420) Reply frame received for 5\nI0201 14:29:13.401364 1786 log.go:172] (0xc0009c2420) Data frame received for 5\nI0201 14:29:13.401438 1786 log.go:172] (0xc0005ba000) (5) Data frame handling\nI0201 14:29:13.401465 1786 log.go:172] (0xc0005ba000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0201 14:29:13.401960 1786 log.go:172] (0xc0009c2420) Data frame received for 3\nI0201 14:29:13.401983 1786 log.go:172] (0xc0009d0000) (3) Data frame handling\nI0201 14:29:13.401997 1786 log.go:172] (0xc0009d0000) (3) Data frame sent\nI0201 14:29:13.487567 1786 log.go:172] (0xc0009c2420) Data frame received for 1\nI0201 14:29:13.487640 1786 log.go:172] (0xc0009c2420) (0xc0009d0000) Stream removed, broadcasting: 3\nI0201 14:29:13.487683 1786 log.go:172] (0xc0005d2b40) (1) Data frame handling\nI0201 14:29:13.487703 1786 log.go:172] (0xc0005d2b40) (1) Data frame sent\nI0201 14:29:13.487750 1786 log.go:172] (0xc0009c2420) (0xc0005ba000) Stream removed, broadcasting: 5\nI0201 14:29:13.487776 1786 log.go:172] (0xc0009c2420) (0xc0005d2b40) Stream removed, broadcasting: 1\nI0201 14:29:13.487805 1786 log.go:172] (0xc0009c2420) Go away received\nI0201 14:29:13.488799 1786 log.go:172] (0xc0009c2420) (0xc0005d2b40) Stream removed, broadcasting: 1\nI0201 14:29:13.488827 1786 log.go:172] (0xc0009c2420) (0xc0009d0000) Stream removed, broadcasting: 3\nI0201 14:29:13.488837 1786 log.go:172] (0xc0009c2420) (0xc0005ba000) Stream removed, broadcasting: 5\n" 
Feb 1 14:29:13.498: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 1 14:29:13.498: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Feb 1 14:29:23.535: INFO: Waiting for StatefulSet statefulset-3021/ss2 to complete update
Feb 1 14:29:23.535: INFO: Waiting for Pod statefulset-3021/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 1 14:29:23.535: INFO: Waiting for Pod statefulset-3021/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 1 14:29:33.553: INFO: Waiting for StatefulSet statefulset-3021/ss2 to complete update
Feb 1 14:29:33.554: INFO: Waiting for Pod statefulset-3021/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 1 14:29:33.554: INFO: Waiting for Pod statefulset-3021/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 1 14:29:43.553: INFO: Waiting for StatefulSet statefulset-3021/ss2 to complete update
Feb 1 14:29:43.554: INFO: Waiting for Pod statefulset-3021/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 1 14:29:53.560: INFO: Waiting for StatefulSet statefulset-3021/ss2 to complete update
Feb 1 14:29:53.560: INFO: Waiting for Pod statefulset-3021/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 1 14:30:03.555: INFO: Waiting for StatefulSet statefulset-3021/ss2 to complete update
STEP: Rolling back to a previous revision
Feb 1 14:30:13.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3021 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 1 14:30:14.292: INFO: stderr: "I0201 14:30:13.762482 1809 log.go:172] (0xc0009b4420) (0xc00089c640) Create stream\nI0201 14:30:13.762705 1809 log.go:172] (0xc0009b4420) (0xc00089c640) Stream added, broadcasting: 1\nI0201 14:30:13.767018 1809 log.go:172] (0xc0009b4420) Reply frame
received for 1\nI0201 14:30:13.767134 1809 log.go:172] (0xc0009b4420) (0xc0008fc000) Create stream\nI0201 14:30:13.767157 1809 log.go:172] (0xc0009b4420) (0xc0008fc000) Stream added, broadcasting: 3\nI0201 14:30:13.769960 1809 log.go:172] (0xc0009b4420) Reply frame received for 3\nI0201 14:30:13.769999 1809 log.go:172] (0xc0009b4420) (0xc00089c6e0) Create stream\nI0201 14:30:13.770007 1809 log.go:172] (0xc0009b4420) (0xc00089c6e0) Stream added, broadcasting: 5\nI0201 14:30:13.781406 1809 log.go:172] (0xc0009b4420) Reply frame received for 5\nI0201 14:30:13.918360 1809 log.go:172] (0xc0009b4420) Data frame received for 5\nI0201 14:30:13.918464 1809 log.go:172] (0xc00089c6e0) (5) Data frame handling\nI0201 14:30:13.918491 1809 log.go:172] (0xc00089c6e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0201 14:30:14.205322 1809 log.go:172] (0xc0009b4420) Data frame received for 3\nI0201 14:30:14.205350 1809 log.go:172] (0xc0008fc000) (3) Data frame handling\nI0201 14:30:14.205364 1809 log.go:172] (0xc0008fc000) (3) Data frame sent\nI0201 14:30:14.284841 1809 log.go:172] (0xc0009b4420) Data frame received for 1\nI0201 14:30:14.284891 1809 log.go:172] (0xc00089c640) (1) Data frame handling\nI0201 14:30:14.284922 1809 log.go:172] (0xc00089c640) (1) Data frame sent\nI0201 14:30:14.285194 1809 log.go:172] (0xc0009b4420) (0xc00089c640) Stream removed, broadcasting: 1\nI0201 14:30:14.285397 1809 log.go:172] (0xc0009b4420) (0xc0008fc000) Stream removed, broadcasting: 3\nI0201 14:30:14.285661 1809 log.go:172] (0xc0009b4420) (0xc00089c6e0) Stream removed, broadcasting: 5\nI0201 14:30:14.285694 1809 log.go:172] (0xc0009b4420) (0xc00089c640) Stream removed, broadcasting: 1\nI0201 14:30:14.285701 1809 log.go:172] (0xc0009b4420) (0xc0008fc000) Stream removed, broadcasting: 3\nI0201 14:30:14.285706 1809 log.go:172] (0xc0009b4420) (0xc00089c6e0) Stream removed, broadcasting: 5\n" Feb 1 14:30:14.292: INFO: stdout: "'/usr/share/nginx/html/index.html' -> 
'/tmp/index.html'\n" Feb 1 14:30:14.292: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 1 14:30:25.072: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Feb 1 14:30:35.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3021 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 14:30:37.283: INFO: stderr: "I0201 14:30:37.065123 1830 log.go:172] (0xc00012adc0) (0xc000668820) Create stream\nI0201 14:30:37.065191 1830 log.go:172] (0xc00012adc0) (0xc000668820) Stream added, broadcasting: 1\nI0201 14:30:37.072765 1830 log.go:172] (0xc00012adc0) Reply frame received for 1\nI0201 14:30:37.072839 1830 log.go:172] (0xc00012adc0) (0xc0007240a0) Create stream\nI0201 14:30:37.072858 1830 log.go:172] (0xc00012adc0) (0xc0007240a0) Stream added, broadcasting: 3\nI0201 14:30:37.075878 1830 log.go:172] (0xc00012adc0) Reply frame received for 3\nI0201 14:30:37.075947 1830 log.go:172] (0xc00012adc0) (0xc00089c000) Create stream\nI0201 14:30:37.075967 1830 log.go:172] (0xc00012adc0) (0xc00089c000) Stream added, broadcasting: 5\nI0201 14:30:37.077890 1830 log.go:172] (0xc00012adc0) Reply frame received for 5\nI0201 14:30:37.187284 1830 log.go:172] (0xc00012adc0) Data frame received for 3\nI0201 14:30:37.187685 1830 log.go:172] (0xc0007240a0) (3) Data frame handling\nI0201 14:30:37.187724 1830 log.go:172] (0xc0007240a0) (3) Data frame sent\nI0201 14:30:37.187880 1830 log.go:172] (0xc00012adc0) Data frame received for 5\nI0201 14:30:37.187896 1830 log.go:172] (0xc00089c000) (5) Data frame handling\nI0201 14:30:37.187931 1830 log.go:172] (0xc00089c000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0201 14:30:37.274381 1830 log.go:172] (0xc00012adc0) (0xc0007240a0) Stream removed, broadcasting: 3\nI0201 14:30:37.274527 1830 log.go:172] (0xc00012adc0) Data frame 
received for 1\nI0201 14:30:37.274604 1830 log.go:172] (0xc00012adc0) (0xc00089c000) Stream removed, broadcasting: 5\nI0201 14:30:37.274678 1830 log.go:172] (0xc000668820) (1) Data frame handling\nI0201 14:30:37.274717 1830 log.go:172] (0xc000668820) (1) Data frame sent\nI0201 14:30:37.274733 1830 log.go:172] (0xc00012adc0) (0xc000668820) Stream removed, broadcasting: 1\nI0201 14:30:37.274764 1830 log.go:172] (0xc00012adc0) Go away received\nI0201 14:30:37.275532 1830 log.go:172] (0xc00012adc0) (0xc000668820) Stream removed, broadcasting: 1\nI0201 14:30:37.275549 1830 log.go:172] (0xc00012adc0) (0xc0007240a0) Stream removed, broadcasting: 3\nI0201 14:30:37.275567 1830 log.go:172] (0xc00012adc0) (0xc00089c000) Stream removed, broadcasting: 5\n" Feb 1 14:30:37.284: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 1 14:30:37.284: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 1 14:30:47.329: INFO: Waiting for StatefulSet statefulset-3021/ss2 to complete update Feb 1 14:30:47.329: INFO: Waiting for Pod statefulset-3021/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 1 14:30:47.329: INFO: Waiting for Pod statefulset-3021/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 1 14:30:57.345: INFO: Waiting for StatefulSet statefulset-3021/ss2 to complete update Feb 1 14:30:57.346: INFO: Waiting for Pod statefulset-3021/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 1 14:30:57.346: INFO: Waiting for Pod statefulset-3021/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 1 14:31:07.350: INFO: Waiting for StatefulSet statefulset-3021/ss2 to complete update Feb 1 14:31:07.350: INFO: Waiting for Pod statefulset-3021/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 1 14:31:17.357: INFO: Waiting for StatefulSet statefulset-3021/ss2 to complete 
update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 1 14:31:27.348: INFO: Deleting all statefulset in ns statefulset-3021
Feb 1 14:31:27.353: INFO: Scaling statefulset ss2 to 0
Feb 1 14:31:57.394: INFO: Waiting for statefulset status.replicas updated to 0
Feb 1 14:31:57.400: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:31:57.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3021" for this suite.
Feb 1 14:32:05.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:32:05.601: INFO: namespace statefulset-3021 deletion completed in 8.135887186s
• [SLOW TEST:223.432 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:32:05.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb 1 14:32:05.790: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:32:20.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3599" for this suite.
Feb 1 14:32:26.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:32:27.083: INFO: namespace pods-3599 deletion completed in 6.174852227s
• [SLOW TEST:21.481 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:32:27.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object,
basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-5598 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5598 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5598 Feb 1 14:32:27.243: INFO: Found 0 stateful pods, waiting for 1 Feb 1 14:32:37.252: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Feb 1 14:32:37.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5598 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 1 14:32:37.834: INFO: stderr: "I0201 14:32:37.530903 1861 log.go:172] (0xc000970370) (0xc0007c4780) Create stream\nI0201 14:32:37.531079 1861 log.go:172] (0xc000970370) (0xc0007c4780) Stream added, broadcasting: 1\nI0201 14:32:37.538671 1861 log.go:172] (0xc000970370) Reply frame received for 1\nI0201 14:32:37.538721 1861 log.go:172] (0xc000970370) (0xc0007abcc0) Create stream\nI0201 14:32:37.538737 1861 log.go:172] (0xc000970370) (0xc0007abcc0) Stream added, broadcasting: 3\nI0201 14:32:37.541241 1861 log.go:172] (0xc000970370) Reply frame received for 3\nI0201 14:32:37.541308 1861 log.go:172] (0xc000970370) (0xc0006361e0) Create stream\nI0201 14:32:37.541324 1861 
log.go:172] (0xc000970370) (0xc0006361e0) Stream added, broadcasting: 5\nI0201 14:32:37.544355 1861 log.go:172] (0xc000970370) Reply frame received for 5\nI0201 14:32:37.672627 1861 log.go:172] (0xc000970370) Data frame received for 5\nI0201 14:32:37.672734 1861 log.go:172] (0xc0006361e0) (5) Data frame handling\nI0201 14:32:37.672768 1861 log.go:172] (0xc0006361e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0201 14:32:37.696294 1861 log.go:172] (0xc000970370) Data frame received for 3\nI0201 14:32:37.696336 1861 log.go:172] (0xc0007abcc0) (3) Data frame handling\nI0201 14:32:37.696351 1861 log.go:172] (0xc0007abcc0) (3) Data frame sent\nI0201 14:32:37.814403 1861 log.go:172] (0xc000970370) Data frame received for 1\nI0201 14:32:37.814537 1861 log.go:172] (0xc000970370) (0xc0007abcc0) Stream removed, broadcasting: 3\nI0201 14:32:37.814608 1861 log.go:172] (0xc0007c4780) (1) Data frame handling\nI0201 14:32:37.814630 1861 log.go:172] (0xc0007c4780) (1) Data frame sent\nI0201 14:32:37.814871 1861 log.go:172] (0xc000970370) (0xc0006361e0) Stream removed, broadcasting: 5\nI0201 14:32:37.814923 1861 log.go:172] (0xc000970370) (0xc0007c4780) Stream removed, broadcasting: 1\nI0201 14:32:37.814945 1861 log.go:172] (0xc000970370) Go away received\nI0201 14:32:37.815980 1861 log.go:172] (0xc000970370) (0xc0007c4780) Stream removed, broadcasting: 1\nI0201 14:32:37.816010 1861 log.go:172] (0xc000970370) (0xc0007abcc0) Stream removed, broadcasting: 3\nI0201 14:32:37.816027 1861 log.go:172] (0xc000970370) (0xc0006361e0) Stream removed, broadcasting: 5\n" Feb 1 14:32:37.834: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 1 14:32:37.834: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 1 14:32:37.841: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 1 14:32:47.863: INFO: Waiting for pod ss-0 
to enter Running - Ready=false, currently Running - Ready=false
Feb 1 14:32:47.863: INFO: Waiting for statefulset status.replicas updated to 0
Feb 1 14:32:47.894: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999425s
Feb 1 14:32:48.906: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.991613077s
Feb 1 14:32:49.915: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.979546048s
Feb 1 14:32:50.928: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.971201691s
Feb 1 14:32:51.937: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.95810626s
Feb 1 14:32:52.951: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.948867172s
Feb 1 14:32:53.973: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.934324144s
Feb 1 14:32:54.990: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.913160436s
Feb 1 14:32:56.009: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.895897336s
Feb 1 14:32:57.018: INFO: Verifying statefulset ss doesn't scale past 1 for another 876.833269ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5598
Feb 1 14:32:58.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5598 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 1 14:32:58.690: INFO: stderr: "I0201 14:32:58.281504 1883 log.go:172] (0xc0006de630) (0xc000628aa0) Create stream\nI0201 14:32:58.281617 1883 log.go:172] (0xc0006de630) (0xc000628aa0) Stream added, broadcasting: 1\nI0201 14:32:58.288711 1883 log.go:172] (0xc0006de630) Reply frame received for 1\nI0201 14:32:58.288755 1883 log.go:172] (0xc0006de630) (0xc00091e000) Create stream\nI0201 14:32:58.288770 1883 log.go:172] (0xc0006de630) (0xc00091e000) Stream added, broadcasting: 3\nI0201 14:32:58.290403 1883 log.go:172] (0xc0006de630) Reply frame received for
3\nI0201 14:32:58.290435 1883 log.go:172] (0xc0006de630) (0xc000628b40) Create stream\nI0201 14:32:58.290449 1883 log.go:172] (0xc0006de630) (0xc000628b40) Stream added, broadcasting: 5\nI0201 14:32:58.295712 1883 log.go:172] (0xc0006de630) Reply frame received for 5\nI0201 14:32:58.397705 1883 log.go:172] (0xc0006de630) Data frame received for 5\nI0201 14:32:58.397759 1883 log.go:172] (0xc000628b40) (5) Data frame handling\nI0201 14:32:58.397789 1883 log.go:172] (0xc000628b40) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0201 14:32:58.397877 1883 log.go:172] (0xc0006de630) Data frame received for 3\nI0201 14:32:58.397910 1883 log.go:172] (0xc00091e000) (3) Data frame handling\nI0201 14:32:58.397949 1883 log.go:172] (0xc00091e000) (3) Data frame sent\nI0201 14:32:58.656769 1883 log.go:172] (0xc0006de630) (0xc00091e000) Stream removed, broadcasting: 3\nI0201 14:32:58.657319 1883 log.go:172] (0xc0006de630) Data frame received for 1\nI0201 14:32:58.669914 1883 log.go:172] (0xc0006de630) (0xc000628b40) Stream removed, broadcasting: 5\nI0201 14:32:58.670412 1883 log.go:172] (0xc000628aa0) (1) Data frame handling\nI0201 14:32:58.670563 1883 log.go:172] (0xc000628aa0) (1) Data frame sent\nI0201 14:32:58.670649 1883 log.go:172] (0xc0006de630) (0xc000628aa0) Stream removed, broadcasting: 1\nI0201 14:32:58.670701 1883 log.go:172] (0xc0006de630) Go away received\nI0201 14:32:58.673026 1883 log.go:172] (0xc0006de630) (0xc000628aa0) Stream removed, broadcasting: 1\nI0201 14:32:58.673172 1883 log.go:172] (0xc0006de630) (0xc00091e000) Stream removed, broadcasting: 3\nI0201 14:32:58.673281 1883 log.go:172] (0xc0006de630) (0xc000628b40) Stream removed, broadcasting: 5\n" Feb 1 14:32:58.691: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 1 14:32:58.691: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 1 14:32:58.708: INFO: Found 1 stateful 
pods, waiting for 3 Feb 1 14:33:08.751: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 1 14:33:08.752: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 1 14:33:08.752: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 1 14:33:18.718: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 1 14:33:18.718: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 1 14:33:18.718: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Feb 1 14:33:18.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5598 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 1 14:33:19.285: INFO: stderr: "I0201 14:33:18.980234 1903 log.go:172] (0xc0007e8a50) (0xc0007dcc80) Create stream\nI0201 14:33:18.980300 1903 log.go:172] (0xc0007e8a50) (0xc0007dcc80) Stream added, broadcasting: 1\nI0201 14:33:18.993022 1903 log.go:172] (0xc0007e8a50) Reply frame received for 1\nI0201 14:33:18.993090 1903 log.go:172] (0xc0007e8a50) (0xc0007dc000) Create stream\nI0201 14:33:18.993103 1903 log.go:172] (0xc0007e8a50) (0xc0007dc000) Stream added, broadcasting: 3\nI0201 14:33:18.994733 1903 log.go:172] (0xc0007e8a50) Reply frame received for 3\nI0201 14:33:18.994789 1903 log.go:172] (0xc0007e8a50) (0xc0007dc0a0) Create stream\nI0201 14:33:18.994802 1903 log.go:172] (0xc0007e8a50) (0xc0007dc0a0) Stream added, broadcasting: 5\nI0201 14:33:18.997354 1903 log.go:172] (0xc0007e8a50) Reply frame received for 5\nI0201 14:33:19.138084 1903 log.go:172] (0xc0007e8a50) Data frame received for 5\nI0201 14:33:19.138184 1903 log.go:172] (0xc0007dc0a0) (5) Data frame handling\nI0201 
14:33:19.138203 1903 log.go:172] (0xc0007dc0a0) (5) Data frame sent\nI0201 14:33:19.138212 1903 log.go:172] (0xc0007e8a50) Data frame received for 3\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0201 14:33:19.138229 1903 log.go:172] (0xc0007dc000) (3) Data frame handling\nI0201 14:33:19.138239 1903 log.go:172] (0xc0007dc000) (3) Data frame sent\nI0201 14:33:19.277510 1903 log.go:172] (0xc0007e8a50) Data frame received for 1\nI0201 14:33:19.277572 1903 log.go:172] (0xc0007e8a50) (0xc0007dc000) Stream removed, broadcasting: 3\nI0201 14:33:19.277708 1903 log.go:172] (0xc0007dcc80) (1) Data frame handling\nI0201 14:33:19.277745 1903 log.go:172] (0xc0007dcc80) (1) Data frame sent\nI0201 14:33:19.277763 1903 log.go:172] (0xc0007e8a50) (0xc0007dcc80) Stream removed, broadcasting: 1\nI0201 14:33:19.278599 1903 log.go:172] (0xc0007e8a50) (0xc0007dc0a0) Stream removed, broadcasting: 5\nI0201 14:33:19.278673 1903 log.go:172] (0xc0007e8a50) (0xc0007dcc80) Stream removed, broadcasting: 1\nI0201 14:33:19.278697 1903 log.go:172] (0xc0007e8a50) (0xc0007dc000) Stream removed, broadcasting: 3\nI0201 14:33:19.278716 1903 log.go:172] (0xc0007e8a50) (0xc0007dc0a0) Stream removed, broadcasting: 5\nI0201 14:33:19.278737 1903 log.go:172] (0xc0007e8a50) Go away received\n" Feb 1 14:33:19.286: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 1 14:33:19.286: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 1 14:33:19.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5598 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 1 14:33:19.745: INFO: stderr: "I0201 14:33:19.470769 1918 log.go:172] (0xc000702a50) (0xc0005a6780) Create stream\nI0201 14:33:19.471065 1918 log.go:172] (0xc000702a50) (0xc0005a6780) Stream added, broadcasting: 1\nI0201 14:33:19.476274 1918 log.go:172] (0xc000702a50) Reply 
frame received for 1\nI0201 14:33:19.476330 1918 log.go:172] (0xc000702a50) (0xc0007ea000) Create stream\nI0201 14:33:19.476343 1918 log.go:172] (0xc000702a50) (0xc0007ea000) Stream added, broadcasting: 3\nI0201 14:33:19.477611 1918 log.go:172] (0xc000702a50) Reply frame received for 3\nI0201 14:33:19.477634 1918 log.go:172] (0xc000702a50) (0xc0005a6820) Create stream\nI0201 14:33:19.477647 1918 log.go:172] (0xc000702a50) (0xc0005a6820) Stream added, broadcasting: 5\nI0201 14:33:19.478872 1918 log.go:172] (0xc000702a50) Reply frame received for 5\nI0201 14:33:19.560784 1918 log.go:172] (0xc000702a50) Data frame received for 5\nI0201 14:33:19.560937 1918 log.go:172] (0xc0005a6820) (5) Data frame handling\nI0201 14:33:19.560970 1918 log.go:172] (0xc0005a6820) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0201 14:33:19.603675 1918 log.go:172] (0xc000702a50) Data frame received for 3\nI0201 14:33:19.603893 1918 log.go:172] (0xc0007ea000) (3) Data frame handling\nI0201 14:33:19.603961 1918 log.go:172] (0xc0007ea000) (3) Data frame sent\nI0201 14:33:19.729460 1918 log.go:172] (0xc000702a50) Data frame received for 1\nI0201 14:33:19.729656 1918 log.go:172] (0xc000702a50) (0xc0005a6820) Stream removed, broadcasting: 5\nI0201 14:33:19.729709 1918 log.go:172] (0xc000702a50) (0xc0007ea000) Stream removed, broadcasting: 3\nI0201 14:33:19.729972 1918 log.go:172] (0xc0005a6780) (1) Data frame handling\nI0201 14:33:19.729998 1918 log.go:172] (0xc0005a6780) (1) Data frame sent\nI0201 14:33:19.730006 1918 log.go:172] (0xc000702a50) (0xc0005a6780) Stream removed, broadcasting: 1\nI0201 14:33:19.730018 1918 log.go:172] (0xc000702a50) Go away received\nI0201 14:33:19.730953 1918 log.go:172] (0xc000702a50) (0xc0005a6780) Stream removed, broadcasting: 1\nI0201 14:33:19.731012 1918 log.go:172] (0xc000702a50) (0xc0007ea000) Stream removed, broadcasting: 3\nI0201 14:33:19.731029 1918 log.go:172] (0xc000702a50) (0xc0005a6820) Stream removed, broadcasting: 5\n" Feb 1 
14:33:19.746: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 1 14:33:19.746: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 1 14:33:19.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5598 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 1 14:33:20.387: INFO: stderr: "I0201 14:33:19.988122 1938 log.go:172] (0xc00077a420) (0xc000836460) Create stream\nI0201 14:33:19.988280 1938 log.go:172] (0xc00077a420) (0xc000836460) Stream added, broadcasting: 1\nI0201 14:33:19.994513 1938 log.go:172] (0xc00077a420) Reply frame received for 1\nI0201 14:33:19.994591 1938 log.go:172] (0xc00077a420) (0xc000320000) Create stream\nI0201 14:33:19.994601 1938 log.go:172] (0xc00077a420) (0xc000320000) Stream added, broadcasting: 3\nI0201 14:33:19.998914 1938 log.go:172] (0xc00077a420) Reply frame received for 3\nI0201 14:33:19.998958 1938 log.go:172] (0xc00077a420) (0xc0007a83c0) Create stream\nI0201 14:33:19.998995 1938 log.go:172] (0xc00077a420) (0xc0007a83c0) Stream added, broadcasting: 5\nI0201 14:33:20.000367 1938 log.go:172] (0xc00077a420) Reply frame received for 5\nI0201 14:33:20.119834 1938 log.go:172] (0xc00077a420) Data frame received for 5\nI0201 14:33:20.119879 1938 log.go:172] (0xc0007a83c0) (5) Data frame handling\nI0201 14:33:20.119895 1938 log.go:172] (0xc0007a83c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0201 14:33:20.178468 1938 log.go:172] (0xc00077a420) Data frame received for 3\nI0201 14:33:20.178509 1938 log.go:172] (0xc000320000) (3) Data frame handling\nI0201 14:33:20.178531 1938 log.go:172] (0xc000320000) (3) Data frame sent\nI0201 14:33:20.381262 1938 log.go:172] (0xc00077a420) (0xc000320000) Stream removed, broadcasting: 3\nI0201 14:33:20.381365 1938 log.go:172] (0xc00077a420) Data frame received for 1\nI0201 14:33:20.381385 
1938 log.go:172] (0xc000836460) (1) Data frame handling\nI0201 14:33:20.381401 1938 log.go:172] (0xc000836460) (1) Data frame sent\nI0201 14:33:20.381410 1938 log.go:172] (0xc00077a420) (0xc000836460) Stream removed, broadcasting: 1\nI0201 14:33:20.381540 1938 log.go:172] (0xc00077a420) (0xc0007a83c0) Stream removed, broadcasting: 5\nI0201 14:33:20.381623 1938 log.go:172] (0xc00077a420) Go away received\nI0201 14:33:20.381904 1938 log.go:172] (0xc00077a420) (0xc000836460) Stream removed, broadcasting: 1\nI0201 14:33:20.381923 1938 log.go:172] (0xc00077a420) (0xc000320000) Stream removed, broadcasting: 3\nI0201 14:33:20.381935 1938 log.go:172] (0xc00077a420) (0xc0007a83c0) Stream removed, broadcasting: 5\n"
Feb 1 14:33:20.387: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 1 14:33:20.387: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Feb 1 14:33:20.387: INFO: Waiting for statefulset status.replicas updated to 0
Feb 1 14:33:20.394: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 1 14:33:30.414: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 1 14:33:30.414: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 1 14:33:30.414: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 1 14:33:30.522: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999343s
Feb 1 14:33:31.538: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.926132842s
Feb 1 14:33:32.556: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.910160714s
Feb 1 14:33:33.569: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.892457895s
Feb 1 14:33:34.584: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.879312787s
Feb 1 14:33:36.103: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.864187692s
Feb 1 14:33:37.117: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.344869229s
Feb 1 14:33:38.141: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.331767456s
Feb 1 14:33:39.154: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.307059639s
Feb 1 14:33:40.168: INFO: Verifying statefulset ss doesn't scale past 3 for another 294.728291ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-5598
Feb 1 14:33:41.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5598 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 1 14:33:41.792: INFO: stderr: "I0201 14:33:41.488135 1952 log.go:172] (0xc000980630) (0xc00090e820) Create stream\nI0201 14:33:41.488341 1952 log.go:172] (0xc000980630) (0xc00090e820) Stream added, broadcasting: 1\nI0201 14:33:41.510679 1952 log.go:172] (0xc000980630) Reply frame received for 1\nI0201 14:33:41.510838 1952 log.go:172] (0xc000980630) (0xc00090e000) Create stream\nI0201 14:33:41.510883 1952 log.go:172] (0xc000980630) (0xc00090e000) Stream added, broadcasting: 3\nI0201 14:33:41.518268 1952 log.go:172] (0xc000980630) Reply frame received for 3\nI0201 14:33:41.518366 1952 log.go:172] (0xc000980630) (0xc00090e0a0) Create stream\nI0201 14:33:41.518396 1952 log.go:172] (0xc000980630) (0xc00090e0a0) Stream added, broadcasting: 5\nI0201 14:33:41.521151 1952 log.go:172] (0xc000980630) Reply frame received for 5\nI0201 14:33:41.625473 1952 log.go:172] (0xc000980630) Data frame received for 5\nI0201 14:33:41.625570 1952 log.go:172] (0xc00090e0a0) (5) Data frame handling\nI0201 14:33:41.625599 1952 log.go:172] (0xc00090e0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0201 14:33:41.626714 1952 log.go:172] (0xc000980630) Data frame received for 3\nI0201 14:33:41.626787 1952 
log.go:172] (0xc00090e000) (3) Data frame handling\nI0201 14:33:41.626824 1952 log.go:172] (0xc00090e000) (3) Data frame sent\nI0201 14:33:41.767216 1952 log.go:172] (0xc000980630) Data frame received for 1\nI0201 14:33:41.767359 1952 log.go:172] (0xc00090e820) (1) Data frame handling\nI0201 14:33:41.767392 1952 log.go:172] (0xc00090e820) (1) Data frame sent\nI0201 14:33:41.770066 1952 log.go:172] (0xc000980630) (0xc00090e820) Stream removed, broadcasting: 1\nI0201 14:33:41.770723 1952 log.go:172] (0xc000980630) (0xc00090e000) Stream removed, broadcasting: 3\nI0201 14:33:41.770930 1952 log.go:172] (0xc000980630) (0xc00090e0a0) Stream removed, broadcasting: 5\nI0201 14:33:41.770998 1952 log.go:172] (0xc000980630) Go away received\nI0201 14:33:41.771698 1952 log.go:172] (0xc000980630) (0xc00090e820) Stream removed, broadcasting: 1\nI0201 14:33:41.771729 1952 log.go:172] (0xc000980630) (0xc00090e000) Stream removed, broadcasting: 3\nI0201 14:33:41.771755 1952 log.go:172] (0xc000980630) (0xc00090e0a0) Stream removed, broadcasting: 5\n" Feb 1 14:33:41.792: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 1 14:33:41.793: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 1 14:33:41.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5598 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 14:33:42.290: INFO: stderr: "I0201 14:33:41.973346 1971 log.go:172] (0xc000924420) (0xc0008e66e0) Create stream\nI0201 14:33:41.973496 1971 log.go:172] (0xc000924420) (0xc0008e66e0) Stream added, broadcasting: 1\nI0201 14:33:41.977643 1971 log.go:172] (0xc000924420) Reply frame received for 1\nI0201 14:33:41.977697 1971 log.go:172] (0xc000924420) (0xc00059c1e0) Create stream\nI0201 14:33:41.977707 1971 log.go:172] (0xc000924420) (0xc00059c1e0) Stream added, broadcasting: 3\nI0201 
14:33:41.979043 1971 log.go:172] (0xc000924420) Reply frame received for 3\nI0201 14:33:41.979062 1971 log.go:172] (0xc000924420) (0xc0008e6780) Create stream\nI0201 14:33:41.979070 1971 log.go:172] (0xc000924420) (0xc0008e6780) Stream added, broadcasting: 5\nI0201 14:33:41.980347 1971 log.go:172] (0xc000924420) Reply frame received for 5\nI0201 14:33:42.087327 1971 log.go:172] (0xc000924420) Data frame received for 3\nI0201 14:33:42.087824 1971 log.go:172] (0xc00059c1e0) (3) Data frame handling\nI0201 14:33:42.087922 1971 log.go:172] (0xc000924420) Data frame received for 5\nI0201 14:33:42.087979 1971 log.go:172] (0xc0008e6780) (5) Data frame handling\nI0201 14:33:42.088013 1971 log.go:172] (0xc0008e6780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0201 14:33:42.088044 1971 log.go:172] (0xc00059c1e0) (3) Data frame sent\nI0201 14:33:42.273656 1971 log.go:172] (0xc000924420) (0xc00059c1e0) Stream removed, broadcasting: 3\nI0201 14:33:42.273806 1971 log.go:172] (0xc000924420) Data frame received for 1\nI0201 14:33:42.273822 1971 log.go:172] (0xc0008e66e0) (1) Data frame handling\nI0201 14:33:42.273844 1971 log.go:172] (0xc0008e66e0) (1) Data frame sent\nI0201 14:33:42.273972 1971 log.go:172] (0xc000924420) (0xc0008e66e0) Stream removed, broadcasting: 1\nI0201 14:33:42.274344 1971 log.go:172] (0xc000924420) (0xc0008e6780) Stream removed, broadcasting: 5\nI0201 14:33:42.274429 1971 log.go:172] (0xc000924420) Go away received\nI0201 14:33:42.274594 1971 log.go:172] (0xc000924420) (0xc0008e66e0) Stream removed, broadcasting: 1\nI0201 14:33:42.274653 1971 log.go:172] (0xc000924420) (0xc00059c1e0) Stream removed, broadcasting: 3\nI0201 14:33:42.274679 1971 log.go:172] (0xc000924420) (0xc0008e6780) Stream removed, broadcasting: 5\n" Feb 1 14:33:42.291: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 1 14:33:42.291: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> 
'/usr/share/nginx/html/index.html' Feb 1 14:33:42.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5598 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 14:33:43.071: INFO: stderr: "I0201 14:33:42.588982 1991 log.go:172] (0xc0008e0420) (0xc0008c6780) Create stream\nI0201 14:33:42.590038 1991 log.go:172] (0xc0008e0420) (0xc0008c6780) Stream added, broadcasting: 1\nI0201 14:33:42.611607 1991 log.go:172] (0xc0008e0420) Reply frame received for 1\nI0201 14:33:42.611731 1991 log.go:172] (0xc0008e0420) (0xc0008e6000) Create stream\nI0201 14:33:42.611756 1991 log.go:172] (0xc0008e0420) (0xc0008e6000) Stream added, broadcasting: 3\nI0201 14:33:42.614401 1991 log.go:172] (0xc0008e0420) Reply frame received for 3\nI0201 14:33:42.614506 1991 log.go:172] (0xc0008e0420) (0xc0008c6000) Create stream\nI0201 14:33:42.614531 1991 log.go:172] (0xc0008e0420) (0xc0008c6000) Stream added, broadcasting: 5\nI0201 14:33:42.617545 1991 log.go:172] (0xc0008e0420) Reply frame received for 5\nI0201 14:33:42.800549 1991 log.go:172] (0xc0008e0420) Data frame received for 5\nI0201 14:33:42.800613 1991 log.go:172] (0xc0008c6000) (5) Data frame handling\nI0201 14:33:42.800633 1991 log.go:172] (0xc0008c6000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0201 14:33:42.800680 1991 log.go:172] (0xc0008e0420) Data frame received for 3\nI0201 14:33:42.800691 1991 log.go:172] (0xc0008e6000) (3) Data frame handling\nI0201 14:33:42.800701 1991 log.go:172] (0xc0008e6000) (3) Data frame sent\nI0201 14:33:43.054531 1991 log.go:172] (0xc0008e0420) Data frame received for 1\nI0201 14:33:43.055363 1991 log.go:172] (0xc0008c6780) (1) Data frame handling\nI0201 14:33:43.055450 1991 log.go:172] (0xc0008c6780) (1) Data frame sent\nI0201 14:33:43.056529 1991 log.go:172] (0xc0008e0420) (0xc0008c6780) Stream removed, broadcasting: 1\nI0201 14:33:43.057559 1991 log.go:172] (0xc0008e0420) (0xc0008e6000) Stream 
removed, broadcasting: 3\nI0201 14:33:43.057773 1991 log.go:172] (0xc0008e0420) (0xc0008c6000) Stream removed, broadcasting: 5\nI0201 14:33:43.057869 1991 log.go:172] (0xc0008e0420) Go away received\nI0201 14:33:43.058001 1991 log.go:172] (0xc0008e0420) (0xc0008c6780) Stream removed, broadcasting: 1\nI0201 14:33:43.058018 1991 log.go:172] (0xc0008e0420) (0xc0008e6000) Stream removed, broadcasting: 3\nI0201 14:33:43.058032 1991 log.go:172] (0xc0008e0420) (0xc0008c6000) Stream removed, broadcasting: 5\n"
Feb 1 14:33:43.072: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 1 14:33:43.072: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Feb 1 14:33:43.072: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 1 14:34:13.100: INFO: Deleting all statefulset in ns statefulset-5598
Feb 1 14:34:13.103: INFO: Scaling statefulset ss to 0
Feb 1 14:34:13.112: INFO: Waiting for statefulset status.replicas updated to 0
Feb 1 14:34:13.114: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:34:13.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5598" for this suite.
Feb 1 14:34:19.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:34:19.260: INFO: namespace statefulset-5598 deletion completed in 6.107144376s
• [SLOW TEST:112.177 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:34:19.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Feb 1 14:34:19.876: INFO: created pod pod-service-account-defaultsa
Feb 1 14:34:19.876: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 1 14:34:19.891: INFO: created pod pod-service-account-mountsa
Feb 1 14:34:19.892: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 1 14:34:19.970: INFO: created pod pod-service-account-nomountsa
Feb 1 14:34:19.970: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 1 14:34:19.993: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 1 14:34:19.993: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 1 14:34:20.022: INFO: created pod pod-service-account-mountsa-mountspec
Feb 1 14:34:20.022: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 1 14:34:20.782: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 1 14:34:20.782: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 1 14:34:21.136: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 1 14:34:21.136: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 1 14:34:21.466: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 1 14:34:21.466: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 1 14:34:21.558: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 1 14:34:21.558: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:34:21.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-354" for this suite.
Feb 1 14:35:11.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:35:11.752: INFO: namespace svcaccounts-354 deletion completed in 49.988493144s
• [SLOW TEST:52.491 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:35:11.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:36:11.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8935" for this suite.
Feb 1 14:36:35.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:36:36.079: INFO: namespace container-probe-8935 deletion completed in 24.120959814s
• [SLOW TEST:84.327 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:36:36.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb 1 14:36:44.804: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1511 pod-service-account-7bee61b2-05db-494a-bd25-2d42c1ba16f1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb 1 14:36:45.309: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1511 pod-service-account-7bee61b2-05db-494a-bd25-2d42c1ba16f1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb 1 14:36:45.701: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1511 pod-service-account-7bee61b2-05db-494a-bd25-2d42c1ba16f1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:36:46.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1511" for this suite.
Feb 1 14:36:52.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:36:52.367: INFO: namespace svcaccounts-1511 deletion completed in 6.145810194s
• [SLOW TEST:16.288 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:36:52.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2348
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-2348
STEP: Creating statefulset with conflicting port in namespace statefulset-2348
STEP: Waiting until pod test-pod starts running in namespace statefulset-2348
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-2348
Feb 1 14:37:04.636: INFO: Observed stateful pod in namespace: statefulset-2348, name: ss-0, uid: 306bb8ea-2aa6-4003-b94f-b6ff1381f836, status phase: Pending. Waiting for statefulset controller to delete.
Feb 1 14:37:06.504: INFO: Observed stateful pod in namespace: statefulset-2348, name: ss-0, uid: 306bb8ea-2aa6-4003-b94f-b6ff1381f836, status phase: Failed. Waiting for statefulset controller to delete.
Feb 1 14:37:06.604: INFO: Observed stateful pod in namespace: statefulset-2348, name: ss-0, uid: 306bb8ea-2aa6-4003-b94f-b6ff1381f836, status phase: Failed. Waiting for statefulset controller to delete.
Feb 1 14:37:06.628: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2348
STEP: Removing pod with conflicting port in namespace statefulset-2348
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-2348 and is in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 1 14:37:18.832: INFO: Deleting all statefulset in ns statefulset-2348
Feb 1 14:37:18.837: INFO: Scaling statefulset ss to 0
Feb 1 14:37:28.900: INFO: Waiting for statefulset status.replicas updated to 0
Feb 1 14:37:28.905: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:37:28.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2348" for this suite.
Feb 1 14:37:35.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:37:35.176: INFO: namespace statefulset-2348 deletion completed in 6.235286634s
• [SLOW TEST:42.808 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:37:35.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-2vfb
STEP: Creating a pod to test atomic-volume-subpath
Feb 1 14:37:35.287: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-2vfb" in namespace "subpath-28" to be "success or failure"
Feb 1 14:37:35.301: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.332108ms
Feb 1 14:37:37.317: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030468239s
Feb 1 14:37:39.331: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043996526s
Feb 1 14:37:41.346: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059386713s
Feb 1 14:37:43.355: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067898845s
Feb 1 14:37:45.365: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Running", Reason="", readiness=true. Elapsed: 10.078210334s
Feb 1 14:37:47.374: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Running", Reason="", readiness=true. Elapsed: 12.087392429s
Feb 1 14:37:49.384: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Running", Reason="", readiness=true. Elapsed: 14.097583798s
Feb 1 14:37:51.399: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Running", Reason="", readiness=true. Elapsed: 16.111994778s
Feb 1 14:37:53.411: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Running", Reason="", readiness=true. Elapsed: 18.123961286s
Feb 1 14:37:55.429: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Running", Reason="", readiness=true. Elapsed: 20.142561304s
Feb 1 14:37:57.444: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Running", Reason="", readiness=true. Elapsed: 22.157671106s
Feb 1 14:37:59.457: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Running", Reason="", readiness=true. Elapsed: 24.170199218s
Feb 1 14:38:01.464: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Running", Reason="", readiness=true. Elapsed: 26.177776954s
Feb 1 14:38:03.475: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Running", Reason="", readiness=true. Elapsed: 28.187987994s
Feb 1 14:38:05.487: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.199843225s
STEP: Saw pod success
Feb 1 14:38:05.487: INFO: Pod "pod-subpath-test-secret-2vfb" satisfied condition "success or failure"
Feb 1 14:38:05.493: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-2vfb container test-container-subpath-secret-2vfb:
STEP: delete the pod
Feb 1 14:38:05.603: INFO: Waiting for pod pod-subpath-test-secret-2vfb to disappear
Feb 1 14:38:05.641: INFO: Pod pod-subpath-test-secret-2vfb no longer exists
STEP: Deleting pod pod-subpath-test-secret-2vfb
Feb 1 14:38:05.641: INFO: Deleting pod "pod-subpath-test-secret-2vfb" in namespace "subpath-28"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:38:05.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-28" for this suite.
Feb 1 14:38:11.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:38:11.953: INFO: namespace subpath-28 deletion completed in 6.296582325s
• [SLOW TEST:36.776 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:38:11.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:38:45.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6623" for this suite.
Feb 1 14:38:51.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:38:51.763: INFO: namespace namespaces-6623 deletion completed in 6.263151916s
STEP: Destroying namespace "nsdeletetest-9113" for this suite.
Feb 1 14:38:51.766: INFO: Namespace nsdeletetest-9113 was already deleted
STEP: Destroying namespace "nsdeletetest-5769" for this suite.
Feb 1 14:38:57.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:38:57.937: INFO: namespace nsdeletetest-5769 deletion completed in 6.17137061s
• [SLOW TEST:45.984 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:38:57.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-761a6b84-1fc9-4423-8886-56adf441b926 in namespace container-probe-5441
Feb 1 14:39:08.103: INFO: Started pod busybox-761a6b84-1fc9-4423-8886-56adf441b926 in namespace container-probe-5441
STEP: checking the pod's current state and verifying that restartCount is present
Feb 1 14:39:08.110: INFO: Initial restart count of pod busybox-761a6b84-1fc9-4423-8886-56adf441b926 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:43:10.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5441" for this suite.
Feb 1 14:43:16.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:43:16.215: INFO: namespace container-probe-5441 deletion completed in 6.196116236s
• [SLOW TEST:258.277 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:43:16.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Feb 1 14:43:16.285: INFO: Waiting up to 5m0s for pod "client-containers-b24a3256-1bdc-4e97-8dc8-c7bba749aef2" in namespace "containers-1931" to be "success or failure"
Feb 1 14:43:16.352: INFO: Pod "client-containers-b24a3256-1bdc-4e97-8dc8-c7bba749aef2": Phase="Pending", Reason="", readiness=false. Elapsed: 66.901907ms
Feb 1 14:43:18.366: INFO: Pod "client-containers-b24a3256-1bdc-4e97-8dc8-c7bba749aef2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081265441s
Feb 1 14:43:20.380: INFO: Pod "client-containers-b24a3256-1bdc-4e97-8dc8-c7bba749aef2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095010252s
Feb 1 14:43:22.397: INFO: Pod "client-containers-b24a3256-1bdc-4e97-8dc8-c7bba749aef2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11219053s
Feb 1 14:43:24.407: INFO: Pod "client-containers-b24a3256-1bdc-4e97-8dc8-c7bba749aef2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.121994107s
STEP: Saw pod success
Feb 1 14:43:24.407: INFO: Pod "client-containers-b24a3256-1bdc-4e97-8dc8-c7bba749aef2" satisfied condition "success or failure"
Feb 1 14:43:24.411: INFO: Trying to get logs from node iruya-node pod client-containers-b24a3256-1bdc-4e97-8dc8-c7bba749aef2 container test-container:
STEP: delete the pod
Feb 1 14:43:24.480: INFO: Waiting for pod client-containers-b24a3256-1bdc-4e97-8dc8-c7bba749aef2 to disappear
Feb 1 14:43:24.489: INFO: Pod client-containers-b24a3256-1bdc-4e97-8dc8-c7bba749aef2 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:43:24.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1931" for this suite.
Feb 1 14:43:30.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:43:30.826: INFO: namespace containers-1931 deletion completed in 6.327028993s
• [SLOW TEST:14.610 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:43:30.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 1 14:43:49.078: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 1 14:43:49.091: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 1 14:43:51.092: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 1 14:43:51.102: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 1 14:43:53.092: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 1 14:43:53.119: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 1 14:43:55.092: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 1 14:43:55.101: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 1 14:43:57.092: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 1 14:43:57.098: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 1 14:43:59.092: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 1 14:43:59.107: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 1 14:44:01.093: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 1 14:44:01.110: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 1 14:44:03.092: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 1 14:44:03.102: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 1 14:44:05.092: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 1 14:44:05.132: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 1 14:44:07.092: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 1 14:44:07.102: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 1 14:44:09.092: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 1 14:44:09.102: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 1 14:44:11.092: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 1 14:44:11.100: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 1 14:44:13.092: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 1 14:44:13.101: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 1 14:44:15.092: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 1 14:44:15.101: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 1 14:44:17.092: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 1 14:44:17.102: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:44:17.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1126" for this suite.
Feb 1 14:44:39.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:44:39.304: INFO: namespace container-lifecycle-hook-1126 deletion completed in 22.160621598s
• [SLOW TEST:68.477 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:44:39.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 1 14:44:39.465: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 1 14:44:39.473: INFO: Waiting for terminating namespaces to be deleted...
Feb 1 14:44:39.475: INFO: Logging pods the kubelet thinks is on node iruya-node before test
Feb 1 14:44:39.489: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 1 14:44:39.489: INFO: Container weave ready: true, restart count 0
Feb 1 14:44:39.489: INFO: Container weave-npc ready: true, restart count 0
Feb 1 14:44:39.489: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 1 14:44:39.489: INFO: Container kube-proxy ready: true, restart count 0
Feb 1 14:44:39.489: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb 1 14:44:39.503: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 1 14:44:39.503: INFO: Container kube-apiserver ready: true, restart count 0
Feb 1 14:44:39.503: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 1 14:44:39.503: INFO: Container kube-scheduler ready: true, restart count 13
Feb 1 14:44:39.503: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 1 14:44:39.503: INFO: Container coredns ready: true, restart count 0
Feb 1 14:44:39.503: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 1 14:44:39.503: INFO: Container etcd ready: true, restart count 0
Feb 1 14:44:39.503: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 1 14:44:39.503: INFO: Container weave ready: true, restart count 0
Feb 1 14:44:39.503: INFO: Container weave-npc ready: true, restart count 0
Feb 1 14:44:39.503: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 1 14:44:39.503: INFO: Container coredns ready: true, restart count 0
Feb 1 14:44:39.504: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 1 14:44:39.504: INFO: Container kube-controller-manager ready: true, restart count 19
Feb 1 14:44:39.504: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 1 14:44:39.504: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15ef4e79df84b2c2], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:44:40.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1596" for this suite.
Feb 1 14:44:46.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:44:46.769: INFO: namespace sched-pred-1596 deletion completed in 6.167656056s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:7.465 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:44:46.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-920ef66c-3608-459c-ba3e-3b37f136e047 in namespace container-probe-3207
Feb 1 14:44:54.964: INFO: Started pod liveness-920ef66c-3608-459c-ba3e-3b37f136e047 in namespace container-probe-3207
STEP: checking the pod's current state and verifying that restartCount is present
Feb 1 14:44:54.969: INFO: Initial restart count of pod liveness-920ef66c-3608-459c-ba3e-3b37f136e047 is 0
Feb 1 14:45:13.054: INFO: Restart count of pod container-probe-3207/liveness-920ef66c-3608-459c-ba3e-3b37f136e047 is now 1 (18.085502076s elapsed)
Feb 1 14:45:33.188: INFO: Restart count of pod container-probe-3207/liveness-920ef66c-3608-459c-ba3e-3b37f136e047 is now 2 (38.219567901s elapsed)
Feb 1 14:45:51.290: INFO: Restart count of pod container-probe-3207/liveness-920ef66c-3608-459c-ba3e-3b37f136e047 is now 3 (56.320854684s elapsed)
Feb 1 14:46:11.396: INFO: Restart count of pod container-probe-3207/liveness-920ef66c-3608-459c-ba3e-3b37f136e047 is now 4 (1m16.427517044s elapsed)
Feb 1 14:47:15.792: INFO: Restart count of pod container-probe-3207/liveness-920ef66c-3608-459c-ba3e-3b37f136e047 is now 5 (2m20.823233713s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:47:15.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3207" for this suite.
Feb 1 14:47:21.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:47:22.017: INFO: namespace container-probe-3207 deletion completed in 6.169097247s
• [SLOW TEST:155.248 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:47:22.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-3715
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3715 to expose endpoints map[]
Feb 1 14:47:22.222: INFO: Get endpoints failed (13.653814ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb 1 14:47:23.229: INFO: successfully validated that service endpoint-test2 in namespace services-3715 exposes endpoints map[] (1.020503647s elapsed)
STEP: Creating pod pod1 in namespace services-3715
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3715 to expose endpoints map[pod1:[80]]
Feb 1 14:47:27.398: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.15620306s elapsed, will retry)
Feb 1 14:47:32.479: INFO: successfully validated that service endpoint-test2 in namespace services-3715 exposes endpoints map[pod1:[80]] (9.237614729s elapsed)
STEP: Creating pod pod2 in namespace services-3715
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3715 to expose endpoints map[pod1:[80] pod2:[80]]
Feb 1 14:47:37.600: INFO: Unexpected endpoints: found map[2089842d-72d5-4e2a-b5fc-d024a994c1f5:[80]], expected map[pod1:[80] pod2:[80]] (5.113963148s elapsed, will retry)
Feb 1 14:47:41.692: INFO: successfully validated that service endpoint-test2 in namespace services-3715 exposes endpoints map[pod1:[80] pod2:[80]] (9.206072874s elapsed)
STEP: Deleting pod pod1 in namespace services-3715
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3715 to expose endpoints map[pod2:[80]]
Feb 1 14:47:42.752: INFO: successfully validated that service endpoint-test2 in namespace services-3715 exposes endpoints map[pod2:[80]] (1.05231937s elapsed)
STEP: Deleting pod pod2 in namespace services-3715
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3715 to expose endpoints map[]
Feb 1 14:47:42.848: INFO: successfully validated that service endpoint-test2 in namespace services-3715 exposes endpoints map[] (70.144631ms elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:47:42.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3715" for this suite.
Feb 1 14:48:05.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:48:05.171: INFO: namespace services-3715 deletion completed in 22.160528164s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:43.153 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:48:05.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-2464f322-4da3-4009-bb11-5f6f3b6b2d1d
STEP: Creating configMap with name cm-test-opt-upd-c544d439-0374-4904-b197-c0c150e37237
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-2464f322-4da3-4009-bb11-5f6f3b6b2d1d
STEP: Updating configmap cm-test-opt-upd-c544d439-0374-4904-b197-c0c150e37237
STEP: Creating configMap with name cm-test-opt-create-41193e15-c6da-46e5-9cb8-69ffee52bf17
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:49:52.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9662" for this suite.
Feb 1 14:50:14.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:50:14.175: INFO: namespace configmap-9662 deletion completed in 22.125285867s
• [SLOW TEST:129.003 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:50:14.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 1 14:50:14.242: INFO: Waiting up to 5m0s for pod "downwardapi-volume-337d817e-a56b-4f99-a3d9-d95f3fc20816" in namespace "projected-9013" to be "success or failure"
Feb 1 14:50:14.247: INFO: Pod "downwardapi-volume-337d817e-a56b-4f99-a3d9-d95f3fc20816": Phase="Pending", Reason="", readiness=false. Elapsed: 4.881072ms
Feb 1 14:50:16.256: INFO: Pod "downwardapi-volume-337d817e-a56b-4f99-a3d9-d95f3fc20816": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014225865s
Feb 1 14:50:18.298: INFO: Pod "downwardapi-volume-337d817e-a56b-4f99-a3d9-d95f3fc20816": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055763073s
Feb 1 14:50:20.308: INFO: Pod "downwardapi-volume-337d817e-a56b-4f99-a3d9-d95f3fc20816": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06605556s
Feb 1 14:50:22.330: INFO: Pod "downwardapi-volume-337d817e-a56b-4f99-a3d9-d95f3fc20816": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087915477s
Feb 1 14:50:24.346: INFO: Pod "downwardapi-volume-337d817e-a56b-4f99-a3d9-d95f3fc20816": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.103683341s
STEP: Saw pod success
Feb 1 14:50:24.346: INFO: Pod "downwardapi-volume-337d817e-a56b-4f99-a3d9-d95f3fc20816" satisfied condition "success or failure"
Feb 1 14:50:24.351: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-337d817e-a56b-4f99-a3d9-d95f3fc20816 container client-container:
STEP: delete the pod
Feb 1 14:50:24.463: INFO: Waiting for pod downwardapi-volume-337d817e-a56b-4f99-a3d9-d95f3fc20816 to disappear
Feb 1 14:50:24.492: INFO: Pod downwardapi-volume-337d817e-a56b-4f99-a3d9-d95f3fc20816 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:50:24.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9013" for this suite.
Feb 1 14:50:30.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:50:30.796: INFO: namespace projected-9013 deletion completed in 6.291335036s
• [SLOW TEST:16.620 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:50:30.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 1 14:50:30.954: INFO: Waiting up to 5m0s for pod "pod-916cb9fe-99bd-4de8-92f3-8f573fa8464e" in namespace "emptydir-4077" to be "success or failure"
Feb 1 14:50:30.961: INFO: Pod "pod-916cb9fe-99bd-4de8-92f3-8f573fa8464e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.561558ms
Feb 1 14:50:32.975: INFO: Pod "pod-916cb9fe-99bd-4de8-92f3-8f573fa8464e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020908025s
Feb 1 14:50:35.040: INFO: Pod "pod-916cb9fe-99bd-4de8-92f3-8f573fa8464e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085286594s
Feb 1 14:50:37.668: INFO: Pod "pod-916cb9fe-99bd-4de8-92f3-8f573fa8464e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.713786006s
Feb 1 14:50:39.685: INFO: Pod "pod-916cb9fe-99bd-4de8-92f3-8f573fa8464e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.730764479s
Feb 1 14:50:41.695: INFO: Pod "pod-916cb9fe-99bd-4de8-92f3-8f573fa8464e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.740597807s
STEP: Saw pod success
Feb 1 14:50:41.695: INFO: Pod "pod-916cb9fe-99bd-4de8-92f3-8f573fa8464e" satisfied condition "success or failure"
Feb 1 14:50:41.700: INFO: Trying to get logs from node iruya-node pod pod-916cb9fe-99bd-4de8-92f3-8f573fa8464e container test-container:
STEP: delete the pod
Feb 1 14:50:41.812: INFO: Waiting for pod pod-916cb9fe-99bd-4de8-92f3-8f573fa8464e to disappear
Feb 1 14:50:41.888: INFO: Pod pod-916cb9fe-99bd-4de8-92f3-8f573fa8464e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:50:41.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4077" for this suite.
Feb 1 14:50:47.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:50:48.067: INFO: namespace emptydir-4077 deletion completed in 6.169135022s
• [SLOW TEST:17.270 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:50:48.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-56f34fed-269c-4ae2-b9ce-0ca403f5bc85
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:50:48.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1654" for this suite.
Feb 1 14:50:54.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:50:54.346: INFO: namespace secrets-1654 deletion completed in 6.206166902s

• [SLOW TEST:6.279 seconds]
[sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:50:54.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-5f3d8224-d77c-4bda-98ea-0570c4de4467
STEP: Creating a pod to test consume secrets
Feb 1 14:50:54.487: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e77d400e-fabe-4b6d-9a43-3a48e8ad0f78" in namespace "projected-1703" to be "success or failure"
Feb 1 14:50:54.497: INFO: Pod "pod-projected-secrets-e77d400e-fabe-4b6d-9a43-3a48e8ad0f78": Phase="Pending", Reason="", readiness=false. Elapsed: 9.000065ms
Feb 1 14:50:56.511: INFO: Pod "pod-projected-secrets-e77d400e-fabe-4b6d-9a43-3a48e8ad0f78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023576351s
Feb 1 14:50:58.531: INFO: Pod "pod-projected-secrets-e77d400e-fabe-4b6d-9a43-3a48e8ad0f78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04321812s
Feb 1 14:51:00.548: INFO: Pod "pod-projected-secrets-e77d400e-fabe-4b6d-9a43-3a48e8ad0f78": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060010053s
Feb 1 14:51:02.561: INFO: Pod "pod-projected-secrets-e77d400e-fabe-4b6d-9a43-3a48e8ad0f78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073482084s
STEP: Saw pod success
Feb 1 14:51:02.561: INFO: Pod "pod-projected-secrets-e77d400e-fabe-4b6d-9a43-3a48e8ad0f78" satisfied condition "success or failure"
Feb 1 14:51:02.568: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-e77d400e-fabe-4b6d-9a43-3a48e8ad0f78 container projected-secret-volume-test:
STEP: delete the pod
Feb 1 14:51:02.704: INFO: Waiting for pod pod-projected-secrets-e77d400e-fabe-4b6d-9a43-3a48e8ad0f78 to disappear
Feb 1 14:51:02.731: INFO: Pod pod-projected-secrets-e77d400e-fabe-4b6d-9a43-3a48e8ad0f78 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:51:02.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1703" for this suite.
Feb 1 14:51:10.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:51:10.859: INFO: namespace projected-1703 deletion completed in 8.121296587s

• [SLOW TEST:16.512 seconds]
[sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:51:10.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8630
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 1 14:51:10.943: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 1 14:51:51.122: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8630 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 1 14:51:51.122: INFO: >>> kubeConfig: /root/.kube/config
I0201 14:51:51.194515 8 log.go:172] (0xc0008be420) (0xc001b58f00) Create stream
I0201 14:51:51.194695 8 log.go:172] (0xc0008be420) (0xc001b58f00) Stream added, broadcasting: 1
I0201 14:51:51.203812 8 log.go:172] (0xc0008be420) Reply frame received for 1
I0201 14:51:51.203879 8 log.go:172] (0xc0008be420) (0xc00246a820) Create stream
I0201 14:51:51.203897 8 log.go:172] (0xc0008be420) (0xc00246a820) Stream added, broadcasting: 3
I0201 14:51:51.206023 8 log.go:172] (0xc0008be420) Reply frame received for 3
I0201 14:51:51.206048 8 log.go:172] (0xc0008be420) (0xc001b590e0) Create stream
I0201 14:51:51.206066 8 log.go:172] (0xc0008be420) (0xc001b590e0) Stream added, broadcasting: 5
I0201 14:51:51.216192 8 log.go:172] (0xc0008be420) Reply frame received for 5
I0201 14:51:52.481057 8 log.go:172] (0xc0008be420) Data frame received for 3
I0201 14:51:52.481229 8 log.go:172] (0xc00246a820) (3) Data frame handling
I0201 14:51:52.481279 8 log.go:172] (0xc00246a820) (3) Data frame sent
I0201 14:51:52.758176 8 log.go:172] (0xc0008be420) Data frame received for 1
I0201 14:51:52.758336 8 log.go:172] (0xc0008be420) (0xc00246a820) Stream removed, broadcasting: 3
I0201 14:51:52.758502 8 log.go:172] (0xc001b58f00) (1) Data frame handling
I0201 14:51:52.758537 8 log.go:172] (0xc001b58f00) (1) Data frame sent
I0201 14:51:52.758582 8 log.go:172] (0xc0008be420) (0xc001b58f00) Stream removed, broadcasting: 1
I0201 14:51:52.759492 8 log.go:172] (0xc0008be420) (0xc001b590e0) Stream removed, broadcasting: 5
I0201 14:51:52.759609 8 log.go:172] (0xc0008be420) (0xc001b58f00) Stream removed, broadcasting: 1
I0201 14:51:52.759634 8 log.go:172] (0xc0008be420) (0xc00246a820) Stream removed, broadcasting: 3
I0201 14:51:52.759647 8 log.go:172] (0xc0008be420) (0xc001b590e0) Stream removed, broadcasting: 5
Feb 1 14:51:52.760: INFO: Found all expected endpoints: [netserver-0]
Feb 1 14:51:52.770: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8630 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 1 14:51:52.770: INFO: >>> kubeConfig: /root/.kube/config
I0201 14:51:52.837236 8 log.go:172] (0xc000240d10) (0xc00246ab40) Create stream
I0201 14:51:52.837352 8 log.go:172] (0xc000240d10) (0xc00246ab40) Stream added, broadcasting: 1
I0201 14:51:52.848360 8 log.go:172] (0xc000240d10) Reply frame received for 1
I0201 14:51:52.848492 8 log.go:172] (0xc000240d10) (0xc002f96000) Create stream
I0201 14:51:52.848524 8 log.go:172] (0xc000240d10) (0xc002f96000) Stream added, broadcasting: 3
I0201 14:51:52.855708 8 log.go:172] (0xc000240d10) Reply frame received for 3
I0201 14:51:52.855851 8 log.go:172] (0xc000240d10) (0xc0012aa8c0) Create stream
I0201 14:51:52.855875 8 log.go:172] (0xc000240d10) (0xc0012aa8c0) Stream added, broadcasting: 5
I0201 14:51:52.860454 8 log.go:172] (0xc000240d10) Reply frame received for 5
I0201 14:51:54.010376 8 log.go:172] (0xc000240d10) Data frame received for 3
I0201 14:51:54.010515 8 log.go:172] (0xc002f96000) (3) Data frame handling
I0201 14:51:54.010661 8 log.go:172] (0xc002f96000) (3) Data frame sent
I0201 14:51:54.203947 8 log.go:172] (0xc000240d10) Data frame received for 1
I0201 14:51:54.204188 8 log.go:172] (0xc00246ab40) (1) Data frame handling
I0201 14:51:54.204266 8 log.go:172] (0xc00246ab40) (1) Data frame sent
I0201 14:51:54.205782 8 log.go:172] (0xc000240d10) (0xc00246ab40) Stream removed, broadcasting: 1
I0201 14:51:54.207004 8 log.go:172] (0xc000240d10) (0xc002f96000) Stream removed, broadcasting: 3
I0201 14:51:54.208811 8 log.go:172] (0xc000240d10) (0xc0012aa8c0) Stream removed, broadcasting: 5
I0201 14:51:54.208957 8 log.go:172] (0xc000240d10) (0xc00246ab40) Stream removed, broadcasting: 1
I0201 14:51:54.211388 8 log.go:172] (0xc000240d10) (0xc002f96000) Stream removed, broadcasting: 3
I0201 14:51:54.211523 8 log.go:172] (0xc000240d10) (0xc0012aa8c0) Stream removed, broadcasting: 5
I0201 14:51:54.211627 8 log.go:172] (0xc000240d10) Go away received
Feb 1 14:51:54.211: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:51:54.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8630" for this suite.
Feb 1 14:52:18.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:52:18.433: INFO: namespace pod-network-test-8630 deletion completed in 24.204304529s

• [SLOW TEST:67.573 seconds]
[sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:52:18.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 1 14:52:18.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5235'
Feb 1 14:52:20.647: INFO: stderr: ""
Feb 1 14:52:20.647: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Feb 1 14:52:20.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-5235'
Feb 1 14:52:24.930: INFO: stderr: ""
Feb 1 14:52:24.931: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:52:24.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5235" for this suite.
Feb 1 14:52:31.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:52:31.151: INFO: namespace kubectl-5235 deletion completed in 6.156151944s

• [SLOW TEST:12.717 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:52:31.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-76320f83-f504-4219-b7dd-31c7d4b4bf41
STEP: Creating a pod to test consume configMaps
Feb 1 14:52:31.233: INFO: Waiting up to 5m0s for pod "pod-configmaps-6f05833e-e3db-49c6-a1e9-4c9dffb5dd74" in namespace "configmap-7875" to be "success or failure"
Feb 1 14:52:31.278: INFO: Pod "pod-configmaps-6f05833e-e3db-49c6-a1e9-4c9dffb5dd74": Phase="Pending", Reason="", readiness=false. Elapsed: 45.573629ms
Feb 1 14:52:33.290: INFO: Pod "pod-configmaps-6f05833e-e3db-49c6-a1e9-4c9dffb5dd74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057259929s
Feb 1 14:52:36.545: INFO: Pod "pod-configmaps-6f05833e-e3db-49c6-a1e9-4c9dffb5dd74": Phase="Pending", Reason="", readiness=false. Elapsed: 5.312000464s
Feb 1 14:52:38.563: INFO: Pod "pod-configmaps-6f05833e-e3db-49c6-a1e9-4c9dffb5dd74": Phase="Pending", Reason="", readiness=false. Elapsed: 7.330221443s
Feb 1 14:52:40.578: INFO: Pod "pod-configmaps-6f05833e-e3db-49c6-a1e9-4c9dffb5dd74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.344848342s
STEP: Saw pod success
Feb 1 14:52:40.578: INFO: Pod "pod-configmaps-6f05833e-e3db-49c6-a1e9-4c9dffb5dd74" satisfied condition "success or failure"
Feb 1 14:52:40.585: INFO: Trying to get logs from node iruya-node pod pod-configmaps-6f05833e-e3db-49c6-a1e9-4c9dffb5dd74 container configmap-volume-test:
STEP: delete the pod
Feb 1 14:52:40.691: INFO: Waiting for pod pod-configmaps-6f05833e-e3db-49c6-a1e9-4c9dffb5dd74 to disappear
Feb 1 14:52:40.729: INFO: Pod pod-configmaps-6f05833e-e3db-49c6-a1e9-4c9dffb5dd74 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:52:40.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7875" for this suite.
Feb 1 14:52:46.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:52:46.930: INFO: namespace configmap-7875 deletion completed in 6.180136978s

• [SLOW TEST:15.779 seconds]
[sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:52:46.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 1 14:52:47.064: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a7c133f7-bab4-41e7-952b-550c5e07a06f" in namespace "projected-4013" to be "success or failure"
Feb 1 14:52:47.093: INFO: Pod "downwardapi-volume-a7c133f7-bab4-41e7-952b-550c5e07a06f": Phase="Pending", Reason="", readiness=false. Elapsed: 29.54481ms
Feb 1 14:52:49.103: INFO: Pod "downwardapi-volume-a7c133f7-bab4-41e7-952b-550c5e07a06f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039301783s
Feb 1 14:52:51.112: INFO: Pod "downwardapi-volume-a7c133f7-bab4-41e7-952b-550c5e07a06f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047813004s
Feb 1 14:52:53.120: INFO: Pod "downwardapi-volume-a7c133f7-bab4-41e7-952b-550c5e07a06f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056181123s
Feb 1 14:52:55.132: INFO: Pod "downwardapi-volume-a7c133f7-bab4-41e7-952b-550c5e07a06f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.067888116s
STEP: Saw pod success
Feb 1 14:52:55.132: INFO: Pod "downwardapi-volume-a7c133f7-bab4-41e7-952b-550c5e07a06f" satisfied condition "success or failure"
Feb 1 14:52:55.134: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a7c133f7-bab4-41e7-952b-550c5e07a06f container client-container:
STEP: delete the pod
Feb 1 14:52:55.211: INFO: Waiting for pod downwardapi-volume-a7c133f7-bab4-41e7-952b-550c5e07a06f to disappear
Feb 1 14:52:55.296: INFO: Pod downwardapi-volume-a7c133f7-bab4-41e7-952b-550c5e07a06f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:52:55.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4013" for this suite.
Feb 1 14:53:01.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:53:01.493: INFO: namespace projected-4013 deletion completed in 6.181915376s

• [SLOW TEST:14.562 seconds]
[sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:53:01.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-11296d93-87f4-4607-8a42-594249954921
STEP: Creating a pod to test consume secrets
Feb 1 14:53:01.604: INFO: Waiting up to 5m0s for pod "pod-secrets-d524e3ae-c81d-49d7-9bbb-c15bad33bb07" in namespace "secrets-7781" to be "success or failure"
Feb 1 14:53:01.635: INFO: Pod "pod-secrets-d524e3ae-c81d-49d7-9bbb-c15bad33bb07": Phase="Pending", Reason="", readiness=false. Elapsed: 30.85856ms
Feb 1 14:53:03.654: INFO: Pod "pod-secrets-d524e3ae-c81d-49d7-9bbb-c15bad33bb07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050070755s
Feb 1 14:53:05.674: INFO: Pod "pod-secrets-d524e3ae-c81d-49d7-9bbb-c15bad33bb07": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06973362s
Feb 1 14:53:08.372: INFO: Pod "pod-secrets-d524e3ae-c81d-49d7-9bbb-c15bad33bb07": Phase="Pending", Reason="", readiness=false. Elapsed: 6.767777239s
Feb 1 14:53:10.389: INFO: Pod "pod-secrets-d524e3ae-c81d-49d7-9bbb-c15bad33bb07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.785212544s
STEP: Saw pod success
Feb 1 14:53:10.389: INFO: Pod "pod-secrets-d524e3ae-c81d-49d7-9bbb-c15bad33bb07" satisfied condition "success or failure"
Feb 1 14:53:10.396: INFO: Trying to get logs from node iruya-node pod pod-secrets-d524e3ae-c81d-49d7-9bbb-c15bad33bb07 container secret-env-test:
STEP: delete the pod
Feb 1 14:53:10.519: INFO: Waiting for pod pod-secrets-d524e3ae-c81d-49d7-9bbb-c15bad33bb07 to disappear
Feb 1 14:53:10.524: INFO: Pod pod-secrets-d524e3ae-c81d-49d7-9bbb-c15bad33bb07 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:53:10.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7781" for this suite.
Feb 1 14:53:16.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:53:16.669: INFO: namespace secrets-7781 deletion completed in 6.140363215s

• [SLOW TEST:15.176 seconds]
[sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:53:16.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 1 14:53:16.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4657'
Feb 1 14:53:16.899: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 1 14:53:16.900: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Feb 1 14:53:16.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4657'
Feb 1 14:53:17.176: INFO: stderr: ""
Feb 1 14:53:17.176: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:53:17.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4657" for this suite.
Feb 1 14:53:23.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:53:23.365: INFO: namespace kubectl-4657 deletion completed in 6.184408239s

• [SLOW TEST:6.696 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:53:23.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-f9f6efea-e0b6-44e8-a15b-82cb07c601f6
STEP: Creating a pod to test consume secrets
Feb 1 14:53:23.457: INFO: Waiting up to 5m0s for pod "pod-secrets-af610702-58a6-4cd8-8055-4cdf27507d34" in namespace "secrets-7574" to be "success or failure"
Feb 1 14:53:23.464: INFO: Pod "pod-secrets-af610702-58a6-4cd8-8055-4cdf27507d34": Phase="Pending", Reason="", readiness=false. Elapsed: 5.917285ms
Feb 1 14:53:25.473: INFO: Pod "pod-secrets-af610702-58a6-4cd8-8055-4cdf27507d34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01535977s
Feb 1 14:53:27.482: INFO: Pod "pod-secrets-af610702-58a6-4cd8-8055-4cdf27507d34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02443103s
Feb 1 14:53:29.492: INFO: Pod "pod-secrets-af610702-58a6-4cd8-8055-4cdf27507d34": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034865171s
Feb 1 14:53:31.501: INFO: Pod "pod-secrets-af610702-58a6-4cd8-8055-4cdf27507d34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.042982052s
STEP: Saw pod success
Feb 1 14:53:31.501: INFO: Pod "pod-secrets-af610702-58a6-4cd8-8055-4cdf27507d34" satisfied condition "success or failure"
Feb 1 14:53:31.506: INFO: Trying to get logs from node iruya-node pod pod-secrets-af610702-58a6-4cd8-8055-4cdf27507d34 container secret-volume-test:
STEP: delete the pod
Feb 1 14:53:31.612: INFO: Waiting for pod pod-secrets-af610702-58a6-4cd8-8055-4cdf27507d34 to disappear
Feb 1 14:53:31.629: INFO: Pod pod-secrets-af610702-58a6-4cd8-8055-4cdf27507d34 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:53:31.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7574" for this suite.
Feb 1 14:53:37.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:53:37.844: INFO: namespace secrets-7574 deletion completed in 6.209175821s

• [SLOW TEST:14.478 seconds]
[sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:53:37.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 1 14:53:37.969: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af346298-cf0f-49d4-8f18-45c2e47b5f8f" in namespace "projected-2816" to be "success or failure"
Feb 1 14:53:37.978: INFO: Pod "downwardapi-volume-af346298-cf0f-49d4-8f18-45c2e47b5f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.369772ms
Feb 1 14:53:40.255: INFO: Pod "downwardapi-volume-af346298-cf0f-49d4-8f18-45c2e47b5f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286304803s
Feb 1 14:53:42.270: INFO: Pod "downwardapi-volume-af346298-cf0f-49d4-8f18-45c2e47b5f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.301547406s
Feb 1 14:53:44.280: INFO: Pod "downwardapi-volume-af346298-cf0f-49d4-8f18-45c2e47b5f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.31092957s
Feb 1 14:53:46.288: INFO: Pod "downwardapi-volume-af346298-cf0f-49d4-8f18-45c2e47b5f8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.31970379s
STEP: Saw pod success
Feb 1 14:53:46.289: INFO: Pod "downwardapi-volume-af346298-cf0f-49d4-8f18-45c2e47b5f8f" satisfied condition "success or failure"
Feb 1 14:53:46.297: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-af346298-cf0f-49d4-8f18-45c2e47b5f8f container client-container:
STEP: delete the pod
Feb 1 14:53:46.350: INFO: Waiting for pod downwardapi-volume-af346298-cf0f-49d4-8f18-45c2e47b5f8f to disappear
Feb 1 14:53:46.365: INFO: Pod downwardapi-volume-af346298-cf0f-49d4-8f18-45c2e47b5f8f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:53:46.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2816" for this suite.
Feb 1 14:53:52.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:53:52.681: INFO: namespace projected-2816 deletion completed in 6.187463763s

• [SLOW TEST:14.837 seconds]
[sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:53:52.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:53:52.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7431" for this suite.
Feb 1 14:53:59.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:53:59.298: INFO: namespace kubelet-test-7431 deletion completed in 6.276683812s

• [SLOW TEST:6.615 seconds]
[k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:53:59.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 1 14:53:59.455: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f2317fe9-770a-4e8d-a449-f1ea724c76b4" in namespace "projected-5581" to be "success or failure"
Feb 1 14:53:59.482: INFO: Pod "downwardapi-volume-f2317fe9-770a-4e8d-a449-f1ea724c76b4": Phase="Pending", Reason="", readiness=false. Elapsed: 27.246678ms
Feb 1 14:54:01.497: INFO: Pod "downwardapi-volume-f2317fe9-770a-4e8d-a449-f1ea724c76b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041927488s
Feb 1 14:54:03.503: INFO: Pod "downwardapi-volume-f2317fe9-770a-4e8d-a449-f1ea724c76b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048210291s
Feb 1 14:54:05.511: INFO: Pod "downwardapi-volume-f2317fe9-770a-4e8d-a449-f1ea724c76b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056595306s
Feb 1 14:54:07.521: INFO: Pod "downwardapi-volume-f2317fe9-770a-4e8d-a449-f1ea724c76b4": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 8.065974458s STEP: Saw pod success Feb 1 14:54:07.521: INFO: Pod "downwardapi-volume-f2317fe9-770a-4e8d-a449-f1ea724c76b4" satisfied condition "success or failure" Feb 1 14:54:07.525: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f2317fe9-770a-4e8d-a449-f1ea724c76b4 container client-container: STEP: delete the pod Feb 1 14:54:07.926: INFO: Waiting for pod downwardapi-volume-f2317fe9-770a-4e8d-a449-f1ea724c76b4 to disappear Feb 1 14:54:07.933: INFO: Pod downwardapi-volume-f2317fe9-770a-4e8d-a449-f1ea724c76b4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 14:54:07.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5581" for this suite. Feb 1 14:54:13.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 14:54:14.080: INFO: namespace projected-5581 deletion completed in 6.138257281s • [SLOW TEST:14.782 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 14:54:14.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 1 14:54:14.195: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d17cb188-0135-4b72-9c43-ea8a29a1d21a" in namespace "downward-api-9714" to be "success or failure" Feb 1 14:54:14.217: INFO: Pod "downwardapi-volume-d17cb188-0135-4b72-9c43-ea8a29a1d21a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.883388ms Feb 1 14:54:16.231: INFO: Pod "downwardapi-volume-d17cb188-0135-4b72-9c43-ea8a29a1d21a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035629787s Feb 1 14:54:18.240: INFO: Pod "downwardapi-volume-d17cb188-0135-4b72-9c43-ea8a29a1d21a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044787732s Feb 1 14:54:20.247: INFO: Pod "downwardapi-volume-d17cb188-0135-4b72-9c43-ea8a29a1d21a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05150361s Feb 1 14:54:22.254: INFO: Pod "downwardapi-volume-d17cb188-0135-4b72-9c43-ea8a29a1d21a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.05788988s STEP: Saw pod success Feb 1 14:54:22.254: INFO: Pod "downwardapi-volume-d17cb188-0135-4b72-9c43-ea8a29a1d21a" satisfied condition "success or failure" Feb 1 14:54:22.257: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d17cb188-0135-4b72-9c43-ea8a29a1d21a container client-container: STEP: delete the pod Feb 1 14:54:22.396: INFO: Waiting for pod downwardapi-volume-d17cb188-0135-4b72-9c43-ea8a29a1d21a to disappear Feb 1 14:54:22.412: INFO: Pod downwardapi-volume-d17cb188-0135-4b72-9c43-ea8a29a1d21a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 14:54:22.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9714" for this suite. Feb 1 14:54:28.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 14:54:28.656: INFO: namespace downward-api-9714 deletion completed in 6.225534009s • [SLOW TEST:14.576 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 14:54:28.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 1 14:54:28.826: INFO: Waiting up to 5m0s for pod "downwardapi-volume-58857c10-259f-4188-8640-6a9c40c92a37" in namespace "downward-api-8970" to be "success or failure" Feb 1 14:54:28.837: INFO: Pod "downwardapi-volume-58857c10-259f-4188-8640-6a9c40c92a37": Phase="Pending", Reason="", readiness=false. Elapsed: 11.514009ms Feb 1 14:54:30.846: INFO: Pod "downwardapi-volume-58857c10-259f-4188-8640-6a9c40c92a37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019991179s Feb 1 14:54:32.876: INFO: Pod "downwardapi-volume-58857c10-259f-4188-8640-6a9c40c92a37": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05067147s Feb 1 14:54:34.891: INFO: Pod "downwardapi-volume-58857c10-259f-4188-8640-6a9c40c92a37": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06495042s Feb 1 14:54:36.903: INFO: Pod "downwardapi-volume-58857c10-259f-4188-8640-6a9c40c92a37": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.07751124s STEP: Saw pod success Feb 1 14:54:36.903: INFO: Pod "downwardapi-volume-58857c10-259f-4188-8640-6a9c40c92a37" satisfied condition "success or failure" Feb 1 14:54:36.910: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-58857c10-259f-4188-8640-6a9c40c92a37 container client-container: STEP: delete the pod Feb 1 14:54:37.022: INFO: Waiting for pod downwardapi-volume-58857c10-259f-4188-8640-6a9c40c92a37 to disappear Feb 1 14:54:37.036: INFO: Pod downwardapi-volume-58857c10-259f-4188-8640-6a9c40c92a37 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 14:54:37.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8970" for this suite. Feb 1 14:54:43.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 14:54:43.216: INFO: namespace downward-api-8970 deletion completed in 6.17261s • [SLOW TEST:14.559 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 14:54:43.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a 
default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-3c5fa44d-69b9-4158-a041-dc298a608973 STEP: Creating a pod to test consume configMaps Feb 1 14:54:43.489: INFO: Waiting up to 5m0s for pod "pod-configmaps-0409e92a-17f3-4dba-8c30-c160a99c476c" in namespace "configmap-1129" to be "success or failure" Feb 1 14:54:43.498: INFO: Pod "pod-configmaps-0409e92a-17f3-4dba-8c30-c160a99c476c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.228942ms Feb 1 14:54:45.508: INFO: Pod "pod-configmaps-0409e92a-17f3-4dba-8c30-c160a99c476c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018288729s Feb 1 14:54:47.517: INFO: Pod "pod-configmaps-0409e92a-17f3-4dba-8c30-c160a99c476c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02735288s Feb 1 14:54:49.524: INFO: Pod "pod-configmaps-0409e92a-17f3-4dba-8c30-c160a99c476c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034771847s Feb 1 14:54:51.536: INFO: Pod "pod-configmaps-0409e92a-17f3-4dba-8c30-c160a99c476c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.046466976s STEP: Saw pod success Feb 1 14:54:51.536: INFO: Pod "pod-configmaps-0409e92a-17f3-4dba-8c30-c160a99c476c" satisfied condition "success or failure" Feb 1 14:54:51.540: INFO: Trying to get logs from node iruya-node pod pod-configmaps-0409e92a-17f3-4dba-8c30-c160a99c476c container configmap-volume-test: STEP: delete the pod Feb 1 14:54:51.838: INFO: Waiting for pod pod-configmaps-0409e92a-17f3-4dba-8c30-c160a99c476c to disappear Feb 1 14:54:51.844: INFO: Pod pod-configmaps-0409e92a-17f3-4dba-8c30-c160a99c476c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 1 14:54:51.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1129" for this suite. Feb 1 14:54:57.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 14:54:58.013: INFO: namespace configmap-1129 deletion completed in 6.162341259s • [SLOW TEST:14.797 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 1 14:54:58.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Feb 1 14:54:58.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1170' Feb 1 14:54:58.665: INFO: stderr: "" Feb 1 14:54:58.665: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 1 14:54:58.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1170' Feb 1 14:54:58.948: INFO: stderr: "" Feb 1 14:54:58.948: INFO: stdout: "update-demo-nautilus-dpsw5 update-demo-nautilus-j8vsr " Feb 1 14:54:58.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dpsw5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1170' Feb 1 14:54:59.144: INFO: stderr: "" Feb 1 14:54:59.145: INFO: stdout: "" Feb 1 14:54:59.145: INFO: update-demo-nautilus-dpsw5 is created but not running Feb 1 14:55:04.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1170' Feb 1 14:55:05.662: INFO: stderr: "" Feb 1 14:55:05.662: INFO: stdout: "update-demo-nautilus-dpsw5 update-demo-nautilus-j8vsr " Feb 1 14:55:05.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dpsw5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1170' Feb 1 14:55:05.983: INFO: stderr: "" Feb 1 14:55:05.983: INFO: stdout: "" Feb 1 14:55:05.983: INFO: update-demo-nautilus-dpsw5 is created but not running Feb 1 14:55:10.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1170' Feb 1 14:55:11.177: INFO: stderr: "" Feb 1 14:55:11.177: INFO: stdout: "update-demo-nautilus-dpsw5 update-demo-nautilus-j8vsr " Feb 1 14:55:11.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dpsw5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1170' Feb 1 14:55:11.289: INFO: stderr: "" Feb 1 14:55:11.289: INFO: stdout: "true" Feb 1 14:55:11.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dpsw5 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1170' Feb 1 14:55:11.470: INFO: stderr: "" Feb 1 14:55:11.470: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 1 14:55:11.470: INFO: validating pod update-demo-nautilus-dpsw5 Feb 1 14:55:11.596: INFO: got data: { "image": "nautilus.jpg" } Feb 1 14:55:11.596: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 1 14:55:11.596: INFO: update-demo-nautilus-dpsw5 is verified up and running Feb 1 14:55:11.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j8vsr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1170' Feb 1 14:55:11.708: INFO: stderr: "" Feb 1 14:55:11.708: INFO: stdout: "true" Feb 1 14:55:11.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j8vsr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1170' Feb 1 14:55:11.807: INFO: stderr: "" Feb 1 14:55:11.807: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 1 14:55:11.807: INFO: validating pod update-demo-nautilus-j8vsr Feb 1 14:55:11.829: INFO: got data: { "image": "nautilus.jpg" } Feb 1 14:55:11.830: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Feb 1 14:55:11.830: INFO: update-demo-nautilus-j8vsr is verified up and running
STEP: using delete to clean up resources
Feb 1 14:55:11.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1170'
Feb 1 14:55:12.071: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 1 14:55:12.071: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 1 14:55:12.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1170'
Feb 1 14:55:12.230: INFO: stderr: "No resources found.\n"
Feb 1 14:55:12.230: INFO: stdout: ""
Feb 1 14:55:12.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1170 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 1 14:55:12.411: INFO: stderr: ""
Feb 1 14:55:12.411: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:55:12.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1170" for this suite.
Feb 1 14:55:34.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 14:55:34.666: INFO: namespace kubectl-1170 deletion completed in 22.157519316s
• [SLOW TEST:36.652 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 1 14:55:34.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Feb 1 14:55:34.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9220'
Feb 1 14:55:35.272: INFO: stderr: ""
Feb 1 14:55:35.273: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 1 14:55:35.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9220'
Feb 1 14:55:35.495: INFO: stderr: ""
Feb 1 14:55:35.496: INFO: stdout: "update-demo-nautilus-22k78 update-demo-nautilus-lvz64 "
Feb 1 14:55:35.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-22k78 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9220'
Feb 1 14:55:35.647: INFO: stderr: ""
Feb 1 14:55:35.647: INFO: stdout: ""
Feb 1 14:55:35.647: INFO: update-demo-nautilus-22k78 is created but not running
Feb 1 14:55:40.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9220'
Feb 1 14:55:40.749: INFO: stderr: ""
Feb 1 14:55:40.749: INFO: stdout: "update-demo-nautilus-22k78 update-demo-nautilus-lvz64 "
Feb 1 14:55:40.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-22k78 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9220'
Feb 1 14:55:42.690: INFO: stderr: ""
Feb 1 14:55:42.691: INFO: stdout: ""
Feb 1 14:55:42.691: INFO: update-demo-nautilus-22k78 is created but not running
Feb 1 14:55:47.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9220'
Feb 1 14:55:47.917: INFO: stderr: ""
Feb 1 14:55:47.918: INFO: stdout: "update-demo-nautilus-22k78 update-demo-nautilus-lvz64 "
Feb 1 14:55:47.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-22k78 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9220'
Feb 1 14:55:48.037: INFO: stderr: ""
Feb 1 14:55:48.037: INFO: stdout: "true"
Feb 1 14:55:48.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-22k78 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9220'
Feb 1 14:55:48.153: INFO: stderr: ""
Feb 1 14:55:48.153: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 1 14:55:48.153: INFO: validating pod update-demo-nautilus-22k78
Feb 1 14:55:48.162: INFO: got data: { "image": "nautilus.jpg" }
Feb 1 14:55:48.162: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 1 14:55:48.162: INFO: update-demo-nautilus-22k78 is verified up and running
Feb 1 14:55:48.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lvz64 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9220'
Feb 1 14:55:48.247: INFO: stderr: ""
Feb 1 14:55:48.247: INFO: stdout: "true"
Feb 1 14:55:48.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lvz64 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9220'
Feb 1 14:55:48.335: INFO: stderr: ""
Feb 1 14:55:48.335: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 1 14:55:48.335: INFO: validating pod update-demo-nautilus-lvz64
Feb 1 14:55:48.348: INFO: got data: { "image": "nautilus.jpg" }
Feb 1 14:55:48.348: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 1 14:55:48.348: INFO: update-demo-nautilus-lvz64 is verified up and running
STEP: rolling-update to new replication controller
Feb 1 14:55:48.351: INFO: scanned /root for discovery docs:
Feb 1 14:55:48.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-9220'
Feb 1 14:56:20.572: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 1 14:56:20.572: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 1 14:56:20.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9220'
Feb 1 14:56:20.663: INFO: stderr: ""
Feb 1 14:56:20.664: INFO: stdout: "update-demo-kitten-g8k5h update-demo-kitten-q2nwj "
Feb 1 14:56:20.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-g8k5h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9220'
Feb 1 14:56:20.738: INFO: stderr: ""
Feb 1 14:56:20.738: INFO: stdout: "true"
Feb 1 14:56:20.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-g8k5h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9220'
Feb 1 14:56:20.814: INFO: stderr: ""
Feb 1 14:56:20.814: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 1 14:56:20.814: INFO: validating pod update-demo-kitten-g8k5h
Feb 1 14:56:20.828: INFO: got data: { "image": "kitten.jpg" }
Feb 1 14:56:20.828: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 1 14:56:20.829: INFO: update-demo-kitten-g8k5h is verified up and running
Feb 1 14:56:20.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-q2nwj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9220'
Feb 1 14:56:20.928: INFO: stderr: ""
Feb 1 14:56:20.928: INFO: stdout: "true"
Feb 1 14:56:20.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-q2nwj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9220'
Feb 1 14:56:21.042: INFO: stderr: ""
Feb 1 14:56:21.042: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 1 14:56:21.042: INFO: validating pod update-demo-kitten-q2nwj
Feb 1 14:56:21.061: INFO: got data: { "image": "kitten.jpg" }
Feb 1 14:56:21.061: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 1 14:56:21.061: INFO: update-demo-kitten-q2nwj is verified up and running
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 1 14:56:21.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9220" for this suite.
Feb  1 14:56:45.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 14:56:45.429: INFO: namespace kubectl-9220 deletion completed in 24.362262155s

• [SLOW TEST:70.762 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 14:56:45.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  1 14:56:45.500: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 14:56:57.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3819" for this suite.
Feb  1 14:57:03.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 14:57:04.032: INFO: namespace init-container-3819 deletion completed in 6.16887428s

• [SLOW TEST:18.603 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 14:57:04.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  1 14:57:04.157: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.195388ms)
Feb  1 14:57:04.162: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.973203ms)
Feb  1 14:57:04.166: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.915739ms)
Feb  1 14:57:04.172: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.904045ms)
Feb  1 14:57:04.178: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.070221ms)
Feb  1 14:57:04.183: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.163406ms)
Feb  1 14:57:04.187: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.089716ms)
Feb  1 14:57:04.190: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.667232ms)
Feb  1 14:57:04.194: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.448634ms)
Feb  1 14:57:04.231: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 37.067317ms)
Feb  1 14:57:04.240: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.671843ms)
Feb  1 14:57:04.246: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.223785ms)
Feb  1 14:57:04.252: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.56078ms)
Feb  1 14:57:04.257: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.965212ms)
Feb  1 14:57:04.261: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.973054ms)
Feb  1 14:57:04.267: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.374473ms)
Feb  1 14:57:04.274: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.407412ms)
Feb  1 14:57:04.281: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.764314ms)
Feb  1 14:57:04.288: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.910008ms)
Feb  1 14:57:04.296: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.352092ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 14:57:04.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2326" for this suite.
Feb  1 14:57:10.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 14:57:10.432: INFO: namespace proxy-2326 deletion completed in 6.132043893s

• [SLOW TEST:6.400 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
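The twenty proxied requests above all hit the same kubelet `logs` listing through the API server's node proxy subresource. A rough way to issue one such request by hand (a sketch, assuming a reachable cluster; the node name and kubelet port 10250 are taken from this log, and `kubectl get --raw` is the standard way to request an arbitrary API server path):

```shell
# Fetch the kubelet log-directory listing for node "iruya-node"
# through the API server's node proxy subresource; this is the
# same path the test requested twenty times above.
kubectl --kubeconfig=/root/.kube/config get --raw \
  "/api/v1/nodes/iruya-node:10250/proxy/logs/"
```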
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 14:57:10.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 14:57:18.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6221" for this suite.
Feb  1 14:57:24.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 14:57:25.076: INFO: namespace emptydir-wrapper-6221 deletion completed in 6.251931718s

• [SLOW TEST:14.643 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 14:57:25.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Feb  1 14:57:25.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6607 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb  1 14:57:33.307: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0201 14:57:32.187803    2702 log.go:172] (0xc000132790) (0xc0006d0140) Create stream\nI0201 14:57:32.188076    2702 log.go:172] (0xc000132790) (0xc0006d0140) Stream added, broadcasting: 1\nI0201 14:57:32.205096    2702 log.go:172] (0xc000132790) Reply frame received for 1\nI0201 14:57:32.205164    2702 log.go:172] (0xc000132790) (0xc0006d0000) Create stream\nI0201 14:57:32.205184    2702 log.go:172] (0xc000132790) (0xc0006d0000) Stream added, broadcasting: 3\nI0201 14:57:32.207168    2702 log.go:172] (0xc000132790) Reply frame received for 3\nI0201 14:57:32.207195    2702 log.go:172] (0xc000132790) (0xc00066c1e0) Create stream\nI0201 14:57:32.207202    2702 log.go:172] (0xc000132790) (0xc00066c1e0) Stream added, broadcasting: 5\nI0201 14:57:32.208808    2702 log.go:172] (0xc000132790) Reply frame received for 5\nI0201 14:57:32.208923    2702 log.go:172] (0xc000132790) (0xc00066c280) Create stream\nI0201 14:57:32.208936    2702 log.go:172] (0xc000132790) (0xc00066c280) Stream added, broadcasting: 7\nI0201 14:57:32.211933    2702 log.go:172] (0xc000132790) Reply frame received for 7\nI0201 14:57:32.212190    2702 log.go:172] (0xc0006d0000) (3) Writing data frame\nI0201 14:57:32.212392    2702 log.go:172] (0xc0006d0000) (3) Writing data frame\nI0201 14:57:32.232583    2702 log.go:172] (0xc000132790) Data frame received for 5\nI0201 14:57:32.232625    2702 log.go:172] (0xc00066c1e0) (5) Data frame handling\nI0201 14:57:32.232665    2702 log.go:172] (0xc00066c1e0) (5) Data frame sent\nI0201 14:57:32.238603    2702 log.go:172] (0xc000132790) Data frame received for 5\nI0201 14:57:32.238625    2702 log.go:172] (0xc00066c1e0) (5) Data frame handling\nI0201 14:57:32.238640    2702 log.go:172] (0xc00066c1e0) (5) Data frame 
sent\nI0201 14:57:33.260824    2702 log.go:172] (0xc000132790) (0xc0006d0000) Stream removed, broadcasting: 3\nI0201 14:57:33.261160    2702 log.go:172] (0xc000132790) (0xc00066c1e0) Stream removed, broadcasting: 5\nI0201 14:57:33.261196    2702 log.go:172] (0xc000132790) Data frame received for 1\nI0201 14:57:33.261229    2702 log.go:172] (0xc0006d0140) (1) Data frame handling\nI0201 14:57:33.261250    2702 log.go:172] (0xc0006d0140) (1) Data frame sent\nI0201 14:57:33.261290    2702 log.go:172] (0xc000132790) (0xc00066c280) Stream removed, broadcasting: 7\nI0201 14:57:33.261474    2702 log.go:172] (0xc000132790) (0xc0006d0140) Stream removed, broadcasting: 1\nI0201 14:57:33.261518    2702 log.go:172] (0xc000132790) Go away received\nI0201 14:57:33.262224    2702 log.go:172] (0xc000132790) (0xc0006d0140) Stream removed, broadcasting: 1\nI0201 14:57:33.262255    2702 log.go:172] (0xc000132790) (0xc0006d0000) Stream removed, broadcasting: 3\nI0201 14:57:33.262265    2702 log.go:172] (0xc000132790) (0xc00066c1e0) Stream removed, broadcasting: 5\nI0201 14:57:33.262274    2702 log.go:172] (0xc000132790) (0xc00066c280) Stream removed, broadcasting: 7\n"
Feb  1 14:57:33.308: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 14:57:35.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6607" for this suite.
Feb  1 14:57:41.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 14:57:41.470: INFO: namespace kubectl-6607 deletion completed in 6.145095483s

• [SLOW TEST:16.394 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
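The `--rm` invocation the test executed above can be reduced to a minimal manual reproduction (a sketch, assuming a reachable cluster; names, image, and flags are the ones from this log, and kubectl itself already reports `--generator=job/v1` as deprecated in the stderr captured above):

```shell
# Run a one-off job, attach to it with stdin, and delete the job
# when the command exits. The piped "abcd1234" is echoed back by
# `cat` before "stdin closed" is printed, matching the stdout
# recorded above: abcd1234stdin closed.
echo abcd1234 | kubectl --kubeconfig=/root/.kube/config \
  --namespace=kubectl-6607 run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 --rm=true \
  --generator=job/v1 --restart=OnFailure --attach=true --stdin \
  -- sh -c 'cat && echo "stdin closed"'
```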
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 14:57:41.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1354.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1354.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1354.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1354.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  1 14:57:51.654: INFO: File wheezy_udp@dns-test-service-3.dns-1354.svc.cluster.local from pod  dns-1354/dns-test-d9c5e87e-34a3-4cec-bd3c-93fb12f7ef3b contains '' instead of 'foo.example.com.'
Feb  1 14:57:51.662: INFO: File jessie_udp@dns-test-service-3.dns-1354.svc.cluster.local from pod  dns-1354/dns-test-d9c5e87e-34a3-4cec-bd3c-93fb12f7ef3b contains '' instead of 'foo.example.com.'
Feb  1 14:57:51.662: INFO: Lookups using dns-1354/dns-test-d9c5e87e-34a3-4cec-bd3c-93fb12f7ef3b failed for: [wheezy_udp@dns-test-service-3.dns-1354.svc.cluster.local jessie_udp@dns-test-service-3.dns-1354.svc.cluster.local]

Feb  1 14:57:56.704: INFO: DNS probes using dns-test-d9c5e87e-34a3-4cec-bd3c-93fb12f7ef3b succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1354.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1354.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1354.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1354.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  1 14:58:11.055: INFO: File wheezy_udp@dns-test-service-3.dns-1354.svc.cluster.local from pod  dns-1354/dns-test-641dadc6-a022-4f1e-8d45-294c32307b25 contains '' instead of 'bar.example.com.'
Feb  1 14:58:11.060: INFO: File jessie_udp@dns-test-service-3.dns-1354.svc.cluster.local from pod  dns-1354/dns-test-641dadc6-a022-4f1e-8d45-294c32307b25 contains '' instead of 'bar.example.com.'
Feb  1 14:58:11.060: INFO: Lookups using dns-1354/dns-test-641dadc6-a022-4f1e-8d45-294c32307b25 failed for: [wheezy_udp@dns-test-service-3.dns-1354.svc.cluster.local jessie_udp@dns-test-service-3.dns-1354.svc.cluster.local]

Feb  1 14:58:16.086: INFO: File wheezy_udp@dns-test-service-3.dns-1354.svc.cluster.local from pod  dns-1354/dns-test-641dadc6-a022-4f1e-8d45-294c32307b25 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  1 14:58:16.104: INFO: File jessie_udp@dns-test-service-3.dns-1354.svc.cluster.local from pod  dns-1354/dns-test-641dadc6-a022-4f1e-8d45-294c32307b25 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  1 14:58:16.104: INFO: Lookups using dns-1354/dns-test-641dadc6-a022-4f1e-8d45-294c32307b25 failed for: [wheezy_udp@dns-test-service-3.dns-1354.svc.cluster.local jessie_udp@dns-test-service-3.dns-1354.svc.cluster.local]

Feb  1 14:58:21.080: INFO: File wheezy_udp@dns-test-service-3.dns-1354.svc.cluster.local from pod  dns-1354/dns-test-641dadc6-a022-4f1e-8d45-294c32307b25 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  1 14:58:21.090: INFO: File jessie_udp@dns-test-service-3.dns-1354.svc.cluster.local from pod  dns-1354/dns-test-641dadc6-a022-4f1e-8d45-294c32307b25 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  1 14:58:21.090: INFO: Lookups using dns-1354/dns-test-641dadc6-a022-4f1e-8d45-294c32307b25 failed for: [wheezy_udp@dns-test-service-3.dns-1354.svc.cluster.local jessie_udp@dns-test-service-3.dns-1354.svc.cluster.local]

Feb  1 14:58:26.072: INFO: File wheezy_udp@dns-test-service-3.dns-1354.svc.cluster.local from pod  dns-1354/dns-test-641dadc6-a022-4f1e-8d45-294c32307b25 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  1 14:58:26.081: INFO: Lookups using dns-1354/dns-test-641dadc6-a022-4f1e-8d45-294c32307b25 failed for: [wheezy_udp@dns-test-service-3.dns-1354.svc.cluster.local]

Feb  1 14:58:31.088: INFO: DNS probes using dns-test-641dadc6-a022-4f1e-8d45-294c32307b25 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1354.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1354.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1354.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1354.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  1 14:58:45.525: INFO: File wheezy_udp@dns-test-service-3.dns-1354.svc.cluster.local from pod  dns-1354/dns-test-4a3e1ed5-820c-48b3-b296-574d58b88ac1 contains '' instead of '10.104.149.6'
Feb  1 14:58:45.556: INFO: File jessie_udp@dns-test-service-3.dns-1354.svc.cluster.local from pod  dns-1354/dns-test-4a3e1ed5-820c-48b3-b296-574d58b88ac1 contains '' instead of '10.104.149.6'
Feb  1 14:58:45.556: INFO: Lookups using dns-1354/dns-test-4a3e1ed5-820c-48b3-b296-574d58b88ac1 failed for: [wheezy_udp@dns-test-service-3.dns-1354.svc.cluster.local jessie_udp@dns-test-service-3.dns-1354.svc.cluster.local]

Feb  1 14:58:50.605: INFO: DNS probes using dns-test-4a3e1ed5-820c-48b3-b296-574d58b88ac1 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 14:58:50.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1354" for this suite.
Feb  1 14:58:58.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 14:58:59.065: INFO: namespace dns-1354 deletion completed in 8.185060222s

• [SLOW TEST:77.594 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
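Each probe pod above runs the same lookup loop on both the wheezy and jessie images; written out as a standalone sketch (hostname and result path are the ones from this log, which assumes the pod's in-cluster resolver):

```shell
# Repeatedly resolve the ExternalName service's CNAME and record
# the latest answer. An empty result file means the record has not
# propagated yet, which is why the first round of probes above
# reports '' before eventually seeing foo.example.com.
for i in `seq 1 30`; do
  dig +short dns-test-service-3.dns-1354.svc.cluster.local CNAME \
    > /results/wheezy_udp@dns-test-service-3.dns-1354.svc.cluster.local
  sleep 1
done
```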
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 14:58:59.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  1 14:58:59.213: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb  1 14:59:04.221: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  1 14:59:08.384: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb  1 14:59:10.391: INFO: Creating deployment "test-rollover-deployment"
Feb  1 14:59:10.411: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb  1 14:59:12.453: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb  1 14:59:12.462: INFO: Ensure that both replica sets have 1 created replica
Feb  1 14:59:12.470: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb  1 14:59:12.489: INFO: Updating deployment test-rollover-deployment
Feb  1 14:59:12.489: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb  1 14:59:14.523: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb  1 14:59:14.532: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb  1 14:59:14.548: INFO: all replica sets need to contain the pod-template-hash label
Feb  1 14:59:14.548: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165952, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  1 14:59:16.584: INFO: all replica sets need to contain the pod-template-hash label
Feb  1 14:59:16.584: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165952, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  1 14:59:18.583: INFO: all replica sets need to contain the pod-template-hash label
Feb  1 14:59:18.584: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165952, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  1 14:59:20.566: INFO: all replica sets need to contain the pod-template-hash label
Feb  1 14:59:20.566: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165952, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  1 14:59:22.636: INFO: all replica sets need to contain the pod-template-hash label
Feb  1 14:59:22.637: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165961, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  1 14:59:24.576: INFO: all replica sets need to contain the pod-template-hash label
Feb  1 14:59:24.576: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165961, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  1 14:59:26.576: INFO: all replica sets need to contain the pod-template-hash label
Feb  1 14:59:26.576: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165961, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  1 14:59:28.571: INFO: all replica sets need to contain the pod-template-hash label
Feb  1 14:59:28.571: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165961, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  1 14:59:30.566: INFO: all replica sets need to contain the pod-template-hash label
Feb  1 14:59:30.566: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165961, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716165950, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  1 14:59:32.571: INFO: 
Feb  1 14:59:32.571: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  1 14:59:32.585: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-6804,SelfLink:/apis/apps/v1/namespaces/deployment-6804/deployments/test-rollover-deployment,UID:15baef7b-0cac-4ac1-b2b0-bae508a98d28,ResourceVersion:22706715,Generation:2,CreationTimestamp:2020-02-01 14:59:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-01 14:59:10 +0000 UTC 2020-02-01 14:59:10 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-01 14:59:31 +0000 UTC 2020-02-01 14:59:10 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  1 14:59:32.595: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-6804,SelfLink:/apis/apps/v1/namespaces/deployment-6804/replicasets/test-rollover-deployment-854595fc44,UID:902622ac-e7b0-421c-afc2-a49ea1d595b3,ResourceVersion:22706705,Generation:2,CreationTimestamp:2020-02-01 14:59:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 15baef7b-0cac-4ac1-b2b0-bae508a98d28 0xc0021088d7 0xc0021088d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  1 14:59:32.595: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb  1 14:59:32.596: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-6804,SelfLink:/apis/apps/v1/namespaces/deployment-6804/replicasets/test-rollover-controller,UID:c965029c-1d00-4984-9dde-681ceade54f9,ResourceVersion:22706714,Generation:2,CreationTimestamp:2020-02-01 14:58:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 15baef7b-0cac-4ac1-b2b0-bae508a98d28 0xc0021087ef 0xc002108800}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  1 14:59:32.596: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-6804,SelfLink:/apis/apps/v1/namespaces/deployment-6804/replicasets/test-rollover-deployment-9b8b997cf,UID:1fd1dfde-55da-46e5-b54f-b49fc3eef808,ResourceVersion:22706665,Generation:2,CreationTimestamp:2020-02-01 14:59:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 15baef7b-0cac-4ac1-b2b0-bae508a98d28 0xc0021089c0 0xc0021089c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  1 14:59:32.604: INFO: Pod "test-rollover-deployment-854595fc44-pwqfz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-pwqfz,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-6804,SelfLink:/api/v1/namespaces/deployment-6804/pods/test-rollover-deployment-854595fc44-pwqfz,UID:10c096f3-4f44-415c-a238-0f9bf76c49d5,ResourceVersion:22706689,Generation:0,CreationTimestamp:2020-02-01 14:59:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 902622ac-e7b0-421c-afc2-a49ea1d595b3 0xc002109607 0xc002109608}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sx755 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sx755,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-sx755 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002109690} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021096b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 14:59:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 14:59:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 14:59:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 14:59:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-01 14:59:13 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-01 14:59:20 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://801df436511295a721f55b6fb46fb7665b3a83430abf4f495ad8dff02d6e514e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 14:59:32.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6804" for this suite.
Feb  1 14:59:40.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 14:59:40.717: INFO: namespace deployment-6804 deletion completed in 8.104870286s

• [SLOW TEST:41.651 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 14:59:40.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  1 15:00:05.043: INFO: Container started at 2020-02-01 14:59:46 +0000 UTC, pod became ready at 2020-02-01 15:00:04 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 15:00:05.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3614" for this suite.
Feb  1 15:00:27.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 15:00:27.198: INFO: namespace container-probe-3614 deletion completed in 22.149126376s

• [SLOW TEST:46.481 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 15:00:27.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  1 15:00:27.360: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b9eee4d-f621-4ee0-b746-d3a64875d54c" in namespace "projected-175" to be "success or failure"
Feb  1 15:00:27.407: INFO: Pod "downwardapi-volume-4b9eee4d-f621-4ee0-b746-d3a64875d54c": Phase="Pending", Reason="", readiness=false. Elapsed: 46.667972ms
Feb  1 15:00:29.423: INFO: Pod "downwardapi-volume-4b9eee4d-f621-4ee0-b746-d3a64875d54c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062890598s
Feb  1 15:00:31.431: INFO: Pod "downwardapi-volume-4b9eee4d-f621-4ee0-b746-d3a64875d54c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07029422s
Feb  1 15:00:33.438: INFO: Pod "downwardapi-volume-4b9eee4d-f621-4ee0-b746-d3a64875d54c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077298093s
Feb  1 15:00:35.446: INFO: Pod "downwardapi-volume-4b9eee4d-f621-4ee0-b746-d3a64875d54c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.085842772s
STEP: Saw pod success
Feb  1 15:00:35.446: INFO: Pod "downwardapi-volume-4b9eee4d-f621-4ee0-b746-d3a64875d54c" satisfied condition "success or failure"
Feb  1 15:00:35.450: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4b9eee4d-f621-4ee0-b746-d3a64875d54c container client-container: 
STEP: delete the pod
Feb  1 15:00:35.685: INFO: Waiting for pod downwardapi-volume-4b9eee4d-f621-4ee0-b746-d3a64875d54c to disappear
Feb  1 15:00:35.697: INFO: Pod downwardapi-volume-4b9eee4d-f621-4ee0-b746-d3a64875d54c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 15:00:35.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-175" for this suite.
Feb  1 15:00:41.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 15:00:41.945: INFO: namespace projected-175 deletion completed in 6.241684133s

• [SLOW TEST:14.747 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 15:00:41.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-233
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-233 to expose endpoints map[]
Feb  1 15:00:42.113: INFO: successfully validated that service multi-endpoint-test in namespace services-233 exposes endpoints map[] (30.231577ms elapsed)
STEP: Creating pod pod1 in namespace services-233
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-233 to expose endpoints map[pod1:[100]]
Feb  1 15:00:46.228: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.096397043s elapsed, will retry)
Feb  1 15:00:49.272: INFO: successfully validated that service multi-endpoint-test in namespace services-233 exposes endpoints map[pod1:[100]] (7.140303229s elapsed)
STEP: Creating pod pod2 in namespace services-233
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-233 to expose endpoints map[pod1:[100] pod2:[101]]
Feb  1 15:00:54.903: INFO: Unexpected endpoints: found map[a0b394c8-a7ca-4eb1-93ba-c134d215ce34:[100]], expected map[pod1:[100] pod2:[101]] (5.625560121s elapsed, will retry)
Feb  1 15:00:57.967: INFO: successfully validated that service multi-endpoint-test in namespace services-233 exposes endpoints map[pod1:[100] pod2:[101]] (8.689948991s elapsed)
STEP: Deleting pod pod1 in namespace services-233
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-233 to expose endpoints map[pod2:[101]]
Feb  1 15:00:59.047: INFO: successfully validated that service multi-endpoint-test in namespace services-233 exposes endpoints map[pod2:[101]] (1.073105027s elapsed)
STEP: Deleting pod pod2 in namespace services-233
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-233 to expose endpoints map[]
Feb  1 15:01:01.087: INFO: successfully validated that service multi-endpoint-test in namespace services-233 exposes endpoints map[] (2.034689663s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 15:01:01.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-233" for this suite.
Feb  1 15:01:07.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 15:01:07.642: INFO: namespace services-233 deletion completed in 6.119220653s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:25.697 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 15:01:07.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb  1 15:01:27.921: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7750 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  1 15:01:27.921: INFO: >>> kubeConfig: /root/.kube/config
I0201 15:01:28.005854       8 log.go:172] (0xc000aef550) (0xc00165b220) Create stream
I0201 15:01:28.006003       8 log.go:172] (0xc000aef550) (0xc00165b220) Stream added, broadcasting: 1
I0201 15:01:28.024480       8 log.go:172] (0xc000aef550) Reply frame received for 1
I0201 15:01:28.024633       8 log.go:172] (0xc000aef550) (0xc001364280) Create stream
I0201 15:01:28.024658       8 log.go:172] (0xc000aef550) (0xc001364280) Stream added, broadcasting: 3
I0201 15:01:28.026947       8 log.go:172] (0xc000aef550) Reply frame received for 3
I0201 15:01:28.026999       8 log.go:172] (0xc000aef550) (0xc001fd0140) Create stream
I0201 15:01:28.027013       8 log.go:172] (0xc000aef550) (0xc001fd0140) Stream added, broadcasting: 5
I0201 15:01:28.029806       8 log.go:172] (0xc000aef550) Reply frame received for 5
I0201 15:01:28.186156       8 log.go:172] (0xc000aef550) Data frame received for 3
I0201 15:01:28.186325       8 log.go:172] (0xc001364280) (3) Data frame handling
I0201 15:01:28.186381       8 log.go:172] (0xc001364280) (3) Data frame sent
I0201 15:01:28.413675       8 log.go:172] (0xc000aef550) Data frame received for 1
I0201 15:01:28.413773       8 log.go:172] (0xc000aef550) (0xc001364280) Stream removed, broadcasting: 3
I0201 15:01:28.413950       8 log.go:172] (0xc00165b220) (1) Data frame handling
I0201 15:01:28.414021       8 log.go:172] (0xc00165b220) (1) Data frame sent
I0201 15:01:28.414314       8 log.go:172] (0xc000aef550) (0xc001fd0140) Stream removed, broadcasting: 5
I0201 15:01:28.414367       8 log.go:172] (0xc000aef550) (0xc00165b220) Stream removed, broadcasting: 1
I0201 15:01:28.414383       8 log.go:172] (0xc000aef550) Go away received
I0201 15:01:28.414887       8 log.go:172] (0xc000aef550) (0xc00165b220) Stream removed, broadcasting: 1
I0201 15:01:28.414906       8 log.go:172] (0xc000aef550) (0xc001364280) Stream removed, broadcasting: 3
I0201 15:01:28.414918       8 log.go:172] (0xc000aef550) (0xc001fd0140) Stream removed, broadcasting: 5
Feb  1 15:01:28.414: INFO: Exec stderr: ""
Feb  1 15:01:28.415: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7750 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  1 15:01:28.415: INFO: >>> kubeConfig: /root/.kube/config
I0201 15:01:28.517604       8 log.go:172] (0xc0025c2790) (0xc00022ebe0) Create stream
I0201 15:01:28.518029       8 log.go:172] (0xc0025c2790) (0xc00022ebe0) Stream added, broadcasting: 1
I0201 15:01:28.531411       8 log.go:172] (0xc0025c2790) Reply frame received for 1
I0201 15:01:28.531498       8 log.go:172] (0xc0025c2790) (0xc001364460) Create stream
I0201 15:01:28.531519       8 log.go:172] (0xc0025c2790) (0xc001364460) Stream added, broadcasting: 3
I0201 15:01:28.534752       8 log.go:172] (0xc0025c2790) Reply frame received for 3
I0201 15:01:28.534915       8 log.go:172] (0xc0025c2790) (0xc001364500) Create stream
I0201 15:01:28.534941       8 log.go:172] (0xc0025c2790) (0xc001364500) Stream added, broadcasting: 5
I0201 15:01:28.539450       8 log.go:172] (0xc0025c2790) Reply frame received for 5
I0201 15:01:28.707599       8 log.go:172] (0xc0025c2790) Data frame received for 3
I0201 15:01:28.707791       8 log.go:172] (0xc001364460) (3) Data frame handling
I0201 15:01:28.707839       8 log.go:172] (0xc001364460) (3) Data frame sent
I0201 15:01:28.823836       8 log.go:172] (0xc0025c2790) Data frame received for 1
I0201 15:01:28.824063       8 log.go:172] (0xc0025c2790) (0xc001364460) Stream removed, broadcasting: 3
I0201 15:01:28.824181       8 log.go:172] (0xc00022ebe0) (1) Data frame handling
I0201 15:01:28.824218       8 log.go:172] (0xc00022ebe0) (1) Data frame sent
I0201 15:01:28.824702       8 log.go:172] (0xc0025c2790) (0xc001364500) Stream removed, broadcasting: 5
I0201 15:01:28.824788       8 log.go:172] (0xc0025c2790) (0xc00022ebe0) Stream removed, broadcasting: 1
I0201 15:01:28.824857       8 log.go:172] (0xc0025c2790) Go away received
I0201 15:01:28.825614       8 log.go:172] (0xc0025c2790) (0xc00022ebe0) Stream removed, broadcasting: 1
I0201 15:01:28.825633       8 log.go:172] (0xc0025c2790) (0xc001364460) Stream removed, broadcasting: 3
I0201 15:01:28.825643       8 log.go:172] (0xc0025c2790) (0xc001364500) Stream removed, broadcasting: 5
Feb  1 15:01:28.825: INFO: Exec stderr: ""
Feb  1 15:01:28.825: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7750 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  1 15:01:28.825: INFO: >>> kubeConfig: /root/.kube/config
I0201 15:01:28.887228       8 log.go:172] (0xc0013a8630) (0xc0001d6960) Create stream
I0201 15:01:28.887385       8 log.go:172] (0xc0013a8630) (0xc0001d6960) Stream added, broadcasting: 1
I0201 15:01:28.893519       8 log.go:172] (0xc0013a8630) Reply frame received for 1
I0201 15:01:28.893577       8 log.go:172] (0xc0013a8630) (0xc00165b2c0) Create stream
I0201 15:01:28.893591       8 log.go:172] (0xc0013a8630) (0xc00165b2c0) Stream added, broadcasting: 3
I0201 15:01:28.894664       8 log.go:172] (0xc0013a8630) Reply frame received for 3
I0201 15:01:28.894688       8 log.go:172] (0xc0013a8630) (0xc001fd01e0) Create stream
I0201 15:01:28.894698       8 log.go:172] (0xc0013a8630) (0xc001fd01e0) Stream added, broadcasting: 5
I0201 15:01:28.895712       8 log.go:172] (0xc0013a8630) Reply frame received for 5
I0201 15:01:28.976932       8 log.go:172] (0xc0013a8630) Data frame received for 3
I0201 15:01:28.977087       8 log.go:172] (0xc00165b2c0) (3) Data frame handling
I0201 15:01:28.977146       8 log.go:172] (0xc00165b2c0) (3) Data frame sent
I0201 15:01:29.096976       8 log.go:172] (0xc0013a8630) (0xc00165b2c0) Stream removed, broadcasting: 3
I0201 15:01:29.097221       8 log.go:172] (0xc0013a8630) Data frame received for 1
I0201 15:01:29.097251       8 log.go:172] (0xc0001d6960) (1) Data frame handling
I0201 15:01:29.097293       8 log.go:172] (0xc0001d6960) (1) Data frame sent
I0201 15:01:29.097341       8 log.go:172] (0xc0013a8630) (0xc0001d6960) Stream removed, broadcasting: 1
I0201 15:01:29.097605       8 log.go:172] (0xc0013a8630) (0xc001fd01e0) Stream removed, broadcasting: 5
I0201 15:01:29.097654       8 log.go:172] (0xc0013a8630) Go away received
I0201 15:01:29.097708       8 log.go:172] (0xc0013a8630) (0xc0001d6960) Stream removed, broadcasting: 1
I0201 15:01:29.097721       8 log.go:172] (0xc0013a8630) (0xc00165b2c0) Stream removed, broadcasting: 3
I0201 15:01:29.097733       8 log.go:172] (0xc0013a8630) (0xc001fd01e0) Stream removed, broadcasting: 5
Feb  1 15:01:29.097: INFO: Exec stderr: ""
Feb  1 15:01:29.097: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7750 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  1 15:01:29.097: INFO: >>> kubeConfig: /root/.kube/config
I0201 15:01:29.151847       8 log.go:172] (0xc00150c790) (0xc00165b720) Create stream
I0201 15:01:29.151919       8 log.go:172] (0xc00150c790) (0xc00165b720) Stream added, broadcasting: 1
I0201 15:01:29.159836       8 log.go:172] (0xc00150c790) Reply frame received for 1
I0201 15:01:29.159896       8 log.go:172] (0xc00150c790) (0xc00022ec80) Create stream
I0201 15:01:29.159909       8 log.go:172] (0xc00150c790) (0xc00022ec80) Stream added, broadcasting: 3
I0201 15:01:29.162610       8 log.go:172] (0xc00150c790) Reply frame received for 3
I0201 15:01:29.162676       8 log.go:172] (0xc00150c790) (0xc00022f5e0) Create stream
I0201 15:01:29.162697       8 log.go:172] (0xc00150c790) (0xc00022f5e0) Stream added, broadcasting: 5
I0201 15:01:29.167864       8 log.go:172] (0xc00150c790) Reply frame received for 5
I0201 15:01:29.273102       8 log.go:172] (0xc00150c790) Data frame received for 3
I0201 15:01:29.273324       8 log.go:172] (0xc00022ec80) (3) Data frame handling
I0201 15:01:29.273375       8 log.go:172] (0xc00022ec80) (3) Data frame sent
I0201 15:01:29.422115       8 log.go:172] (0xc00150c790) Data frame received for 1
I0201 15:01:29.422271       8 log.go:172] (0xc00150c790) (0xc00022ec80) Stream removed, broadcasting: 3
I0201 15:01:29.422442       8 log.go:172] (0xc00165b720) (1) Data frame handling
I0201 15:01:29.422495       8 log.go:172] (0xc00150c790) (0xc00022f5e0) Stream removed, broadcasting: 5
I0201 15:01:29.422539       8 log.go:172] (0xc00165b720) (1) Data frame sent
I0201 15:01:29.422578       8 log.go:172] (0xc00150c790) (0xc00165b720) Stream removed, broadcasting: 1
I0201 15:01:29.422602       8 log.go:172] (0xc00150c790) Go away received
I0201 15:01:29.423158       8 log.go:172] (0xc00150c790) (0xc00165b720) Stream removed, broadcasting: 1
I0201 15:01:29.423179       8 log.go:172] (0xc00150c790) (0xc00022ec80) Stream removed, broadcasting: 3
I0201 15:01:29.423200       8 log.go:172] (0xc00150c790) (0xc00022f5e0) Stream removed, broadcasting: 5
Feb  1 15:01:29.423: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb  1 15:01:29.423: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7750 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  1 15:01:29.423: INFO: >>> kubeConfig: /root/.kube/config
I0201 15:01:29.494805       8 log.go:172] (0xc000240fd0) (0xc001fd0640) Create stream
I0201 15:01:29.494908       8 log.go:172] (0xc000240fd0) (0xc001fd0640) Stream added, broadcasting: 1
I0201 15:01:29.503548       8 log.go:172] (0xc000240fd0) Reply frame received for 1
I0201 15:01:29.503619       8 log.go:172] (0xc000240fd0) (0xc00165b7c0) Create stream
I0201 15:01:29.503639       8 log.go:172] (0xc000240fd0) (0xc00165b7c0) Stream added, broadcasting: 3
I0201 15:01:29.506945       8 log.go:172] (0xc000240fd0) Reply frame received for 3
I0201 15:01:29.506972       8 log.go:172] (0xc000240fd0) (0xc00165b900) Create stream
I0201 15:01:29.506985       8 log.go:172] (0xc000240fd0) (0xc00165b900) Stream added, broadcasting: 5
I0201 15:01:29.508743       8 log.go:172] (0xc000240fd0) Reply frame received for 5
I0201 15:01:29.611871       8 log.go:172] (0xc000240fd0) Data frame received for 3
I0201 15:01:29.611963       8 log.go:172] (0xc00165b7c0) (3) Data frame handling
I0201 15:01:29.612019       8 log.go:172] (0xc00165b7c0) (3) Data frame sent
I0201 15:01:29.750943       8 log.go:172] (0xc000240fd0) (0xc00165b7c0) Stream removed, broadcasting: 3
I0201 15:01:29.751127       8 log.go:172] (0xc000240fd0) Data frame received for 1
I0201 15:01:29.751154       8 log.go:172] (0xc000240fd0) (0xc00165b900) Stream removed, broadcasting: 5
I0201 15:01:29.751252       8 log.go:172] (0xc001fd0640) (1) Data frame handling
I0201 15:01:29.751288       8 log.go:172] (0xc001fd0640) (1) Data frame sent
I0201 15:01:29.751301       8 log.go:172] (0xc000240fd0) (0xc001fd0640) Stream removed, broadcasting: 1
I0201 15:01:29.751327       8 log.go:172] (0xc000240fd0) Go away received
I0201 15:01:29.751787       8 log.go:172] (0xc000240fd0) (0xc001fd0640) Stream removed, broadcasting: 1
I0201 15:01:29.751847       8 log.go:172] (0xc000240fd0) (0xc00165b7c0) Stream removed, broadcasting: 3
I0201 15:01:29.751867       8 log.go:172] (0xc000240fd0) (0xc00165b900) Stream removed, broadcasting: 5
Feb  1 15:01:29.751: INFO: Exec stderr: ""
Feb  1 15:01:29.752: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7750 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  1 15:01:29.752: INFO: >>> kubeConfig: /root/.kube/config
I0201 15:01:29.829367       8 log.go:172] (0xc0025c38c0) (0xc00172c0a0) Create stream
I0201 15:01:29.829552       8 log.go:172] (0xc0025c38c0) (0xc00172c0a0) Stream added, broadcasting: 1
I0201 15:01:29.844832       8 log.go:172] (0xc0025c38c0) Reply frame received for 1
I0201 15:01:29.845104       8 log.go:172] (0xc0025c38c0) (0xc00172c280) Create stream
I0201 15:01:29.845148       8 log.go:172] (0xc0025c38c0) (0xc00172c280) Stream added, broadcasting: 3
I0201 15:01:29.850988       8 log.go:172] (0xc0025c38c0) Reply frame received for 3
I0201 15:01:29.851099       8 log.go:172] (0xc0025c38c0) (0xc001fd06e0) Create stream
I0201 15:01:29.851132       8 log.go:172] (0xc0025c38c0) (0xc001fd06e0) Stream added, broadcasting: 5
I0201 15:01:29.853707       8 log.go:172] (0xc0025c38c0) Reply frame received for 5
I0201 15:01:30.022894       8 log.go:172] (0xc0025c38c0) Data frame received for 3
I0201 15:01:30.023081       8 log.go:172] (0xc00172c280) (3) Data frame handling
I0201 15:01:30.023130       8 log.go:172] (0xc00172c280) (3) Data frame sent
I0201 15:01:30.203677       8 log.go:172] (0xc0025c38c0) (0xc00172c280) Stream removed, broadcasting: 3
I0201 15:01:30.204286       8 log.go:172] (0xc0025c38c0) Data frame received for 1
I0201 15:01:30.204483       8 log.go:172] (0xc0025c38c0) (0xc001fd06e0) Stream removed, broadcasting: 5
I0201 15:01:30.204682       8 log.go:172] (0xc00172c0a0) (1) Data frame handling
I0201 15:01:30.204724       8 log.go:172] (0xc00172c0a0) (1) Data frame sent
I0201 15:01:30.204770       8 log.go:172] (0xc0025c38c0) (0xc00172c0a0) Stream removed, broadcasting: 1
I0201 15:01:30.204808       8 log.go:172] (0xc0025c38c0) Go away received
I0201 15:01:30.205274       8 log.go:172] (0xc0025c38c0) (0xc00172c0a0) Stream removed, broadcasting: 1
I0201 15:01:30.205308       8 log.go:172] (0xc0025c38c0) (0xc00172c280) Stream removed, broadcasting: 3
I0201 15:01:30.205317       8 log.go:172] (0xc0025c38c0) (0xc001fd06e0) Stream removed, broadcasting: 5
Feb  1 15:01:30.205: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb  1 15:01:30.205: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7750 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  1 15:01:30.205: INFO: >>> kubeConfig: /root/.kube/config
I0201 15:01:30.320406       8 log.go:172] (0xc0013a9810) (0xc0001d7720) Create stream
I0201 15:01:30.320651       8 log.go:172] (0xc0013a9810) (0xc0001d7720) Stream added, broadcasting: 1
I0201 15:01:30.344047       8 log.go:172] (0xc0013a9810) Reply frame received for 1
I0201 15:01:30.344145       8 log.go:172] (0xc0013a9810) (0xc0001d7860) Create stream
I0201 15:01:30.344161       8 log.go:172] (0xc0013a9810) (0xc0001d7860) Stream added, broadcasting: 3
I0201 15:01:30.346235       8 log.go:172] (0xc0013a9810) Reply frame received for 3
I0201 15:01:30.346275       8 log.go:172] (0xc0013a9810) (0xc001fd0780) Create stream
I0201 15:01:30.346287       8 log.go:172] (0xc0013a9810) (0xc001fd0780) Stream added, broadcasting: 5
I0201 15:01:30.347794       8 log.go:172] (0xc0013a9810) Reply frame received for 5
I0201 15:01:30.457901       8 log.go:172] (0xc0013a9810) Data frame received for 3
I0201 15:01:30.458101       8 log.go:172] (0xc0001d7860) (3) Data frame handling
I0201 15:01:30.458129       8 log.go:172] (0xc0001d7860) (3) Data frame sent
I0201 15:01:30.628195       8 log.go:172] (0xc0013a9810) (0xc001fd0780) Stream removed, broadcasting: 5
I0201 15:01:30.628279       8 log.go:172] (0xc0013a9810) (0xc0001d7860) Stream removed, broadcasting: 3
I0201 15:01:30.628322       8 log.go:172] (0xc0013a9810) Data frame received for 1
I0201 15:01:30.628336       8 log.go:172] (0xc0001d7720) (1) Data frame handling
I0201 15:01:30.628350       8 log.go:172] (0xc0001d7720) (1) Data frame sent
I0201 15:01:30.628377       8 log.go:172] (0xc0013a9810) (0xc0001d7720) Stream removed, broadcasting: 1
I0201 15:01:30.628397       8 log.go:172] (0xc0013a9810) Go away received
I0201 15:01:30.628618       8 log.go:172] (0xc0013a9810) (0xc0001d7720) Stream removed, broadcasting: 1
I0201 15:01:30.628692       8 log.go:172] (0xc0013a9810) (0xc0001d7860) Stream removed, broadcasting: 3
I0201 15:01:30.628714       8 log.go:172] (0xc0013a9810) (0xc001fd0780) Stream removed, broadcasting: 5
Feb  1 15:01:30.628: INFO: Exec stderr: ""
Feb  1 15:01:30.628: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7750 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  1 15:01:30.628: INFO: >>> kubeConfig: /root/.kube/config
I0201 15:01:30.721572       8 log.go:172] (0xc00198c000) (0xc001fd0aa0) Create stream
I0201 15:01:30.721757       8 log.go:172] (0xc00198c000) (0xc001fd0aa0) Stream added, broadcasting: 1
I0201 15:01:30.734902       8 log.go:172] (0xc00198c000) Reply frame received for 1
I0201 15:01:30.734982       8 log.go:172] (0xc00198c000) (0xc0001d79a0) Create stream
I0201 15:01:30.734996       8 log.go:172] (0xc00198c000) (0xc0001d79a0) Stream added, broadcasting: 3
I0201 15:01:30.738254       8 log.go:172] (0xc00198c000) Reply frame received for 3
I0201 15:01:30.738281       8 log.go:172] (0xc00198c000) (0xc001364640) Create stream
I0201 15:01:30.738297       8 log.go:172] (0xc00198c000) (0xc001364640) Stream added, broadcasting: 5
I0201 15:01:30.741267       8 log.go:172] (0xc00198c000) Reply frame received for 5
I0201 15:01:30.860986       8 log.go:172] (0xc00198c000) Data frame received for 3
I0201 15:01:30.861118       8 log.go:172] (0xc0001d79a0) (3) Data frame handling
I0201 15:01:30.861148       8 log.go:172] (0xc0001d79a0) (3) Data frame sent
I0201 15:01:30.976405       8 log.go:172] (0xc00198c000) (0xc0001d79a0) Stream removed, broadcasting: 3
I0201 15:01:30.976612       8 log.go:172] (0xc00198c000) Data frame received for 1
I0201 15:01:30.976630       8 log.go:172] (0xc001fd0aa0) (1) Data frame handling
I0201 15:01:30.976674       8 log.go:172] (0xc001fd0aa0) (1) Data frame sent
I0201 15:01:30.976732       8 log.go:172] (0xc00198c000) (0xc001fd0aa0) Stream removed, broadcasting: 1
I0201 15:01:30.976967       8 log.go:172] (0xc00198c000) (0xc001364640) Stream removed, broadcasting: 5
I0201 15:01:30.977036       8 log.go:172] (0xc00198c000) (0xc001fd0aa0) Stream removed, broadcasting: 1
I0201 15:01:30.977052       8 log.go:172] (0xc00198c000) (0xc0001d79a0) Stream removed, broadcasting: 3
I0201 15:01:30.977065       8 log.go:172] (0xc00198c000) (0xc001364640) Stream removed, broadcasting: 5
Feb  1 15:01:30.977: INFO: Exec stderr: ""
Feb  1 15:01:30.977: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7750 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  1 15:01:30.977: INFO: >>> kubeConfig: /root/.kube/config
I0201 15:01:30.978832       8 log.go:172] (0xc00198c000) Go away received
I0201 15:01:31.041548       8 log.go:172] (0xc00198cd10) (0xc001fd0dc0) Create stream
I0201 15:01:31.041659       8 log.go:172] (0xc00198cd10) (0xc001fd0dc0) Stream added, broadcasting: 1
I0201 15:01:31.046386       8 log.go:172] (0xc00198cd10) Reply frame received for 1
I0201 15:01:31.046440       8 log.go:172] (0xc00198cd10) (0xc00165b9a0) Create stream
I0201 15:01:31.046461       8 log.go:172] (0xc00198cd10) (0xc00165b9a0) Stream added, broadcasting: 3
I0201 15:01:31.056609       8 log.go:172] (0xc00198cd10) Reply frame received for 3
I0201 15:01:31.056682       8 log.go:172] (0xc00198cd10) (0xc00172c320) Create stream
I0201 15:01:31.056695       8 log.go:172] (0xc00198cd10) (0xc00172c320) Stream added, broadcasting: 5
I0201 15:01:31.060164       8 log.go:172] (0xc00198cd10) Reply frame received for 5
I0201 15:01:31.167015       8 log.go:172] (0xc00198cd10) Data frame received for 3
I0201 15:01:31.167124       8 log.go:172] (0xc00165b9a0) (3) Data frame handling
I0201 15:01:31.167169       8 log.go:172] (0xc00165b9a0) (3) Data frame sent
I0201 15:01:31.274788       8 log.go:172] (0xc00198cd10) Data frame received for 1
I0201 15:01:31.274864       8 log.go:172] (0xc00198cd10) (0xc00165b9a0) Stream removed, broadcasting: 3
I0201 15:01:31.274932       8 log.go:172] (0xc001fd0dc0) (1) Data frame handling
I0201 15:01:31.274959       8 log.go:172] (0xc001fd0dc0) (1) Data frame sent
I0201 15:01:31.274966       8 log.go:172] (0xc00198cd10) (0xc00172c320) Stream removed, broadcasting: 5
I0201 15:01:31.275006       8 log.go:172] (0xc00198cd10) (0xc001fd0dc0) Stream removed, broadcasting: 1
I0201 15:01:31.275035       8 log.go:172] (0xc00198cd10) Go away received
I0201 15:01:31.275263       8 log.go:172] (0xc00198cd10) (0xc001fd0dc0) Stream removed, broadcasting: 1
I0201 15:01:31.275270       8 log.go:172] (0xc00198cd10) (0xc00165b9a0) Stream removed, broadcasting: 3
I0201 15:01:31.275274       8 log.go:172] (0xc00198cd10) (0xc00172c320) Stream removed, broadcasting: 5
Feb  1 15:01:31.275: INFO: Exec stderr: ""
Feb  1 15:01:31.275: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7750 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  1 15:01:31.275: INFO: >>> kubeConfig: /root/.kube/config
I0201 15:01:31.328864       8 log.go:172] (0xc001ae8840) (0xc00172c780) Create stream
I0201 15:01:31.328942       8 log.go:172] (0xc001ae8840) (0xc00172c780) Stream added, broadcasting: 1
I0201 15:01:31.333540       8 log.go:172] (0xc001ae8840) Reply frame received for 1
I0201 15:01:31.333593       8 log.go:172] (0xc001ae8840) (0xc001fd0e60) Create stream
I0201 15:01:31.333601       8 log.go:172] (0xc001ae8840) (0xc001fd0e60) Stream added, broadcasting: 3
I0201 15:01:31.337103       8 log.go:172] (0xc001ae8840) Reply frame received for 3
I0201 15:01:31.337240       8 log.go:172] (0xc001ae8840) (0xc0001d7b80) Create stream
I0201 15:01:31.337256       8 log.go:172] (0xc001ae8840) (0xc0001d7b80) Stream added, broadcasting: 5
I0201 15:01:31.339480       8 log.go:172] (0xc001ae8840) Reply frame received for 5
I0201 15:01:31.436130       8 log.go:172] (0xc001ae8840) Data frame received for 3
I0201 15:01:31.436199       8 log.go:172] (0xc001fd0e60) (3) Data frame handling
I0201 15:01:31.436236       8 log.go:172] (0xc001fd0e60) (3) Data frame sent
I0201 15:01:31.582103       8 log.go:172] (0xc001ae8840) Data frame received for 1
I0201 15:01:31.582201       8 log.go:172] (0xc001ae8840) (0xc001fd0e60) Stream removed, broadcasting: 3
I0201 15:01:31.582279       8 log.go:172] (0xc00172c780) (1) Data frame handling
I0201 15:01:31.582299       8 log.go:172] (0xc00172c780) (1) Data frame sent
I0201 15:01:31.582327       8 log.go:172] (0xc001ae8840) (0xc0001d7b80) Stream removed, broadcasting: 5
I0201 15:01:31.582388       8 log.go:172] (0xc001ae8840) (0xc00172c780) Stream removed, broadcasting: 1
I0201 15:01:31.582419       8 log.go:172] (0xc001ae8840) Go away received
I0201 15:01:31.582688       8 log.go:172] (0xc001ae8840) (0xc00172c780) Stream removed, broadcasting: 1
I0201 15:01:31.582711       8 log.go:172] (0xc001ae8840) (0xc001fd0e60) Stream removed, broadcasting: 3
I0201 15:01:31.582719       8 log.go:172] (0xc001ae8840) (0xc0001d7b80) Stream removed, broadcasting: 5
Feb  1 15:01:31.582: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 15:01:31.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-7750" for this suite.
Feb  1 15:02:19.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 15:02:19.898: INFO: namespace e2e-kubelet-etc-hosts-7750 deletion completed in 48.307854115s

• [SLOW TEST:72.255 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 15:02:19.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Feb  1 15:02:19.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb  1 15:02:20.099: INFO: stderr: ""
Feb  1 15:02:20.100: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 15:02:20.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1461" for this suite.
Feb  1 15:02:26.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 15:02:26.398: INFO: namespace kubectl-1461 deletion completed in 6.288024461s

• [SLOW TEST:6.499 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 15:02:26.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7319
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  1 15:02:26.527: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  1 15:03:04.863: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-7319 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  1 15:03:04.863: INFO: >>> kubeConfig: /root/.kube/config
I0201 15:03:04.968642       8 log.go:172] (0xc002963ef0) (0xc0014e7a40) Create stream
I0201 15:03:04.968785       8 log.go:172] (0xc002963ef0) (0xc0014e7a40) Stream added, broadcasting: 1
I0201 15:03:04.992198       8 log.go:172] (0xc002963ef0) Reply frame received for 1
I0201 15:03:04.992355       8 log.go:172] (0xc002963ef0) (0xc002c610e0) Create stream
I0201 15:03:04.992414       8 log.go:172] (0xc002963ef0) (0xc002c610e0) Stream added, broadcasting: 3
I0201 15:03:04.995440       8 log.go:172] (0xc002963ef0) Reply frame received for 3
I0201 15:03:04.995499       8 log.go:172] (0xc002963ef0) (0xc0014e7ae0) Create stream
I0201 15:03:04.995519       8 log.go:172] (0xc002963ef0) (0xc0014e7ae0) Stream added, broadcasting: 5
I0201 15:03:04.997269       8 log.go:172] (0xc002963ef0) Reply frame received for 5
I0201 15:03:05.168540       8 log.go:172] (0xc002963ef0) Data frame received for 3
I0201 15:03:05.168682       8 log.go:172] (0xc002c610e0) (3) Data frame handling
I0201 15:03:05.168711       8 log.go:172] (0xc002c610e0) (3) Data frame sent
I0201 15:03:05.311401       8 log.go:172] (0xc002963ef0) Data frame received for 1
I0201 15:03:05.311568       8 log.go:172] (0xc0014e7a40) (1) Data frame handling
I0201 15:03:05.311640       8 log.go:172] (0xc0014e7a40) (1) Data frame sent
I0201 15:03:05.311948       8 log.go:172] (0xc002963ef0) (0xc0014e7a40) Stream removed, broadcasting: 1
I0201 15:03:05.312076       8 log.go:172] (0xc002963ef0) (0xc002c610e0) Stream removed, broadcasting: 3
I0201 15:03:05.312272       8 log.go:172] (0xc002963ef0) (0xc0014e7ae0) Stream removed, broadcasting: 5
I0201 15:03:05.312328       8 log.go:172] (0xc002963ef0) Go away received
I0201 15:03:05.312442       8 log.go:172] (0xc002963ef0) (0xc0014e7a40) Stream removed, broadcasting: 1
I0201 15:03:05.312464       8 log.go:172] (0xc002963ef0) (0xc002c610e0) Stream removed, broadcasting: 3
I0201 15:03:05.312483       8 log.go:172] (0xc002963ef0) (0xc0014e7ae0) Stream removed, broadcasting: 5
Feb  1 15:03:05.312: INFO: Waiting for endpoints: map[]
Feb  1 15:03:05.323: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-7319 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  1 15:03:05.323: INFO: >>> kubeConfig: /root/.kube/config
I0201 15:03:05.397341       8 log.go:172] (0xc001ae9600) (0xc00324ef00) Create stream
I0201 15:03:05.397466       8 log.go:172] (0xc001ae9600) (0xc00324ef00) Stream added, broadcasting: 1
I0201 15:03:05.408777       8 log.go:172] (0xc001ae9600) Reply frame received for 1
I0201 15:03:05.408862       8 log.go:172] (0xc001ae9600) (0xc00193bb80) Create stream
I0201 15:03:05.408880       8 log.go:172] (0xc001ae9600) (0xc00193bb80) Stream added, broadcasting: 3
I0201 15:03:05.410505       8 log.go:172] (0xc001ae9600) Reply frame received for 3
I0201 15:03:05.410543       8 log.go:172] (0xc001ae9600) (0xc00324efa0) Create stream
I0201 15:03:05.410576       8 log.go:172] (0xc001ae9600) (0xc00324efa0) Stream added, broadcasting: 5
I0201 15:03:05.412226       8 log.go:172] (0xc001ae9600) Reply frame received for 5
I0201 15:03:05.520863       8 log.go:172] (0xc001ae9600) Data frame received for 3
I0201 15:03:05.520964       8 log.go:172] (0xc00193bb80) (3) Data frame handling
I0201 15:03:05.520988       8 log.go:172] (0xc00193bb80) (3) Data frame sent
I0201 15:03:05.669880       8 log.go:172] (0xc001ae9600) Data frame received for 1
I0201 15:03:05.669956       8 log.go:172] (0xc001ae9600) (0xc00193bb80) Stream removed, broadcasting: 3
I0201 15:03:05.670032       8 log.go:172] (0xc00324ef00) (1) Data frame handling
I0201 15:03:05.670056       8 log.go:172] (0xc00324ef00) (1) Data frame sent
I0201 15:03:05.670090       8 log.go:172] (0xc001ae9600) (0xc00324efa0) Stream removed, broadcasting: 5
I0201 15:03:05.670170       8 log.go:172] (0xc001ae9600) (0xc00324ef00) Stream removed, broadcasting: 1
I0201 15:03:05.670204       8 log.go:172] (0xc001ae9600) Go away received
I0201 15:03:05.670540       8 log.go:172] (0xc001ae9600) (0xc00324ef00) Stream removed, broadcasting: 1
I0201 15:03:05.670645       8 log.go:172] (0xc001ae9600) (0xc00193bb80) Stream removed, broadcasting: 3
I0201 15:03:05.670717       8 log.go:172] (0xc001ae9600) (0xc00324efa0) Stream removed, broadcasting: 5
Feb  1 15:03:05.671: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 15:03:05.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7319" for this suite.
Feb  1 15:03:21.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 15:03:21.903: INFO: namespace pod-network-test-7319 deletion completed in 16.220170585s

• [SLOW TEST:55.505 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 15:03:21.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-5ad86691-3dae-4fa5-be16-4323fa2a0660 in namespace container-probe-2410
Feb  1 15:03:30.048: INFO: Started pod busybox-5ad86691-3dae-4fa5-be16-4323fa2a0660 in namespace container-probe-2410
STEP: checking the pod's current state and verifying that restartCount is present
Feb  1 15:03:30.051: INFO: Initial restart count of pod busybox-5ad86691-3dae-4fa5-be16-4323fa2a0660 is 0
Feb  1 15:04:22.334: INFO: Restart count of pod container-probe-2410/busybox-5ad86691-3dae-4fa5-be16-4323fa2a0660 is now 1 (52.283087183s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 15:04:22.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2410" for this suite.
Feb  1 15:04:28.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 15:04:28.563: INFO: namespace container-probe-2410 deletion completed in 6.195379279s

• [SLOW TEST:66.660 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 15:04:28.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  1 15:04:28.722: INFO: Waiting up to 5m0s for pod "pod-28ddced1-91c1-4bda-a47b-e06eca20e462" in namespace "emptydir-6736" to be "success or failure"
Feb  1 15:04:28.740: INFO: Pod "pod-28ddced1-91c1-4bda-a47b-e06eca20e462": Phase="Pending", Reason="", readiness=false. Elapsed: 17.330429ms
Feb  1 15:04:30.748: INFO: Pod "pod-28ddced1-91c1-4bda-a47b-e06eca20e462": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024816688s
Feb  1 15:04:32.755: INFO: Pod "pod-28ddced1-91c1-4bda-a47b-e06eca20e462": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03197628s
Feb  1 15:04:34.762: INFO: Pod "pod-28ddced1-91c1-4bda-a47b-e06eca20e462": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039121564s
Feb  1 15:04:36.781: INFO: Pod "pod-28ddced1-91c1-4bda-a47b-e06eca20e462": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057815816s
STEP: Saw pod success
Feb  1 15:04:36.781: INFO: Pod "pod-28ddced1-91c1-4bda-a47b-e06eca20e462" satisfied condition "success or failure"
Feb  1 15:04:36.789: INFO: Trying to get logs from node iruya-node pod pod-28ddced1-91c1-4bda-a47b-e06eca20e462 container test-container: 
STEP: delete the pod
Feb  1 15:04:36.871: INFO: Waiting for pod pod-28ddced1-91c1-4bda-a47b-e06eca20e462 to disappear
Feb  1 15:04:36.938: INFO: Pod pod-28ddced1-91c1-4bda-a47b-e06eca20e462 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 15:04:36.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6736" for this suite.
Feb  1 15:04:43.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 15:04:43.254: INFO: namespace emptydir-6736 deletion completed in 6.300647583s

• [SLOW TEST:14.689 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
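The EmptyDir case above writes a mode-0777 file as root into a default-medium emptyDir volume and expects the pod to reach "Succeeded". A sketch of an equivalent pod, assuming a busybox-style test image (the framework's actual pod spec and command differ in detail):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo             # hypothetical; the test generates a pod-<uuid> name
spec:
  restartPolicy: Never            # run to completion, yielding the Succeeded phase seen above
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                  # default medium = node-local disk; medium: Memory would be tmpfs
```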
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 15:04:43.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb  1 15:04:43.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9425'
Feb  1 15:04:45.578: INFO: stderr: ""
Feb  1 15:04:45.579: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  1 15:04:45.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9425'
Feb  1 15:04:45.826: INFO: stderr: ""
Feb  1 15:04:45.826: INFO: stdout: "update-demo-nautilus-8hdbp update-demo-nautilus-jp6nc "
Feb  1 15:04:45.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hdbp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9425'
Feb  1 15:04:46.018: INFO: stderr: ""
Feb  1 15:04:46.018: INFO: stdout: ""
Feb  1 15:04:46.018: INFO: update-demo-nautilus-8hdbp is created but not running
Feb  1 15:04:51.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9425'
Feb  1 15:04:53.426: INFO: stderr: ""
Feb  1 15:04:53.427: INFO: stdout: "update-demo-nautilus-8hdbp update-demo-nautilus-jp6nc "
Feb  1 15:04:53.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hdbp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9425'
Feb  1 15:04:53.755: INFO: stderr: ""
Feb  1 15:04:53.755: INFO: stdout: ""
Feb  1 15:04:53.755: INFO: update-demo-nautilus-8hdbp is created but not running
Feb  1 15:04:58.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9425'
Feb  1 15:04:58.900: INFO: stderr: ""
Feb  1 15:04:58.900: INFO: stdout: "update-demo-nautilus-8hdbp update-demo-nautilus-jp6nc "
Feb  1 15:04:58.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hdbp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9425'
Feb  1 15:04:59.024: INFO: stderr: ""
Feb  1 15:04:59.024: INFO: stdout: "true"
Feb  1 15:04:59.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hdbp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9425'
Feb  1 15:04:59.106: INFO: stderr: ""
Feb  1 15:04:59.106: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  1 15:04:59.106: INFO: validating pod update-demo-nautilus-8hdbp
Feb  1 15:04:59.121: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  1 15:04:59.121: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  1 15:04:59.121: INFO: update-demo-nautilus-8hdbp is verified up and running
Feb  1 15:04:59.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jp6nc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9425'
Feb  1 15:04:59.216: INFO: stderr: ""
Feb  1 15:04:59.216: INFO: stdout: "true"
Feb  1 15:04:59.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jp6nc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9425'
Feb  1 15:04:59.292: INFO: stderr: ""
Feb  1 15:04:59.292: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  1 15:04:59.292: INFO: validating pod update-demo-nautilus-jp6nc
Feb  1 15:04:59.297: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  1 15:04:59.297: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  1 15:04:59.297: INFO: update-demo-nautilus-jp6nc is verified up and running
STEP: scaling down the replication controller
Feb  1 15:04:59.299: INFO: scanned /root for discovery docs: 
Feb  1 15:04:59.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9425'
Feb  1 15:05:00.451: INFO: stderr: ""
Feb  1 15:05:00.451: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  1 15:05:00.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9425'
Feb  1 15:05:00.634: INFO: stderr: ""
Feb  1 15:05:00.634: INFO: stdout: "update-demo-nautilus-8hdbp update-demo-nautilus-jp6nc "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  1 15:05:05.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9425'
Feb  1 15:05:05.805: INFO: stderr: ""
Feb  1 15:05:05.805: INFO: stdout: "update-demo-nautilus-8hdbp "
Feb  1 15:05:05.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hdbp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9425'
Feb  1 15:05:05.981: INFO: stderr: ""
Feb  1 15:05:05.981: INFO: stdout: "true"
Feb  1 15:05:05.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hdbp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9425'
Feb  1 15:05:06.092: INFO: stderr: ""
Feb  1 15:05:06.092: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  1 15:05:06.092: INFO: validating pod update-demo-nautilus-8hdbp
Feb  1 15:05:06.097: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  1 15:05:06.097: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  1 15:05:06.097: INFO: update-demo-nautilus-8hdbp is verified up and running
STEP: scaling up the replication controller
Feb  1 15:05:06.099: INFO: scanned /root for discovery docs: 
Feb  1 15:05:06.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9425'
Feb  1 15:05:07.274: INFO: stderr: ""
Feb  1 15:05:07.274: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  1 15:05:07.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9425'
Feb  1 15:05:07.466: INFO: stderr: ""
Feb  1 15:05:07.467: INFO: stdout: "update-demo-nautilus-8hdbp update-demo-nautilus-vfzhm "
Feb  1 15:05:07.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hdbp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9425'
Feb  1 15:05:07.565: INFO: stderr: ""
Feb  1 15:05:07.565: INFO: stdout: "true"
Feb  1 15:05:07.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hdbp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9425'
Feb  1 15:05:07.723: INFO: stderr: ""
Feb  1 15:05:07.723: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  1 15:05:07.723: INFO: validating pod update-demo-nautilus-8hdbp
Feb  1 15:05:07.730: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  1 15:05:07.730: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  1 15:05:07.730: INFO: update-demo-nautilus-8hdbp is verified up and running
Feb  1 15:05:07.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vfzhm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9425'
Feb  1 15:05:07.829: INFO: stderr: ""
Feb  1 15:05:07.829: INFO: stdout: ""
Feb  1 15:05:07.829: INFO: update-demo-nautilus-vfzhm is created but not running
Feb  1 15:05:12.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9425'
Feb  1 15:05:13.140: INFO: stderr: ""
Feb  1 15:05:13.141: INFO: stdout: "update-demo-nautilus-8hdbp update-demo-nautilus-vfzhm "
Feb  1 15:05:13.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hdbp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9425'
Feb  1 15:05:13.286: INFO: stderr: ""
Feb  1 15:05:13.286: INFO: stdout: "true"
Feb  1 15:05:13.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hdbp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9425'
Feb  1 15:05:13.435: INFO: stderr: ""
Feb  1 15:05:13.435: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  1 15:05:13.435: INFO: validating pod update-demo-nautilus-8hdbp
Feb  1 15:05:13.445: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  1 15:05:13.445: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  1 15:05:13.445: INFO: update-demo-nautilus-8hdbp is verified up and running
Feb  1 15:05:13.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vfzhm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9425'
Feb  1 15:05:13.585: INFO: stderr: ""
Feb  1 15:05:13.585: INFO: stdout: ""
Feb  1 15:05:13.585: INFO: update-demo-nautilus-vfzhm is created but not running
Feb  1 15:05:18.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9425'
Feb  1 15:05:18.721: INFO: stderr: ""
Feb  1 15:05:18.721: INFO: stdout: "update-demo-nautilus-8hdbp update-demo-nautilus-vfzhm "
Feb  1 15:05:18.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hdbp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9425'
Feb  1 15:05:18.884: INFO: stderr: ""
Feb  1 15:05:18.884: INFO: stdout: "true"
Feb  1 15:05:18.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hdbp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9425'
Feb  1 15:05:18.991: INFO: stderr: ""
Feb  1 15:05:18.992: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  1 15:05:18.992: INFO: validating pod update-demo-nautilus-8hdbp
Feb  1 15:05:19.006: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  1 15:05:19.006: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  1 15:05:19.006: INFO: update-demo-nautilus-8hdbp is verified up and running
Feb  1 15:05:19.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vfzhm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9425'
Feb  1 15:05:19.152: INFO: stderr: ""
Feb  1 15:05:19.152: INFO: stdout: "true"
Feb  1 15:05:19.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vfzhm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9425'
Feb  1 15:05:19.287: INFO: stderr: ""
Feb  1 15:05:19.287: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  1 15:05:19.287: INFO: validating pod update-demo-nautilus-vfzhm
Feb  1 15:05:19.296: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  1 15:05:19.297: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  1 15:05:19.297: INFO: update-demo-nautilus-vfzhm is verified up and running
STEP: using delete to clean up resources
Feb  1 15:05:19.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9425'
Feb  1 15:05:19.414: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  1 15:05:19.414: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  1 15:05:19.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9425'
Feb  1 15:05:19.537: INFO: stderr: "No resources found.\n"
Feb  1 15:05:19.537: INFO: stdout: ""
Feb  1 15:05:19.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9425 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  1 15:05:19.652: INFO: stderr: ""
Feb  1 15:05:19.652: INFO: stdout: "update-demo-nautilus-8hdbp\nupdate-demo-nautilus-vfzhm\n"
Feb  1 15:05:20.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9425'
Feb  1 15:05:22.007: INFO: stderr: "No resources found.\n"
Feb  1 15:05:22.011: INFO: stdout: ""
Feb  1 15:05:22.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9425 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  1 15:05:22.352: INFO: stderr: ""
Feb  1 15:05:22.353: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 15:05:22.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9425" for this suite.
Feb  1 15:05:44.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 15:05:44.578: INFO: namespace kubectl-9425 deletion completed in 22.174331191s

• [SLOW TEST:61.324 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
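The Update Demo run above repeatedly drives `kubectl scale rc update-demo-nautilus --replicas=N --timeout=5m` and polls pods by the `name=update-demo` label. The replication controller it scales looks roughly like the following sketch (the name, selector label, and image match the log output; replica count, port, and other fields are assumptions):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo             # the label all of the test's kubectl queries filter on
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80
```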
SS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 15:05:44.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  1 15:05:51.878: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 15:05:51.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1781" for this suite.
Feb  1 15:05:58.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 15:05:58.125: INFO: namespace container-runtime-1781 deletion completed in 6.127687279s

• [SLOW TEST:13.546 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
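The terminated-container case above depends on `terminationMessagePolicy: FallbackToLogsOnError`: when a container fails without writing /dev/termination-log, the kubelet falls back to the tail of the container's log as the termination message, which is why the expected message "DONE" is recovered from log output. A hedged sketch of such a pod (name and image are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: term
    image: busybox
    # print to stdout and exit non-zero; nothing is written to /dev/termination-log
    command: ["/bin/sh", "-c", "echo -n DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
```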
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 15:05:58.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  1 15:05:58.188: INFO: Waiting up to 5m0s for pod "downward-api-88f06c23-2f42-432c-a959-eb58e1139ef0" in namespace "downward-api-551" to be "success or failure"
Feb  1 15:05:58.217: INFO: Pod "downward-api-88f06c23-2f42-432c-a959-eb58e1139ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 28.646471ms
Feb  1 15:06:00.226: INFO: Pod "downward-api-88f06c23-2f42-432c-a959-eb58e1139ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037828114s
Feb  1 15:06:02.233: INFO: Pod "downward-api-88f06c23-2f42-432c-a959-eb58e1139ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044552993s
Feb  1 15:06:04.246: INFO: Pod "downward-api-88f06c23-2f42-432c-a959-eb58e1139ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057780684s
Feb  1 15:06:06.254: INFO: Pod "downward-api-88f06c23-2f42-432c-a959-eb58e1139ef0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065941235s
STEP: Saw pod success
Feb  1 15:06:06.254: INFO: Pod "downward-api-88f06c23-2f42-432c-a959-eb58e1139ef0" satisfied condition "success or failure"
Feb  1 15:06:06.260: INFO: Trying to get logs from node iruya-node pod downward-api-88f06c23-2f42-432c-a959-eb58e1139ef0 container dapi-container: 
STEP: delete the pod
Feb  1 15:06:06.363: INFO: Waiting for pod downward-api-88f06c23-2f42-432c-a959-eb58e1139ef0 to disappear
Feb  1 15:06:06.373: INFO: Pod downward-api-88f06c23-2f42-432c-a959-eb58e1139ef0 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 15:06:06.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-551" for this suite.
Feb  1 15:06:12.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 15:06:12.561: INFO: namespace downward-api-551 deletion completed in 6.178358123s

• [SLOW TEST:14.435 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
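The Downward API case above exposes the node's IP to the container as an environment variable via a `fieldRef`. A minimal sketch (the container name `dapi-container` matches the log; the pod name, image, and command are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo         # hypothetical; the test uses a generated name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # resolved by the kubelet when the pod starts
```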
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 15:06:12.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  1 15:06:12.666: INFO: Waiting up to 5m0s for pod "pod-a1876a68-cc43-417c-b2f2-409f81931c0d" in namespace "emptydir-1714" to be "success or failure"
Feb  1 15:06:12.732: INFO: Pod "pod-a1876a68-cc43-417c-b2f2-409f81931c0d": Phase="Pending", Reason="", readiness=false. Elapsed: 65.87333ms
Feb  1 15:06:14.747: INFO: Pod "pod-a1876a68-cc43-417c-b2f2-409f81931c0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080999686s
Feb  1 15:06:16.762: INFO: Pod "pod-a1876a68-cc43-417c-b2f2-409f81931c0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095441549s
Feb  1 15:06:18.772: INFO: Pod "pod-a1876a68-cc43-417c-b2f2-409f81931c0d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105663597s
Feb  1 15:06:20.793: INFO: Pod "pod-a1876a68-cc43-417c-b2f2-409f81931c0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.126389284s
STEP: Saw pod success
Feb  1 15:06:20.793: INFO: Pod "pod-a1876a68-cc43-417c-b2f2-409f81931c0d" satisfied condition "success or failure"
Feb  1 15:06:20.799: INFO: Trying to get logs from node iruya-node pod pod-a1876a68-cc43-417c-b2f2-409f81931c0d container test-container: 
STEP: delete the pod
Feb  1 15:06:20.904: INFO: Waiting for pod pod-a1876a68-cc43-417c-b2f2-409f81931c0d to disappear
Feb  1 15:06:20.914: INFO: Pod pod-a1876a68-cc43-417c-b2f2-409f81931c0d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 15:06:20.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1714" for this suite.
Feb  1 15:06:28.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 15:06:29.117: INFO: namespace emptydir-1714 deletion completed in 8.194697554s

• [SLOW TEST:16.554 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 15:06:29.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  1 15:06:29.300: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"9e9d34c7-c677-4de7-a676-3a0b053ba035", Controller:(*bool)(0xc0029692b2), BlockOwnerDeletion:(*bool)(0xc0029692b3)}}
Feb  1 15:06:29.324: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"d1901ac5-23d1-40ea-ab76-745e21ea464d", Controller:(*bool)(0xc0033eb442), BlockOwnerDeletion:(*bool)(0xc0033eb443)}}
Feb  1 15:06:29.335: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"5820180b-5b50-4ae1-8b87-24af7a2fcc1e", Controller:(*bool)(0xc002a4204a), BlockOwnerDeletion:(*bool)(0xc002a4204b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 15:06:34.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3903" for this suite.
Feb  1 15:06:40.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 15:06:40.698: INFO: namespace gc-3903 deletion completed in 6.181806129s

• [SLOW TEST:11.581 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
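The garbage-collector case above wires three pods into an ownership cycle (pod3 owns pod1, pod1 owns pod2, pod2 owns pod3) and verifies deletion is not deadlocked. One link of that cycle, sketched as a manifest fragment (the owner name and UID come from the pod1 log line above; the container is an illustrative placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3                    # pod1's owner, closing the circle
    uid: 9e9d34c7-c677-4de7-a676-3a0b053ba035
    controller: true
    blockOwnerDeletion: true
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # assumed image; the test uses a minimal container
```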
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 15:06:40.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb  1 15:06:40.830: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4553,SelfLink:/api/v1/namespaces/watch-4553/configmaps/e2e-watch-test-configmap-a,UID:e4e8679c-082e-4496-9322-608c2b9d840b,ResourceVersion:22707797,Generation:0,CreationTimestamp:2020-02-01 15:06:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  1 15:06:40.831: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4553,SelfLink:/api/v1/namespaces/watch-4553/configmaps/e2e-watch-test-configmap-a,UID:e4e8679c-082e-4496-9322-608c2b9d840b,ResourceVersion:22707797,Generation:0,CreationTimestamp:2020-02-01 15:06:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb  1 15:06:50.862: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4553,SelfLink:/api/v1/namespaces/watch-4553/configmaps/e2e-watch-test-configmap-a,UID:e4e8679c-082e-4496-9322-608c2b9d840b,ResourceVersion:22707812,Generation:0,CreationTimestamp:2020-02-01 15:06:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  1 15:06:50.864: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4553,SelfLink:/api/v1/namespaces/watch-4553/configmaps/e2e-watch-test-configmap-a,UID:e4e8679c-082e-4496-9322-608c2b9d840b,ResourceVersion:22707812,Generation:0,CreationTimestamp:2020-02-01 15:06:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb  1 15:07:00.880: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4553,SelfLink:/api/v1/namespaces/watch-4553/configmaps/e2e-watch-test-configmap-a,UID:e4e8679c-082e-4496-9322-608c2b9d840b,ResourceVersion:22707826,Generation:0,CreationTimestamp:2020-02-01 15:06:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  1 15:07:00.880: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4553,SelfLink:/api/v1/namespaces/watch-4553/configmaps/e2e-watch-test-configmap-a,UID:e4e8679c-082e-4496-9322-608c2b9d840b,ResourceVersion:22707826,Generation:0,CreationTimestamp:2020-02-01 15:06:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb  1 15:07:10.899: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4553,SelfLink:/api/v1/namespaces/watch-4553/configmaps/e2e-watch-test-configmap-a,UID:e4e8679c-082e-4496-9322-608c2b9d840b,ResourceVersion:22707840,Generation:0,CreationTimestamp:2020-02-01 15:06:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  1 15:07:10.900: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4553,SelfLink:/api/v1/namespaces/watch-4553/configmaps/e2e-watch-test-configmap-a,UID:e4e8679c-082e-4496-9322-608c2b9d840b,ResourceVersion:22707840,Generation:0,CreationTimestamp:2020-02-01 15:06:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb  1 15:07:20.925: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4553,SelfLink:/api/v1/namespaces/watch-4553/configmaps/e2e-watch-test-configmap-b,UID:3980355e-26bc-4448-8e4b-d759492b0f76,ResourceVersion:22707854,Generation:0,CreationTimestamp:2020-02-01 15:07:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  1 15:07:20.925: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4553,SelfLink:/api/v1/namespaces/watch-4553/configmaps/e2e-watch-test-configmap-b,UID:3980355e-26bc-4448-8e4b-d759492b0f76,ResourceVersion:22707854,Generation:0,CreationTimestamp:2020-02-01 15:07:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb  1 15:07:30.938: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4553,SelfLink:/api/v1/namespaces/watch-4553/configmaps/e2e-watch-test-configmap-b,UID:3980355e-26bc-4448-8e4b-d759492b0f76,ResourceVersion:22707868,Generation:0,CreationTimestamp:2020-02-01 15:07:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  1 15:07:30.939: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4553,SelfLink:/api/v1/namespaces/watch-4553/configmaps/e2e-watch-test-configmap-b,UID:3980355e-26bc-4448-8e4b-d759492b0f76,ResourceVersion:22707868,Generation:0,CreationTimestamp:2020-02-01 15:07:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 15:07:40.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4553" for this suite.
Feb  1 15:07:47.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 15:07:47.167: INFO: namespace watch-4553 deletion completed in 6.20787317s

• [SLOW TEST:66.468 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  1 15:07:47.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-8b0060fa-9452-4f10-bc76-50278611dd5e
STEP: Creating a pod to test consume configMaps
Feb  1 15:07:47.382: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e363b0f3-d3a2-4469-9c56-8bb9cd296295" in namespace "projected-7674" to be "success or failure"
Feb  1 15:07:47.393: INFO: Pod "pod-projected-configmaps-e363b0f3-d3a2-4469-9c56-8bb9cd296295": Phase="Pending", Reason="", readiness=false. Elapsed: 11.536737ms
Feb  1 15:07:49.406: INFO: Pod "pod-projected-configmaps-e363b0f3-d3a2-4469-9c56-8bb9cd296295": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02414551s
Feb  1 15:07:51.414: INFO: Pod "pod-projected-configmaps-e363b0f3-d3a2-4469-9c56-8bb9cd296295": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032302855s
Feb  1 15:07:53.429: INFO: Pod "pod-projected-configmaps-e363b0f3-d3a2-4469-9c56-8bb9cd296295": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047133061s
Feb  1 15:07:55.446: INFO: Pod "pod-projected-configmaps-e363b0f3-d3a2-4469-9c56-8bb9cd296295": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063733608s
STEP: Saw pod success
Feb  1 15:07:55.446: INFO: Pod "pod-projected-configmaps-e363b0f3-d3a2-4469-9c56-8bb9cd296295" satisfied condition "success or failure"
Feb  1 15:07:55.450: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-e363b0f3-d3a2-4469-9c56-8bb9cd296295 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  1 15:07:55.543: INFO: Waiting for pod pod-projected-configmaps-e363b0f3-d3a2-4469-9c56-8bb9cd296295 to disappear
Feb  1 15:07:55.560: INFO: Pod pod-projected-configmaps-e363b0f3-d3a2-4469-9c56-8bb9cd296295 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  1 15:07:55.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7674" for this suite.
Feb  1 15:08:01.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 15:08:01.733: INFO: namespace projected-7674 deletion completed in 6.159574683s

• [SLOW TEST:14.566 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Feb  1 15:08:01.734: INFO: Running AfterSuite actions on all nodes
Feb  1 15:08:01.734: INFO: Running AfterSuite actions on node 1
Feb  1 15:08:01.734: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 7909.656 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS