I1226 12:56:08.786334 9 e2e.go:243] Starting e2e run "d74e562d-b9c4-4da8-a239-ac6b8953e07c" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1577364967 - Will randomize all specs
Will run 215 of 4412 specs

Dec 26 12:56:09.000: INFO: >>> kubeConfig: /root/.kube/config
Dec 26 12:56:09.006: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 26 12:56:09.041: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 26 12:56:09.077: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 26 12:56:09.077: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 26 12:56:09.077: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 26 12:56:09.085: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 26 12:56:09.085: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 26 12:56:09.085: INFO: e2e test version: v1.15.7
Dec 26 12:56:09.087: INFO: kube-apiserver version: v1.15.1
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 12:56:09.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
Dec 26 12:56:09.249: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 26 12:56:09.251: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 26 12:56:09.256: INFO: Waiting for terminating namespaces to be deleted...
Dec 26 12:56:09.258: INFO: Logging pods the kubelet thinks are on node iruya-node before test
Dec 26 12:56:09.270: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 26 12:56:09.270: INFO: Container weave ready: true, restart count 0
Dec 26 12:56:09.270: INFO: Container weave-npc ready: true, restart count 0
Dec 26 12:56:09.270: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Dec 26 12:56:09.270: INFO: Container kube-proxy ready: true, restart count 0
Dec 26 12:56:09.270: INFO: Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 26 12:56:09.279: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Dec 26 12:56:09.279: INFO: Container kube-apiserver ready: true, restart count 0
Dec 26 12:56:09.279: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Dec 26 12:56:09.279: INFO: Container kube-scheduler ready: true, restart count 7
Dec 26 12:56:09.279: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 26 12:56:09.279: INFO: Container coredns ready: true, restart count 0
Dec 26 12:56:09.279: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Dec 26 12:56:09.279: INFO: Container etcd ready: true, restart count 0
Dec 26 12:56:09.279: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 26 12:56:09.279: INFO: Container weave ready: true, restart count 0
Dec 26 12:56:09.279: INFO: Container weave-npc ready: true, restart count 0
Dec 26 12:56:09.279: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 26 12:56:09.279: INFO: Container coredns ready: true, restart count 0
Dec 26 12:56:09.279: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Dec 26 12:56:09.279: INFO: Container kube-controller-manager ready: true, restart count 10
Dec 26 12:56:09.279: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Dec 26 12:56:09.279: INFO: Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Dec 26 12:56:09.436: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 26 12:56:09.436: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 26 12:56:09.436: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Dec 26 12:56:09.436: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Dec 26 12:56:09.436: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Dec 26 12:56:09.436: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Dec 26 12:56:09.436: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Dec 26 12:56:09.436: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 26 12:56:09.436: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Dec 26 12:56:09.436: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-4ea8a33d-ecc7-4ff9-b21a-6e0ad9dd0455.15e3ed151ef5c261], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9542/filler-pod-4ea8a33d-ecc7-4ff9-b21a-6e0ad9dd0455 to iruya-server-sfge57q7djm7]
STEP: Considering event: Type = [Normal], Name = [filler-pod-4ea8a33d-ecc7-4ff9-b21a-6e0ad9dd0455.15e3ed16c916a040], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-4ea8a33d-ecc7-4ff9-b21a-6e0ad9dd0455.15e3ed17c26906be], Reason = [Created], Message = [Created container filler-pod-4ea8a33d-ecc7-4ff9-b21a-6e0ad9dd0455]
STEP: Considering event: Type = [Normal], Name = [filler-pod-4ea8a33d-ecc7-4ff9-b21a-6e0ad9dd0455.15e3ed17ea5343a2], Reason = [Started], Message = [Started container filler-pod-4ea8a33d-ecc7-4ff9-b21a-6e0ad9dd0455]
STEP: Considering event: Type = [Normal], Name = [filler-pod-aa07cf9b-e9ee-4ca8-8d6b-6697bea9865e.15e3ed151f040b1d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9542/filler-pod-aa07cf9b-e9ee-4ca8-8d6b-6697bea9865e to iruya-node]
STEP: Considering event: Type = [Normal], Name = [filler-pod-aa07cf9b-e9ee-4ca8-8d6b-6697bea9865e.15e3ed169357d546], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-aa07cf9b-e9ee-4ca8-8d6b-6697bea9865e.15e3ed176da68c99], Reason = [Created], Message = [Created container filler-pod-aa07cf9b-e9ee-4ca8-8d6b-6697bea9865e]
STEP: Considering event: Type = [Normal], Name = [filler-pod-aa07cf9b-e9ee-4ca8-8d6b-6697bea9865e.15e3ed1795e35062], Reason = [Started], Message = [Started container filler-pod-aa07cf9b-e9ee-4ca8-8d6b-6697bea9865e]
STEP: Considering event: Type = [Warning], Name = [additional-pod.15e3ed1866829209], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 12:56:24.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9542" for this suite.
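The Warning event above shows the predicate under test: once the filler pods have claimed most of each node's allocatable CPU, one more pod fails with "Insufficient cpu". A minimal Go sketch, using the k8s.io/api types, of a pod of the same shape as those filler pods (the pod name and the 800m request are illustrative, not taken from the test source):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fillerPod builds a pod that requests a fixed amount of CPU and is pinned
// to one node via the temporary "node" label the test applied above.
func fillerPod(name, nodeName, cpu string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"node": nodeName},
			Containers: []corev1.Container{{
				Name:  name,
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					// Requests, not limits, are what the scheduler
					// sums up for its "Insufficient cpu" check.
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse(cpu),
					},
				},
			}},
		},
	}
}

func main() {
	p := fillerPod("filler-pod-example", "iruya-node", "800m")
	fmt.Println(p.Spec.Containers[0].Resources.Requests.Cpu())
}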
Dec 26 12:56:35.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:56:35.955: INFO: namespace sched-pred-9542 deletion completed in 11.227435678s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:26.869 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 12:56:35.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-7j59
STEP: Creating a pod to test atomic-volume-subpath
Dec 26 12:56:36.260: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7j59" in namespace "subpath-3091" to be "success or failure"
Dec 26 12:56:36.265: INFO: Pod "pod-subpath-test-configmap-7j59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.370356ms
Dec 26 12:56:38.275: INFO: Pod "pod-subpath-test-configmap-7j59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014939054s
Dec 26 12:56:40.295: INFO: Pod "pod-subpath-test-configmap-7j59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034445121s
Dec 26 12:56:42.302: INFO: Pod "pod-subpath-test-configmap-7j59": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041893973s
Dec 26 12:56:44.320: INFO: Pod "pod-subpath-test-configmap-7j59": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05954783s
Dec 26 12:56:46.332: INFO: Pod "pod-subpath-test-configmap-7j59": Phase="Pending", Reason="", readiness=false. Elapsed: 10.071883986s
Dec 26 12:56:48.340: INFO: Pod "pod-subpath-test-configmap-7j59": Phase="Running", Reason="", readiness=true. Elapsed: 12.080091563s
Dec 26 12:56:50.354: INFO: Pod "pod-subpath-test-configmap-7j59": Phase="Running", Reason="", readiness=true. Elapsed: 14.093814232s
Dec 26 12:56:52.368: INFO: Pod "pod-subpath-test-configmap-7j59": Phase="Running", Reason="", readiness=true. Elapsed: 16.107594548s
Dec 26 12:56:54.375: INFO: Pod "pod-subpath-test-configmap-7j59": Phase="Running", Reason="", readiness=true. Elapsed: 18.114638874s
Dec 26 12:56:56.384: INFO: Pod "pod-subpath-test-configmap-7j59": Phase="Running", Reason="", readiness=true. Elapsed: 20.12346587s
Dec 26 12:56:58.395: INFO: Pod "pod-subpath-test-configmap-7j59": Phase="Running", Reason="", readiness=true. Elapsed: 22.134897283s
Dec 26 12:57:00.410: INFO: Pod "pod-subpath-test-configmap-7j59": Phase="Running", Reason="", readiness=true. Elapsed: 24.149962555s
Dec 26 12:57:02.424: INFO: Pod "pod-subpath-test-configmap-7j59": Phase="Running", Reason="", readiness=true. Elapsed: 26.163887275s
Dec 26 12:57:04.434: INFO: Pod "pod-subpath-test-configmap-7j59": Phase="Running", Reason="", readiness=true. Elapsed: 28.173757306s
Dec 26 12:57:07.083: INFO: Pod "pod-subpath-test-configmap-7j59": Phase="Running", Reason="", readiness=true. Elapsed: 30.822202863s
Dec 26 12:57:09.088: INFO: Pod "pod-subpath-test-configmap-7j59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.827715963s
STEP: Saw pod success
Dec 26 12:57:09.088: INFO: Pod "pod-subpath-test-configmap-7j59" satisfied condition "success or failure"
Dec 26 12:57:09.092: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-7j59 container test-container-subpath-configmap-7j59:
STEP: delete the pod
Dec 26 12:57:09.155: INFO: Waiting for pod pod-subpath-test-configmap-7j59 to disappear
Dec 26 12:57:09.171: INFO: Pod pod-subpath-test-configmap-7j59 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-7j59
Dec 26 12:57:09.172: INFO: Deleting pod "pod-subpath-test-configmap-7j59" in namespace "subpath-3091"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 12:57:09.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3091" for this suite.
Dec 26 12:57:15.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:57:15.303: INFO: namespace subpath-3091 deletion completed in 6.111444436s
• [SLOW TEST:39.346 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version
  should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 12:57:15.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 26 12:57:15.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Dec 26 12:57:15.601: INFO: stderr: ""
Dec 26 12:57:15.601: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 12:57:15.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5831" for this suite.
Dec 26 12:57:21.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:57:21.758: INFO: namespace kubectl-5831 deletion completed in 6.149849308s
• [SLOW TEST:6.455 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 12:57:21.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 12:57:21.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2001" for this suite.
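The QOS class verified above is derived entirely from the pod's resource requests and limits. A simplified single-container sketch of the classification rule (Guaranteed when limits equal requests for every resource, BestEffort when nothing is set, Burstable otherwise); this is an approximation for illustration, not the actual apiserver code:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// qosFor approximates QOS classification for one container.
func qosFor(c corev1.Container) corev1.PodQOSClass {
	req, lim := c.Resources.Requests, c.Resources.Limits
	if len(req) == 0 && len(lim) == 0 {
		return corev1.PodQOSBestEffort
	}
	if len(req) != len(lim) {
		return corev1.PodQOSBurstable
	}
	for name, r := range req {
		l, ok := lim[name]
		if !ok || l.Cmp(r) != 0 {
			return corev1.PodQOSBurstable
		}
	}
	return corev1.PodQOSGuaranteed
}

func main() {
	guaranteed := corev1.Container{
		Name: "agnhost", // illustrative container
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("100m")},
			Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("100m")},
		},
	}
	fmt.Println(qosFor(guaranteed))                     // Guaranteed
	fmt.Println(qosFor(corev1.Container{Name: "bare"})) // BestEffort
}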
Dec 26 12:57:44.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:57:44.221: INFO: namespace pods-2001 deletion completed in 22.247460085s
• [SLOW TEST:22.462 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 12:57:44.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 26 12:57:53.407: INFO: 10 pods remaining
Dec 26 12:57:53.407: INFO: 10 pods have nil DeletionTimestamp
Dec 26 12:57:53.407: INFO:
Dec 26 12:57:54.432: INFO: 0 pods remaining
Dec 26 12:57:54.432: INFO: 0 pods have nil DeletionTimestamp
Dec 26 12:57:54.432: INFO:
STEP: Gathering metrics
W1226 12:57:55.192367 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 26 12:57:55.192: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 12:57:55.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8730" for this suite.
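The "deleteOptions says so" part is the deletion propagation policy: with Foreground propagation, the apiserver keeps the RC (with a deletion timestamp set) until the garbage collector has removed every dependent pod, which is exactly the countdown traced above. A sketch of constructing those options with the apimachinery types (the clientset call itself is omitted):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Foreground propagation: the owner object outlives its dependents
	// and is only deleted after all of them are gone.
	policy := metav1.DeletePropagationForeground
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}
	fmt.Println(*opts.PropagationPolicy) // Foreground
}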
Dec 26 12:58:07.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:58:07.521: INFO: namespace gc-8730 deletion completed in 12.324687169s
• [SLOW TEST:23.300 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 12:58:07.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-pxp8
STEP: Creating a pod to test atomic-volume-subpath
Dec 26 12:58:07.693: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-pxp8" in namespace "subpath-9186" to be "success or failure"
Dec 26 12:58:07.735: INFO: Pod "pod-subpath-test-downwardapi-pxp8": Phase="Pending", Reason="", readiness=false. Elapsed: 42.273948ms
Dec 26 12:58:09.746: INFO: Pod "pod-subpath-test-downwardapi-pxp8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05293728s
Dec 26 12:58:11.819: INFO: Pod "pod-subpath-test-downwardapi-pxp8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126057653s
Dec 26 12:58:13.844: INFO: Pod "pod-subpath-test-downwardapi-pxp8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150746831s
Dec 26 12:58:15.855: INFO: Pod "pod-subpath-test-downwardapi-pxp8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.162258154s
Dec 26 12:58:17.889: INFO: Pod "pod-subpath-test-downwardapi-pxp8": Phase="Running", Reason="", readiness=true. Elapsed: 10.19601762s
Dec 26 12:58:19.905: INFO: Pod "pod-subpath-test-downwardapi-pxp8": Phase="Running", Reason="", readiness=true. Elapsed: 12.212488936s
Dec 26 12:58:21.913: INFO: Pod "pod-subpath-test-downwardapi-pxp8": Phase="Running", Reason="", readiness=true. Elapsed: 14.220212816s
Dec 26 12:58:23.942: INFO: Pod "pod-subpath-test-downwardapi-pxp8": Phase="Running", Reason="", readiness=true. Elapsed: 16.249130618s
Dec 26 12:58:25.973: INFO: Pod "pod-subpath-test-downwardapi-pxp8": Phase="Running", Reason="", readiness=true. Elapsed: 18.280695822s
Dec 26 12:58:27.981: INFO: Pod "pod-subpath-test-downwardapi-pxp8": Phase="Running", Reason="", readiness=true. Elapsed: 20.287802575s
Dec 26 12:58:29.986: INFO: Pod "pod-subpath-test-downwardapi-pxp8": Phase="Running", Reason="", readiness=true. Elapsed: 22.293096811s
Dec 26 12:58:32.040: INFO: Pod "pod-subpath-test-downwardapi-pxp8": Phase="Running", Reason="", readiness=true. Elapsed: 24.34724528s
Dec 26 12:58:34.051: INFO: Pod "pod-subpath-test-downwardapi-pxp8": Phase="Running", Reason="", readiness=true. Elapsed: 26.358048939s
Dec 26 12:58:37.393: INFO: Pod "pod-subpath-test-downwardapi-pxp8": Phase="Running", Reason="", readiness=true. Elapsed: 29.699826155s
Dec 26 12:58:39.408: INFO: Pod "pod-subpath-test-downwardapi-pxp8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.715190279s
STEP: Saw pod success
Dec 26 12:58:39.408: INFO: Pod "pod-subpath-test-downwardapi-pxp8" satisfied condition "success or failure"
Dec 26 12:58:39.414: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-pxp8 container test-container-subpath-downwardapi-pxp8:
STEP: delete the pod
Dec 26 12:58:39.629: INFO: Waiting for pod pod-subpath-test-downwardapi-pxp8 to disappear
Dec 26 12:58:39.693: INFO: Pod pod-subpath-test-downwardapi-pxp8 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-pxp8
Dec 26 12:58:39.693: INFO: Deleting pod "pod-subpath-test-downwardapi-pxp8" in namespace "subpath-9186"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 12:58:39.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9186" for this suite.
Dec 26 12:58:45.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:58:45.932: INFO: namespace subpath-9186 deletion completed in 6.230880259s
• [SLOW TEST:38.411 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 12:58:45.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 26 12:58:46.068: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 26 12:58:46.085: INFO: Waiting for terminating namespaces to be deleted...
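The two Subpath specs that just completed mount a single key of an atomically-written volume (configMap earlier, downward API here) at a subPath inside the container. A sketch of the relevant spec fragments using the k8s.io/api types (volume name, mount path, and image are illustrative; the suite uses its own test images):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A downward API volume exposing the pod's name as the file "podname".
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				}},
			},
		},
	}
	container := corev1.Container{
		Name:  "test-container",
		Image: "busybox",
		VolumeMounts: []corev1.VolumeMount{{
			Name:      vol.Name,
			MountPath: "/etc/podname",
			SubPath:   "podname", // mount just one file out of the volume
		}},
	}
	fmt.Println(container.VolumeMounts[0].SubPath)
}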
Dec 26 12:58:46.095: INFO: Logging pods the kubelet thinks are on node iruya-node before test
Dec 26 12:58:46.118: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Dec 26 12:58:46.118: INFO: Container kube-proxy ready: true, restart count 0
Dec 26 12:58:46.118: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 26 12:58:46.118: INFO: Container weave ready: true, restart count 0
Dec 26 12:58:46.118: INFO: Container weave-npc ready: true, restart count 0
Dec 26 12:58:46.118: INFO: Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 26 12:58:46.134: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Dec 26 12:58:46.134: INFO: Container kube-scheduler ready: true, restart count 7
Dec 26 12:58:46.134: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 26 12:58:46.134: INFO: Container coredns ready: true, restart count 0
Dec 26 12:58:46.134: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Dec 26 12:58:46.134: INFO: Container etcd ready: true, restart count 0
Dec 26 12:58:46.134: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 26 12:58:46.134: INFO: Container weave ready: true, restart count 0
Dec 26 12:58:46.134: INFO: Container weave-npc ready: true, restart count 0
Dec 26 12:58:46.134: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 26 12:58:46.134: INFO: Container coredns ready: true, restart count 0
Dec 26 12:58:46.134: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Dec 26 12:58:46.134: INFO: Container kube-controller-manager ready: true, restart count 10
Dec 26 12:58:46.134: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Dec 26 12:58:46.134: INFO: Container kube-proxy ready: true, restart count 0
Dec 26 12:58:46.134: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Dec 26 12:58:46.134: INFO: Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-3408773d-3184-4f31-91c8-ca64b2b785a5 42
STEP: Trying to relaunch the pod, now with labels.
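Relaunching "now with labels" means the new pod carries a nodeSelector matching the random label just applied to iruya-node, so the scheduler may only place it there. A sketch (the label key and the value 42 are the ones logged above; the pod name and image are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			// Only nodes carrying this exact label/value pair are
			// feasible placements for the pod.
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-3408773d-3184-4f31-91c8-ca64b2b785a5": "42",
			},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
	fmt.Println(pod.Spec.NodeSelector)
}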
STEP: removing the label kubernetes.io/e2e-3408773d-3184-4f31-91c8-ca64b2b785a5 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-3408773d-3184-4f31-91c8-ca64b2b785a5
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 12:59:02.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-708" for this suite.
Dec 26 12:59:16.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:59:16.566: INFO: namespace sched-pred-708 deletion completed in 14.183555537s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:30.634 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 12:59:16.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-86720d03-8b9b-4239-b531-e7acbbf1573f
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-86720d03-8b9b-4239-b531-e7acbbf1573f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 12:59:28.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2913" for this suite.
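The projected-configMap spec relies on the kubelet eventually rewriting the mounted files after the ConfigMap object changes; "waiting to observe update in volume" polls the container's view of the file until the new value appears. A sketch of the projected volume source the test builds (only the ConfigMap name is taken from the log; the volume name is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			// A projected volume can merge several sources (configMaps,
			// secrets, downward API) into one directory; here just one.
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test-upd-86720d03-8b9b-4239-b531-e7acbbf1573f",
						},
					},
				}},
			},
		},
	}
	fmt.Println(vol.Name)
}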
Dec 26 12:59:52.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 12:59:53.067: INFO: namespace projected-2913 deletion completed in 24.131884827s
• [SLOW TEST:36.500 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 12:59:53.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 26 13:00:01.751: INFO: Successfully updated pod "pod-update-activedeadlineseconds-43acc063-d320-4d3b-80c0-076c74753c9e"
Dec 26 13:00:01.752: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-43acc063-d320-4d3b-80c0-076c74753c9e" in namespace "pods-264" to be "terminated due to deadline exceeded"
Dec 26 13:00:01.855: INFO: Pod "pod-update-activedeadlineseconds-43acc063-d320-4d3b-80c0-076c74753c9e": Phase="Running", Reason="", readiness=true. Elapsed: 103.30719ms
Dec 26 13:00:03.862: INFO: Pod "pod-update-activedeadlineseconds-43acc063-d320-4d3b-80c0-076c74753c9e": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.110050345s
Dec 26 13:00:03.862: INFO: Pod "pod-update-activedeadlineseconds-43acc063-d320-4d3b-80c0-076c74753c9e" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 13:00:03.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-264" for this suite.
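spec.activeDeadlineSeconds is one of the few pod spec fields that may be changed on a running pod; once the deadline elapses (measured from the pod's start time), the kubelet fails the pod with Reason=DeadlineExceeded, which is the Running-to-Failed transition observed above. A sketch of setting the field (the 5-second value is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	var pod corev1.Pod
	// Allowed runtime in seconds, relative to the pod's StartTime; after
	// this the kubelet actively terminates the pod and marks it Failed.
	deadline := int64(5)
	pod.Spec.ActiveDeadlineSeconds = &deadline
	fmt.Println(*pod.Spec.ActiveDeadlineSeconds)
}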
Dec 26 13:00:09.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 13:00:10.048: INFO: namespace pods-264 deletion completed in 6.181341235s
• [SLOW TEST:16.980 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace
  should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 13:00:10.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 26 13:00:10.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-36'
Dec 26 13:00:12.794: INFO: stderr: ""
Dec 26 13:00:12.794: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Dec 26 13:00:22.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-36 -o json'
Dec 26 13:00:23.012: INFO: stderr: ""
Dec 26 13:00:23.012: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-12-26T13:00:12Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-36\",\n \"resourceVersion\": \"18138647\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-36/pods/e2e-test-nginx-pod\",\n \"uid\": \"603b5bda-a916-4577-83c9-eea44c3660d8\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-rqdgj\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-rqdgj\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-rqdgj\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-26T13:00:12Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-26T13:00:20Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-26T13:00:20Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-26T13:00:12Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://73fab898c6ef9a9c04dd5699382e083177bde4764794d127594c1cb7054608a6\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-12-26T13:00:19Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.3.65\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.1\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2019-12-26T13:00:12Z\"\n }\n}\n"
STEP: replace the image in the pod
Dec 26 13:00:23.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-36'
Dec 26 13:00:23.424: INFO: stderr: ""
Dec 26 13:00:23.425: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Dec 26 13:00:23.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-36'
Dec 26 13:00:30.795: INFO: stderr: ""
Dec 26 13:00:30.795: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 13:00:30.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-36" for this suite.
Dec 26 13:00:36.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 13:00:36.995: INFO: namespace kubectl-36 deletion completed in 6.15110792s
• [SLOW TEST:26.947 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 13:00:36.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 26 13:00:37.098: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 26 13:00:37.139: INFO: Number of nodes with available pods: 0
Dec 26 13:00:37.139: INFO: Node iruya-node is running more than one daemon pod
Dec 26 13:00:38.608: INFO: Number of nodes with available pods: 0
Dec 26 13:00:38.608: INFO: Node iruya-node is running more than one daemon pod
Dec 26 13:00:39.157: INFO: Number of nodes with available pods: 0
Dec 26 13:00:39.157: INFO: Node iruya-node is running more than one daemon pod
Dec 26 13:00:40.226: INFO: Number of nodes with available pods: 0
Dec 26 13:00:40.226: INFO: Node iruya-node is running more than one daemon pod
Dec 26 13:00:41.159: INFO: Number of nodes with available pods: 0
Dec 26 13:00:41.159: INFO: Node iruya-node is running more than one daemon pod
Dec 26 13:00:43.499: INFO: Number of nodes with available pods: 0
Dec 26 13:00:43.500: INFO: Node iruya-node is running more than one daemon pod
Dec 26 13:00:44.304: INFO: Number of nodes with available pods: 0
Dec 26 13:00:44.304: INFO: Node iruya-node is running more than one daemon pod
Dec 26 13:00:45.707: INFO: Number of nodes with available pods: 0
Dec 26 13:00:45.707: INFO: Node iruya-node is running more than one daemon pod
Dec 26 13:00:46.193: INFO: Number of nodes with available pods: 0
Dec 26 13:00:46.194: INFO: Node iruya-node is running more than one daemon pod
Dec 26 13:00:47.158: INFO: Number of nodes with available pods: 1
Dec 26 13:00:47.158: INFO: Node iruya-node is running more than one daemon pod
Dec 26 13:00:48.163: INFO: Number of nodes with available pods: 2
Dec 26 13:00:48.163: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 26 13:00:48.291: INFO: Wrong image for pod: daemon-set-jwblv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:00:48.291: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:00:49.661: INFO: Wrong image for pod: daemon-set-jwblv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:00:49.661: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:00:50.314: INFO: Wrong image for pod: daemon-set-jwblv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:00:50.314: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:00:51.314: INFO: Wrong image for pod: daemon-set-jwblv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:00:51.314: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:00:52.326: INFO: Wrong image for pod: daemon-set-jwblv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:00:52.326: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:00:53.315: INFO: Wrong image for pod: daemon-set-jwblv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:00:53.315: INFO: Pod daemon-set-jwblv is not available
Dec 26 13:00:53.315: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:00:54.342: INFO: Wrong image for pod: daemon-set-jwblv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:00:54.342: INFO: Pod daemon-set-jwblv is not available
Dec 26 13:00:54.342: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:00:55.316: INFO: Wrong image for pod: daemon-set-jwblv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:00:55.316: INFO: Pod daemon-set-jwblv is not available
Dec 26 13:00:55.316: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:00:56.323: INFO: Wrong image for pod: daemon-set-jwblv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:00:56.324: INFO: Pod daemon-set-jwblv is not available
Dec 26 13:00:56.324: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:00:57.314: INFO: Wrong image for pod: daemon-set-jwblv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:00:57.314: INFO: Pod daemon-set-jwblv is not available
Dec 26 13:00:57.314: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:00:58.329: INFO: Pod daemon-set-bzhv5 is not available
Dec 26 13:00:58.329: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:00:59.426: INFO: Pod daemon-set-bzhv5 is not available
Dec 26 13:00:59.426: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:01:00.321: INFO: Pod daemon-set-bzhv5 is not available
Dec 26 13:01:00.321: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:01:01.314: INFO: Pod daemon-set-bzhv5 is not available
Dec 26 13:01:01.314: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:01:03.079: INFO: Pod daemon-set-bzhv5 is not available
Dec 26 13:01:03.079: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:01:03.546: INFO: Pod daemon-set-bzhv5 is not available
Dec 26 13:01:03.547: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:01:04.318: INFO: Pod daemon-set-bzhv5 is not available
Dec 26 13:01:04.319: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:01:05.315: INFO: Pod daemon-set-bzhv5 is not available
Dec 26 13:01:05.315: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:01:06.316: INFO: Pod daemon-set-bzhv5 is not available
Dec 26 13:01:06.316: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:01:07.374: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:01:08.334: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:01:09.387: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:01:10.318: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:01:11.314: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:01:12.312: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:01:12.313: INFO: Pod daemon-set-nbcps is not available
Dec 26 13:01:13.320: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:01:13.320: INFO: Pod daemon-set-nbcps is not available
Dec 26 13:01:14.317: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:01:14.317: INFO: Pod daemon-set-nbcps is not available
Dec 26 13:01:15.313: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:01:15.313: INFO: Pod daemon-set-nbcps is not available
Dec 26 13:01:16.314: INFO: Wrong image for pod: daemon-set-nbcps. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 26 13:01:16.314: INFO: Pod daemon-set-nbcps is not available
Dec 26 13:01:17.323: INFO: Pod daemon-set-wwvnv is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 26 13:01:17.371: INFO: Number of nodes with available pods: 1
Dec 26 13:01:17.371: INFO: Node iruya-node is running more than one daemon pod
Dec 26 13:01:18.391: INFO: Number of nodes with available pods: 1
Dec 26 13:01:18.392: INFO: Node iruya-node is running more than one daemon pod
Dec 26 13:01:19.493: INFO: Number of nodes with available pods: 1
Dec 26 13:01:19.494: INFO: Node iruya-node is running more than one daemon pod
Dec 26 13:01:20.388: INFO: Number of nodes with available pods: 1
Dec 26 13:01:20.388: INFO: Node iruya-node is running more than one daemon pod
Dec 26 13:01:21.481: INFO: Number of nodes with available pods: 1
Dec 26 13:01:21.481: INFO: Node iruya-node is running more than one daemon pod
Dec 26 13:01:22.392: INFO: Number of nodes with available pods: 1
Dec 26 13:01:22.392: INFO: Node iruya-node is running more than one daemon pod
Dec 26 13:01:23.392: INFO: Number of nodes with available pods: 1
Dec 26 13:01:23.392: INFO: Node iruya-node is running more than one daemon pod
Dec 26 13:01:24.384: INFO: Number of nodes with available pods: 1
Dec 26 13:01:24.384: INFO: Node iruya-node is running more than one daemon pod
Dec 26 13:01:25.453: INFO: Number of nodes with available pods: 1
Dec 26 13:01:25.453: INFO: Node iruya-node is running more than one daemon pod
Dec 26 13:01:26.389: INFO: Number of nodes with available pods: 2
Dec 26 13:01:26.389: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8711, will wait for the garbage collector to delete the pods
Dec 26 13:01:26.500: INFO: Deleting DaemonSet.extensions daemon-set took: 33.072698ms
Dec 26 13:01:26.900: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.500692ms
Dec 26 13:01:36.636: INFO: Number of nodes with available pods: 0
Dec 26 13:01:36.636: INFO: Number of running nodes: 0, number of available pods: 0
Dec 26 13:01:36.642: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8711/daemonsets","resourceVersion":"18138845"},"items":null}
Dec 26 13:01:36.648: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8711/pods","resourceVersion":"18138845"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 13:01:36.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8711" for this suite.
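The RollingUpdate strategy drives the pod-by-pod replacement traced above: the old pod on one node is killed, and only after its replacement becomes available does the rollout move to the next node. A sketch of a DaemonSet carrying that strategy (labels, names, and image are illustrative; maxUnavailable: 1 is also the default):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	maxUnavailable := intstr.FromInt(1) // replace one node's pod at a time
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type:          appsv1.RollingUpdateDaemonSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDaemonSet{MaxUnavailable: &maxUnavailable},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "app",
					Image: "docker.io/library/nginx:1.14-alpine",
				}}},
			},
		},
	}
	fmt.Println(ds.Spec.UpdateStrategy.Type)
}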
Dec 26 13:01:42.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:01:42.809: INFO: namespace daemonsets-8711 deletion completed in 6.139143943s • [SLOW TEST:65.813 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:01:42.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Dec 26 13:01:42.933: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:01:58.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4652" for this suite. 
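The init-container test above relies on a simple property: with restartPolicy: Never, a failing init container moves the whole pod to Failed and the app containers never start. A hedged sketch of such a pod (same 1.15-era client-go assumptions; the names and the busybox image are illustrative, not the suite's actual manifest):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "init-fail-never"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                // The init container exits non-zero; with RestartPolicy=Never
                // the pod goes straight to Failed and "app" is never started.
                InitContainers: []corev1.Container{{
                    Name:    "init",
                    Image:   "busybox",
                    Command: []string{"/bin/false"},
                }},
                Containers: []corev1.Container{{
                    Name:    "app",
                    Image:   "busybox",
                    Command: []string{"/bin/true"},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }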
Dec 26 13:02:04.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:02:04.828: INFO: namespace init-container-4652 deletion completed in 6.23633059s • [SLOW TEST:22.019 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:02:04.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Dec 26 13:02:04.936: INFO: Waiting up to 5m0s for pod "pod-cc80cf0d-9ec8-4170-a541-d18a10ff09cc" in namespace "emptydir-1756" to be "success or failure" Dec 26 13:02:04.942: INFO: Pod "pod-cc80cf0d-9ec8-4170-a541-d18a10ff09cc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.258123ms Dec 26 13:02:06.952: INFO: Pod "pod-cc80cf0d-9ec8-4170-a541-d18a10ff09cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016719342s Dec 26 13:02:08.965: INFO: Pod "pod-cc80cf0d-9ec8-4170-a541-d18a10ff09cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029587311s Dec 26 13:02:10.978: INFO: Pod "pod-cc80cf0d-9ec8-4170-a541-d18a10ff09cc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042043957s Dec 26 13:02:12.995: INFO: Pod "pod-cc80cf0d-9ec8-4170-a541-d18a10ff09cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05951091s STEP: Saw pod success Dec 26 13:02:12.995: INFO: Pod "pod-cc80cf0d-9ec8-4170-a541-d18a10ff09cc" satisfied condition "success or failure" Dec 26 13:02:13.000: INFO: Trying to get logs from node iruya-node pod pod-cc80cf0d-9ec8-4170-a541-d18a10ff09cc container test-container: STEP: delete the pod Dec 26 13:02:13.080: INFO: Waiting for pod pod-cc80cf0d-9ec8-4170-a541-d18a10ff09cc to disappear Dec 26 13:02:13.089: INFO: Pod pod-cc80cf0d-9ec8-4170-a541-d18a10ff09cc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:02:13.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1756" for this suite. 
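The (non-root,0777,default) case above boils down to: a pod running as a non-root user writes into an emptyDir on the default medium, whose directory the kubelet creates world-writable (0777). A rough equivalent, with the UID, names, and busybox command as assumptions (the suite actually uses a purpose-built mount-test image):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func int64Ptr(i int64) *int64 { return &i }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                // Non-root user; the default-medium emptyDir is still writable
                // because the kubelet creates it with 0777 permissions.
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1001)},
                Volumes: []corev1.Volume{{
                    Name:         "scratch",
                    VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
                }},
                Containers: []corev1.Container{{
                    Name:         "test-container",
                    Image:        "busybox",
                    Command:      []string{"sh", "-c", "touch /mnt/scratch/f && ls -ld /mnt/scratch"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt/scratch"}},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }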
Dec 26 13:02:19.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:02:19.251: INFO: namespace emptydir-1756 deletion completed in 6.147012785s • [SLOW TEST:14.423 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:02:19.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-f5e88648-a623-4857-9508-d93db8f97e76 STEP: Creating a pod to test consume configMaps Dec 26 13:02:19.398: INFO: Waiting up to 5m0s for pod "pod-configmaps-9fa63fb0-6982-4e11-ac1a-61ac834c25c3" in namespace "configmap-130" to be "success or failure" Dec 26 13:02:19.410: INFO: Pod "pod-configmaps-9fa63fb0-6982-4e11-ac1a-61ac834c25c3": Phase="Pending", Reason="", readiness=false. Elapsed: 11.90108ms Dec 26 13:02:21.456: INFO: Pod "pod-configmaps-9fa63fb0-6982-4e11-ac1a-61ac834c25c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057788928s Dec 26 13:02:23.464: INFO: Pod "pod-configmaps-9fa63fb0-6982-4e11-ac1a-61ac834c25c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066497127s Dec 26 13:02:25.471: INFO: Pod "pod-configmaps-9fa63fb0-6982-4e11-ac1a-61ac834c25c3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073348019s Dec 26 13:02:27.478: INFO: Pod "pod-configmaps-9fa63fb0-6982-4e11-ac1a-61ac834c25c3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080454665s Dec 26 13:02:29.516: INFO: Pod "pod-configmaps-9fa63fb0-6982-4e11-ac1a-61ac834c25c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.118407143s STEP: Saw pod success Dec 26 13:02:29.516: INFO: Pod "pod-configmaps-9fa63fb0-6982-4e11-ac1a-61ac834c25c3" satisfied condition "success or failure" Dec 26 13:02:29.520: INFO: Trying to get logs from node iruya-node pod pod-configmaps-9fa63fb0-6982-4e11-ac1a-61ac834c25c3 container configmap-volume-test: STEP: delete the pod Dec 26 13:02:29.604: INFO: Waiting for pod pod-configmaps-9fa63fb0-6982-4e11-ac1a-61ac834c25c3 to disappear Dec 26 13:02:29.733: INFO: Pod pod-configmaps-9fa63fb0-6982-4e11-ac1a-61ac834c25c3 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:02:29.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-130" for this suite. 
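The multiple-volumes ConfigMap test above mounts one ConfigMap at two paths in the same pod. A minimal sketch under the same client-go assumptions (names, mount paths, and the data key are illustrative):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns := "default"
        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume"},
            Data:       map[string]string{"data-1": "value-1"},
        }
        if _, err := cs.CoreV1().ConfigMaps(ns).Create(cm); err != nil {
            panic(err)
        }
        // The same ConfigMap backs two separate volumes in one pod.
        src := corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
            LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
        }}
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{
                    {Name: "configmap-volume-1", VolumeSource: src},
                    {Name: "configmap-volume-2", VolumeSource: src},
                },
                Containers: []corev1.Container{{
                    Name:    "configmap-volume-test",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"},
                    VolumeMounts: []corev1.VolumeMount{
                        {Name: "configmap-volume-1", MountPath: "/etc/cm-1"},
                        {Name: "configmap-volume-2", MountPath: "/etc/cm-2"},
                    },
                }},
            },
        }
        if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil {
            panic(err)
        }
    }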
Dec 26 13:02:35.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:02:35.984: INFO: namespace configmap-130 deletion completed in 6.242583856s • [SLOW TEST:16.734 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:02:35.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Dec 26 13:02:36.226: INFO: Number of nodes with available pods: 0 Dec 26 13:02:36.226: INFO: Node iruya-node is running more than one daemon pod Dec 26 13:02:37.246: INFO: Number of nodes with available pods: 0 Dec 26 13:02:37.246: INFO: Node iruya-node is running more than one daemon pod Dec 26 13:02:40.580: INFO: Number of nodes with available pods: 0 Dec 26 13:02:40.580: INFO: Node iruya-node is running more than one daemon pod Dec 26 13:02:41.680: INFO: Number of nodes with available pods: 0 Dec 26 13:02:41.681: INFO: Node iruya-node is running more than one daemon pod Dec 26 13:02:42.244: INFO: Number of nodes with available pods: 0 Dec 26 13:02:42.244: INFO: Node iruya-node is running more than one daemon pod Dec 26 13:02:43.280: INFO: Number of nodes with available pods: 0 Dec 26 13:02:43.280: INFO: Node iruya-node is running more than one daemon pod Dec 26 13:02:45.625: INFO: Number of nodes with available pods: 0 Dec 26 13:02:45.625: INFO: Node iruya-node is running more than one daemon pod Dec 26 13:02:46.244: INFO: Number of nodes with available pods: 0 Dec 26 13:02:46.244: INFO: Node iruya-node is running more than one daemon pod Dec 26 13:02:47.703: INFO: Number of nodes with available pods: 0 Dec 26 13:02:47.703: INFO: Node iruya-node is running more than one daemon pod Dec 26 13:02:48.249: INFO: Number of nodes with available pods: 0 Dec 26 13:02:48.249: INFO: Node iruya-node is running more than one daemon pod Dec 26 13:02:49.243: INFO: Number of nodes with available pods: 0 Dec 26 13:02:49.244: INFO: Node iruya-node is running more than one daemon pod Dec 26 13:02:50.244: INFO: Number of nodes with available pods: 2 Dec 26 13:02:50.244: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
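The step named just above forces a daemon pod's phase to Failed through the status subresource, after which the DaemonSet controller is expected to delete it and create a replacement. A hedged sketch of that nudge (the label selector is an assumption about how the suite labels its daemon pods; same 1.15-era client-go):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns := "default"
        pods, err := cs.CoreV1().Pods(ns).List(metav1.ListOptions{
            LabelSelector: "daemonset-name=daemon-set",
        })
        if err != nil || len(pods.Items) == 0 {
            panic("no daemon pods found")
        }
        pod := pods.Items[0]
        pod.Status.Phase = corev1.PodFailed
        // Writing the Failed phase via UpdateStatus is what "kills" the pod
        // from the controller's point of view; the revival being checked is
        // the controller retrying and creating a fresh pod on that node.
        if _, err := cs.CoreV1().Pods(ns).UpdateStatus(&pod); err != nil {
            panic(err)
        }
    }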
Dec 26 13:02:50.309: INFO: Number of nodes with available pods: 2 Dec 26 13:02:50.309: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7494, will wait for the garbage collector to delete the pods Dec 26 13:02:50.488: INFO: Deleting DaemonSet.extensions daemon-set took: 13.325861ms Dec 26 13:02:50.788: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.51313ms Dec 26 13:03:07.998: INFO: Number of nodes with available pods: 0 Dec 26 13:03:07.998: INFO: Number of running nodes: 0, number of available pods: 0 Dec 26 13:03:08.002: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7494/daemonsets","resourceVersion":"18139124"},"items":null} Dec 26 13:03:08.006: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7494/pods","resourceVersion":"18139124"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:03:08.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7494" for this suite. Dec 26 13:03:14.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:03:14.161: INFO: namespace daemonsets-7494 deletion completed in 6.140321303s • [SLOW TEST:38.176 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:03:14.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Dec 26 13:03:14.285: INFO: Waiting up to 5m0s for pod "var-expansion-aca9f75c-dfc5-42a7-b7d2-a9260aa836e2" in namespace "var-expansion-8583" to be "success or failure" Dec 26 13:03:14.292: INFO: Pod "var-expansion-aca9f75c-dfc5-42a7-b7d2-a9260aa836e2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.133043ms Dec 26 13:03:16.303: INFO: Pod "var-expansion-aca9f75c-dfc5-42a7-b7d2-a9260aa836e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018453777s Dec 26 13:03:18.327: INFO: Pod "var-expansion-aca9f75c-dfc5-42a7-b7d2-a9260aa836e2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.041950847s Dec 26 13:03:20.338: INFO: Pod "var-expansion-aca9f75c-dfc5-42a7-b7d2-a9260aa836e2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053075757s Dec 26 13:03:22.345: INFO: Pod "var-expansion-aca9f75c-dfc5-42a7-b7d2-a9260aa836e2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060602345s Dec 26 13:03:24.365: INFO: Pod "var-expansion-aca9f75c-dfc5-42a7-b7d2-a9260aa836e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079776521s STEP: Saw pod success Dec 26 13:03:24.365: INFO: Pod "var-expansion-aca9f75c-dfc5-42a7-b7d2-a9260aa836e2" satisfied condition "success or failure" Dec 26 13:03:24.379: INFO: Trying to get logs from node iruya-node pod var-expansion-aca9f75c-dfc5-42a7-b7d2-a9260aa836e2 container dapi-container: STEP: delete the pod Dec 26 13:03:24.493: INFO: Waiting for pod var-expansion-aca9f75c-dfc5-42a7-b7d2-a9260aa836e2 to disappear Dec 26 13:03:24.508: INFO: Pod var-expansion-aca9f75c-dfc5-42a7-b7d2-a9260aa836e2 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:03:24.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8583" for this suite. Dec 26 13:03:30.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:03:30.704: INFO: namespace var-expansion-8583 deletion completed in 6.190383677s • [SLOW TEST:16.543 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:03:30.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Dec 26 13:03:31.007: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2270,SelfLink:/api/v1/namespaces/watch-2270/configmaps/e2e-watch-test-label-changed,UID:46286125-671a-4995-be16-26268d109730,ResourceVersion:18139203,Generation:0,CreationTimestamp:2019-12-26 13:03:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 26 13:03:31.007: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2270,SelfLink:/api/v1/namespaces/watch-2270/configmaps/e2e-watch-test-label-changed,UID:46286125-671a-4995-be16-26268d109730,ResourceVersion:18139205,Generation:0,CreationTimestamp:2019-12-26 13:03:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Dec 26 13:03:31.007: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2270,SelfLink:/api/v1/namespaces/watch-2270/configmaps/e2e-watch-test-label-changed,UID:46286125-671a-4995-be16-26268d109730,ResourceVersion:18139206,Generation:0,CreationTimestamp:2019-12-26 13:03:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Dec 26 13:03:41.109: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2270,SelfLink:/api/v1/namespaces/watch-2270/configmaps/e2e-watch-test-label-changed,UID:46286125-671a-4995-be16-26268d109730,ResourceVersion:18139221,Generation:0,CreationTimestamp:2019-12-26 13:03:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 26 13:03:41.109: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2270,SelfLink:/api/v1/namespaces/watch-2270/configmaps/e2e-watch-test-label-changed,UID:46286125-671a-4995-be16-26268d109730,ResourceVersion:18139223,Generation:0,CreationTimestamp:2019-12-26 13:03:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Dec 26 13:03:41.109: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2270,SelfLink:/api/v1/namespaces/watch-2270/configmaps/e2e-watch-test-label-changed,UID:46286125-671a-4995-be16-26268d109730,ResourceVersion:18139224,Generation:0,CreationTimestamp:2019-12-26 13:03:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:03:41.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2270" for this suite. Dec 26 13:03:47.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:03:47.349: INFO: namespace watch-2270 deletion completed in 6.223812974s • [SLOW TEST:16.645 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:03:47.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Dec 26 13:03:47.402: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:04:04.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8739" for this suite. 
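The submit-and-remove test above is a watch exercise: register a watch, create the pod, delete it gracefully, and confirm the ADDED and DELETED notifications arrive in order. Roughly (the label, names, and 30-second grace period are assumptions modeled on the steps above; same client-go era assumptions):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns := "default"
        // Watch first, so the ADDED event for our pod is not missed.
        w, err := cs.CoreV1().Pods(ns).Watch(metav1.ListOptions{
            LabelSelector: "demo=submit-remove",
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "pod-submit-remove",
                Labels: map[string]string{"demo": "submit-remove"},
            },
            Spec: corev1.PodSpec{Containers: []corev1.Container{{
                Name:  "nginx",
                Image: "docker.io/library/nginx:1.14-alpine",
            }}},
        }
        if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil {
            panic(err)
        }
        grace := int64(30)
        if err := cs.CoreV1().Pods(ns).Delete(pod.Name, &metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
            panic(err)
        }
        // Expect ADDED, some MODIFIED updates while the kubelet terminates
        // the container, then DELETED once graceful termination finishes.
        for ev := range w.ResultChan() {
            fmt.Println("observed:", ev.Type)
            if ev.Type == watch.Deleted {
                break
            }
        }
    }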
Dec 26 13:04:10.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:04:11.189: INFO: namespace pods-8739 deletion completed in 6.216825939s • [SLOW TEST:23.839 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:04:11.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Dec 26 13:04:11.252: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:04:26.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5901" for this suite. Dec 26 13:04:34.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:04:34.285: INFO: namespace init-container-5901 deletion completed in 8.101451624s • [SLOW TEST:23.096 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:04:34.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W1226 13:05:18.046005 9 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Dec 26 13:05:18.046: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:05:18.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5745" for this suite. Dec 26 13:05:30.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:05:33.933: INFO: namespace gc-5745 deletion completed in 15.884214703s • [SLOW TEST:59.648 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:05:33.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:05:46.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6448" for this suite. 
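The garbage-collector test a little further up deletes a replication controller with orphaning semantics and then waits 30 seconds to confirm the pods survive. A sketch of that delete using PropagationPolicy: Orphan (the RC name, labels, and image are illustrative):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns := "default"
        labels := map[string]string{"app": "orphan-demo"}
        rc := &corev1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: "orphan-demo"},
            Spec: corev1.ReplicationControllerSpec{
                Replicas: int32Ptr(2),
                Selector: labels,
                Template: &corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{Containers: []corev1.Container{{
                        Name:  "nginx",
                        Image: "docker.io/library/nginx:1.14-alpine",
                    }}},
                },
            },
        }
        if _, err := cs.CoreV1().ReplicationControllers(ns).Create(rc); err != nil {
            panic(err)
        }
        // PropagationPolicy=Orphan removes the RC but tells the garbage
        // collector to strip the owner references instead of deleting the
        // pods, so they keep running with no owner.
        orphan := metav1.DeletePropagationOrphan
        if err := cs.CoreV1().ReplicationControllers(ns).Delete(rc.Name, &metav1.DeleteOptions{PropagationPolicy: &orphan}); err != nil {
            panic(err)
        }
    }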
Dec 26 13:05:52.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:05:52.682: INFO: namespace emptydir-wrapper-6448 deletion completed in 6.141528992s • [SLOW TEST:18.748 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:05:52.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Dec 26 13:05:52.833: INFO: Number of nodes with available pods: 0 Dec 26 13:05:52.833: INFO: Node iruya-node is running more than one daemon pod Dec 26 13:05:53.859: INFO: Number of nodes with available pods: 0 Dec 26 13:05:53.859: INFO: Node iruya-node is running more than one daemon pod Dec 26 13:05:55.257: INFO: Number of nodes with available pods: 0 Dec 26 13:05:55.257: INFO: Node iruya-node is running more than one daemon pod Dec 26 13:05:55.854: INFO: Number of nodes with available pods: 0 Dec 26 13:05:55.854: INFO: Node iruya-node is running more than one daemon pod Dec 26 13:05:56.849: INFO: Number of nodes with available pods: 0 Dec 26 13:05:56.849: INFO: Node iruya-node is running more than one daemon pod Dec 26 13:05:57.844: INFO: Number of nodes with available pods: 0 Dec 26 13:05:57.844: INFO: Node iruya-node is running more than one daemon pod Dec 26 13:06:00.021: INFO: Number of nodes with available pods: 0 Dec 26 13:06:00.021: INFO: Node iruya-node is running more than one daemon pod Dec 26 13:06:03.245: INFO: Number of nodes with available pods: 0 Dec 26 13:06:03.245: INFO: Node iruya-node is running more than one daemon pod Dec 26 13:06:03.847: INFO: Number of nodes with available pods: 0 Dec 26 13:06:03.848: INFO: Node iruya-node is running more than one daemon pod Dec 26 13:06:04.875: INFO: Number of nodes with available pods: 2 Dec 26 13:06:04.875: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Dec 26 13:06:04.977: INFO: Number of nodes with available pods: 1 Dec 26 13:06:04.977: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 26 13:06:05.998: INFO: Number of nodes with available pods: 1 Dec 26 13:06:05.998: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 26 13:06:07.103: INFO: Number of nodes with available pods: 1 Dec 26 13:06:07.103: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 26 13:06:08.000: INFO: Number of nodes with available pods: 1 Dec 26 13:06:08.000: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 26 13:06:09.481: INFO: Number of nodes with available pods: 1 Dec 26 13:06:09.481: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 26 13:06:10.045: INFO: Number of nodes with available pods: 1 Dec 26 13:06:10.045: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 26 13:06:10.996: INFO: Number of nodes with available pods: 1 Dec 26 13:06:10.996: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 26 13:06:11.995: INFO: Number of nodes with available pods: 1 Dec 26 13:06:11.995: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 26 13:06:13.795: INFO: Number of nodes with available pods: 1 Dec 26 13:06:13.795: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 26 13:06:14.333: INFO: Number of nodes with available pods: 1 Dec 26 13:06:14.333: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 26 13:06:15.391: INFO: Number of nodes with available pods: 1 Dec 26 13:06:15.391: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 26 13:06:16.022: INFO: Number of nodes with available pods: 1 Dec 26 13:06:16.022: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 26 13:06:16.995: INFO: Number of nodes with available pods: 1 Dec 26 13:06:16.995: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 26 13:06:18.598: INFO: Number of nodes with available pods: 1 Dec 26 13:06:18.598: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 26 13:06:19.128: INFO: Number of nodes with available pods: 1 Dec 26 13:06:19.128: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 26 13:06:20.072: INFO: Number of nodes with available pods: 1 Dec 26 13:06:20.072: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 26 13:06:21.007: INFO: Number of nodes with available pods: 1 Dec 26 13:06:21.007: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 26 13:06:22.054: INFO: Number of nodes with available pods: 2 Dec 26 13:06:22.054: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3099, will wait for the garbage collector to delete the pods Dec 26 13:06:22.134: INFO: Deleting DaemonSet.extensions daemon-set took: 16.180125ms Dec 26 13:06:22.435: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.893222ms Dec 26 13:06:37.941: INFO: Number of nodes with available pods: 0 Dec 26 13:06:37.941: INFO: Number of running nodes: 0, number of available pods: 0 Dec 26 13:06:37.944: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3099/daemonsets","resourceVersion":"18139772"},"items":null} Dec 26 13:06:37.946: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3099/pods","resourceVersion":"18139772"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:06:38.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3099" for this suite. Dec 26 13:06:44.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:06:44.157: INFO: namespace daemonsets-3099 deletion completed in 6.131070467s • [SLOW TEST:51.474 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:06:44.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:06:52.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5561" for this suite. 
Dec 26 13:07:38.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:07:38.688: INFO: namespace kubelet-test-5561 deletion completed in 46.210444723s • [SLOW TEST:54.531 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:07:38.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Dec 26 13:07:39.348: INFO: created pod pod-service-account-defaultsa Dec 26 13:07:39.348: INFO: pod pod-service-account-defaultsa service account token volume mount: true Dec 26 13:07:39.406: INFO: created pod pod-service-account-mountsa Dec 26 13:07:39.406: INFO: pod pod-service-account-mountsa service account token volume mount: true Dec 26 13:07:39.434: INFO: created pod pod-service-account-nomountsa Dec 26 13:07:39.434: INFO: pod pod-service-account-nomountsa service account token volume mount: false Dec 26 13:07:39.467: INFO: created pod pod-service-account-defaultsa-mountspec Dec 26 13:07:39.467: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Dec 26 13:07:39.481: INFO: created pod pod-service-account-mountsa-mountspec Dec 26 13:07:39.482: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Dec 26 13:07:39.550: INFO: created pod pod-service-account-nomountsa-mountspec Dec 26 13:07:39.550: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Dec 26 13:07:39.598: INFO: created pod pod-service-account-defaultsa-nomountspec Dec 26 13:07:39.598: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Dec 26 13:07:39.609: INFO: created pod pod-service-account-mountsa-nomountspec Dec 26 13:07:39.609: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Dec 26 13:07:39.634: INFO: created pod pod-service-account-nomountsa-nomountspec Dec 26 13:07:39.634: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:07:39.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3274" for this suite. 
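The automount matrix above crosses three ServiceAccount settings with three pod-spec settings; the rule it verifies is that spec.automountServiceAccountToken on the pod takes precedence over the ServiceAccount's own setting. A minimal opt-out sketch (pod and container names are illustrative; same client-go assumptions):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func boolPtr(b bool) *bool { return &b }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-nomountspec"},
            Spec: corev1.PodSpec{
                ServiceAccountName: "default",
                // The pod-level field wins over whatever the ServiceAccount's
                // automountServiceAccountToken says, which is the precedence
                // the matrix of pods above walks through.
                AutomountServiceAccountToken: boolPtr(false),
                Containers: []corev1.Container{{
                    Name:    "token-test",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount/ || true"},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }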
Dec 26 13:08:15.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:08:15.165: INFO: namespace svcaccounts-3274 deletion completed in 35.346579985s • [SLOW TEST:36.476 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:08:15.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Dec 26 13:08:15.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3963' Dec 26 13:08:15.638: INFO: stderr: "" Dec 26 13:08:15.639: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 26 13:08:15.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3963' Dec 26 13:08:15.941: INFO: stderr: "" Dec 26 13:08:15.941: INFO: stdout: "update-demo-nautilus-4rdz9 update-demo-nautilus-d6l2n " Dec 26 13:08:15.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4rdz9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3963' Dec 26 13:08:16.147: INFO: stderr: "" Dec 26 13:08:16.147: INFO: stdout: "" Dec 26 13:08:16.147: INFO: update-demo-nautilus-4rdz9 is created but not running Dec 26 13:08:21.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3963' Dec 26 13:08:21.345: INFO: stderr: "" Dec 26 13:08:21.345: INFO: stdout: "update-demo-nautilus-4rdz9 update-demo-nautilus-d6l2n " Dec 26 13:08:21.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4rdz9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3963' Dec 26 13:08:21.494: INFO: stderr: "" Dec 26 13:08:21.494: INFO: stdout: "" Dec 26 13:08:21.494: INFO: update-demo-nautilus-4rdz9 is created but not running Dec 26 13:08:26.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3963' Dec 26 13:08:26.727: INFO: stderr: "" Dec 26 13:08:26.728: INFO: stdout: "update-demo-nautilus-4rdz9 update-demo-nautilus-d6l2n " Dec 26 13:08:26.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4rdz9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3963' Dec 26 13:08:26.874: INFO: stderr: "" Dec 26 13:08:26.875: INFO: stdout: "" Dec 26 13:08:26.875: INFO: update-demo-nautilus-4rdz9 is created but not running Dec 26 13:08:31.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3963' Dec 26 13:08:32.001: INFO: stderr: "" Dec 26 13:08:32.001: INFO: stdout: "update-demo-nautilus-4rdz9 update-demo-nautilus-d6l2n " Dec 26 13:08:32.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4rdz9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3963' Dec 26 13:08:32.103: INFO: stderr: "" Dec 26 13:08:32.103: INFO: stdout: "true" Dec 26 13:08:32.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4rdz9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3963' Dec 26 13:08:32.224: INFO: stderr: "" Dec 26 13:08:32.225: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 26 13:08:32.225: INFO: validating pod update-demo-nautilus-4rdz9 Dec 26 13:08:32.246: INFO: got data: { "image": "nautilus.jpg" } Dec 26 13:08:32.246: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 26 13:08:32.246: INFO: update-demo-nautilus-4rdz9 is verified up and running Dec 26 13:08:32.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d6l2n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3963' Dec 26 13:08:32.341: INFO: stderr: "" Dec 26 13:08:32.341: INFO: stdout: "true" Dec 26 13:08:32.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d6l2n -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3963' Dec 26 13:08:32.435: INFO: stderr: "" Dec 26 13:08:32.435: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 26 13:08:32.435: INFO: validating pod update-demo-nautilus-d6l2n Dec 26 13:08:32.469: INFO: got data: { "image": "nautilus.jpg" } Dec 26 13:08:32.469: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 26 13:08:32.469: INFO: update-demo-nautilus-d6l2n is verified up and running STEP: using delete to clean up resources Dec 26 13:08:32.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3963' Dec 26 13:08:32.658: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 26 13:08:32.659: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Dec 26 13:08:32.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3963' Dec 26 13:08:32.769: INFO: stderr: "No resources found.\n" Dec 26 13:08:32.769: INFO: stdout: "" Dec 26 13:08:32.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3963 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 26 13:08:33.038: INFO: stderr: "" Dec 26 13:08:33.039: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:08:33.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3963" for this suite. 
Dec 26 13:08:55.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:08:55.260: INFO: namespace kubectl-3963 deletion completed in 22.186379014s • [SLOW TEST:40.095 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:08:55.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 26 13:08:55.398: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4705be98-0636-4505-8308-c3ff9cf2ee18" in namespace "projected-8558" to be "success or failure" Dec 26 13:08:55.408: INFO: Pod "downwardapi-volume-4705be98-0636-4505-8308-c3ff9cf2ee18": Phase="Pending", Reason="", readiness=false. Elapsed: 10.24883ms Dec 26 13:08:57.425: INFO: Pod "downwardapi-volume-4705be98-0636-4505-8308-c3ff9cf2ee18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027326878s Dec 26 13:08:59.433: INFO: Pod "downwardapi-volume-4705be98-0636-4505-8308-c3ff9cf2ee18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035658403s Dec 26 13:09:01.451: INFO: Pod "downwardapi-volume-4705be98-0636-4505-8308-c3ff9cf2ee18": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053122557s Dec 26 13:09:03.516: INFO: Pod "downwardapi-volume-4705be98-0636-4505-8308-c3ff9cf2ee18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.118307883s STEP: Saw pod success Dec 26 13:09:03.516: INFO: Pod "downwardapi-volume-4705be98-0636-4505-8308-c3ff9cf2ee18" satisfied condition "success or failure" Dec 26 13:09:03.520: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4705be98-0636-4505-8308-c3ff9cf2ee18 container client-container: STEP: delete the pod Dec 26 13:09:03.591: INFO: Waiting for pod downwardapi-volume-4705be98-0636-4505-8308-c3ff9cf2ee18 to disappear Dec 26 13:09:03.598: INFO: Pod downwardapi-volume-4705be98-0636-4505-8308-c3ff9cf2ee18 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:09:03.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8558" for this suite. 
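The projected downwardAPI test above sets an explicit mode on a single item file and then reads the resulting permissions back. A sketch with an assumed 0400 mode (the path, field reference, and mount point are illustrative):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        mode := int32(0400) // per-item file mode the test then verifies
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        Path:     "podname",
                                        FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
                                        Mode:     &mode,
                                    }},
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:         "client-container",
                    Image:        "busybox",
                    Command:      []string{"sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }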
Dec 26 13:09:09.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:09:10.096: INFO: namespace projected-8558 deletion completed in 6.491206342s • [SLOW TEST:14.835 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:09:10.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-cde9baa3-39ad-4e07-8def-5f12647714af STEP: Creating configMap with name cm-test-opt-upd-071f613b-c8af-4f2e-bc11-c231d4683ba1 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-cde9baa3-39ad-4e07-8def-5f12647714af STEP: Updating configmap cm-test-opt-upd-071f613b-c8af-4f2e-bc11-c231d4683ba1 STEP: Creating configMap with name cm-test-opt-create-2bef1826-e5d9-4b1b-9327-dc337b3998d8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:09:26.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5266" for this suite. 
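The optional-ConfigMap test above exercises deletion, update, and late creation of ConfigMaps backing a running pod's volumes. The late-creation leg, sketched under the same assumptions (the kubelet projects the new data in on one of its later sync periods, which is the "waiting to observe update in volume" step; names and the loop command are illustrative):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns := "default"
        optional := true
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-optional"},
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{{
                    Name: "cm-volume",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"},
                            // Optional lets the pod start before the ConfigMap exists.
                            Optional: &optional,
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:         "cm-volume-test",
                    Image:        "busybox",
                    Command:      []string{"sh", "-c", "while true; do cat /etc/cm-volume/data-1 2>/dev/null; sleep 5; done"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "cm-volume", MountPath: "/etc/cm-volume"}},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil {
            panic(err)
        }
        // Creating the ConfigMap after the pod is already running: the file
        // appears in the volume once the kubelet notices the new object.
        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "cm-test-opt-create"},
            Data:       map[string]string{"data-1": "value-1"},
        }
        if _, err := cs.CoreV1().ConfigMaps(ns).Create(cm); err != nil {
            panic(err)
        }
    }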
Dec 26 13:09:50.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:09:50.854: INFO: namespace configmap-5266 deletion completed in 24.167896434s • [SLOW TEST:40.758 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:09:50.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Dec 26 13:09:51.093: INFO: Waiting up to 5m0s for pod "client-containers-22363c71-d324-4ce3-ba61-81a4b670632d" in namespace "containers-2136" to be "success or failure" Dec 26 13:09:51.114: INFO: Pod "client-containers-22363c71-d324-4ce3-ba61-81a4b670632d": Phase="Pending", Reason="", readiness=false. Elapsed: 20.789313ms Dec 26 13:09:53.124: INFO: Pod "client-containers-22363c71-d324-4ce3-ba61-81a4b670632d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030616391s Dec 26 13:09:55.380: INFO: Pod "client-containers-22363c71-d324-4ce3-ba61-81a4b670632d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.286769984s Dec 26 13:09:57.387: INFO: Pod "client-containers-22363c71-d324-4ce3-ba61-81a4b670632d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.294063433s Dec 26 13:09:59.399: INFO: Pod "client-containers-22363c71-d324-4ce3-ba61-81a4b670632d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.305547271s Dec 26 13:10:01.408: INFO: Pod "client-containers-22363c71-d324-4ce3-ba61-81a4b670632d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.315136152s STEP: Saw pod success Dec 26 13:10:01.408: INFO: Pod "client-containers-22363c71-d324-4ce3-ba61-81a4b670632d" satisfied condition "success or failure" Dec 26 13:10:01.413: INFO: Trying to get logs from node iruya-node pod client-containers-22363c71-d324-4ce3-ba61-81a4b670632d container test-container: STEP: delete the pod Dec 26 13:10:01.586: INFO: Waiting for pod client-containers-22363c71-d324-4ce3-ba61-81a4b670632d to disappear Dec 26 13:10:01.597: INFO: Pod client-containers-22363c71-d324-4ce3-ba61-81a4b670632d no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:10:01.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2136" for this suite. 
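"Image defaults" here means the precedence rule between the pod spec and the image metadata: spec.containers[].command overrides the image ENTRYPOINT, spec.containers[].args overrides the image CMD, and leaving both unset keeps the image's own defaults, which is exactly what this test asserts. A minimal sketch, with an assumed test image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "client-containers-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.6", // assumed image
				// Command and Args intentionally left nil: the container runs
				// the image's own ENTRYPOINT/CMD, and the test verifies the
				// resulting output matches those defaults.
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}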
Dec 26 13:10:07.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:10:07.758: INFO: namespace containers-2136 deletion completed in 6.133791844s • [SLOW TEST:16.903 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:10:07.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 26 13:10:07.906: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0106fe6f-d5a8-4118-8038-c3ccba115b3d" in namespace "projected-3620" to be "success or failure" Dec 26 13:10:07.944: INFO: Pod "downwardapi-volume-0106fe6f-d5a8-4118-8038-c3ccba115b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 37.619205ms Dec 26 13:10:10.006: INFO: Pod "downwardapi-volume-0106fe6f-d5a8-4118-8038-c3ccba115b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099385896s Dec 26 13:10:12.020: INFO: Pod "downwardapi-volume-0106fe6f-d5a8-4118-8038-c3ccba115b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113597125s Dec 26 13:10:14.048: INFO: Pod "downwardapi-volume-0106fe6f-d5a8-4118-8038-c3ccba115b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141625439s Dec 26 13:10:16.057: INFO: Pod "downwardapi-volume-0106fe6f-d5a8-4118-8038-c3ccba115b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.150225365s Dec 26 13:10:18.063: INFO: Pod "downwardapi-volume-0106fe6f-d5a8-4118-8038-c3ccba115b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.156509299s Dec 26 13:10:20.080: INFO: Pod "downwardapi-volume-0106fe6f-d5a8-4118-8038-c3ccba115b3d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.17376834s STEP: Saw pod success Dec 26 13:10:20.080: INFO: Pod "downwardapi-volume-0106fe6f-d5a8-4118-8038-c3ccba115b3d" satisfied condition "success or failure" Dec 26 13:10:20.083: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0106fe6f-d5a8-4118-8038-c3ccba115b3d container client-container: STEP: delete the pod Dec 26 13:10:20.294: INFO: Waiting for pod downwardapi-volume-0106fe6f-d5a8-4118-8038-c3ccba115b3d to disappear Dec 26 13:10:20.299: INFO: Pod downwardapi-volume-0106fe6f-d5a8-4118-8038-c3ccba115b3d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:10:20.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3620" for this suite. Dec 26 13:10:26.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:10:26.488: INFO: namespace projected-3620 deletion completed in 6.160207725s • [SLOW TEST:18.730 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:10:26.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Dec 26 13:10:26.668: INFO: Waiting up to 5m0s for pod "pod-2ef73432-5e8a-456d-a445-b3ac8a80f788" in namespace "emptydir-3355" to be "success or failure" Dec 26 13:10:26.707: INFO: Pod "pod-2ef73432-5e8a-456d-a445-b3ac8a80f788": Phase="Pending", Reason="", readiness=false. Elapsed: 39.282212ms Dec 26 13:10:28.721: INFO: Pod "pod-2ef73432-5e8a-456d-a445-b3ac8a80f788": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053461717s Dec 26 13:10:30.732: INFO: Pod "pod-2ef73432-5e8a-456d-a445-b3ac8a80f788": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063885292s Dec 26 13:10:32.739: INFO: Pod "pod-2ef73432-5e8a-456d-a445-b3ac8a80f788": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071568111s Dec 26 13:10:34.746: INFO: Pod "pod-2ef73432-5e8a-456d-a445-b3ac8a80f788": Phase="Pending", Reason="", readiness=false. Elapsed: 8.078133455s Dec 26 13:10:36.768: INFO: Pod "pod-2ef73432-5e8a-456d-a445-b3ac8a80f788": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.099709613s STEP: Saw pod success Dec 26 13:10:36.768: INFO: Pod "pod-2ef73432-5e8a-456d-a445-b3ac8a80f788" satisfied condition "success or failure" Dec 26 13:10:36.779: INFO: Trying to get logs from node iruya-node pod pod-2ef73432-5e8a-456d-a445-b3ac8a80f788 container test-container: STEP: delete the pod Dec 26 13:10:36.963: INFO: Waiting for pod pod-2ef73432-5e8a-456d-a445-b3ac8a80f788 to disappear Dec 26 13:10:36.976: INFO: Pod pod-2ef73432-5e8a-456d-a445-b3ac8a80f788 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:10:36.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3355" for this suite. Dec 26 13:10:43.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:10:43.184: INFO: namespace emptydir-3355 deletion completed in 6.128849777s • [SLOW TEST:16.696 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:10:43.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-d654c270-ab03-4425-800e-0d8ca604cb0c STEP: Creating a pod to test consume configMaps Dec 26 13:10:43.373: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-396d6653-0901-4dba-9bb8-8d9af5ac0bfa" in namespace "projected-4472" to be "success or failure" Dec 26 13:10:43.383: INFO: Pod "pod-projected-configmaps-396d6653-0901-4dba-9bb8-8d9af5ac0bfa": Phase="Pending", Reason="", readiness=false. Elapsed: 10.653896ms Dec 26 13:10:45.394: INFO: Pod "pod-projected-configmaps-396d6653-0901-4dba-9bb8-8d9af5ac0bfa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020943411s Dec 26 13:10:47.412: INFO: Pod "pod-projected-configmaps-396d6653-0901-4dba-9bb8-8d9af5ac0bfa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038775511s Dec 26 13:10:49.421: INFO: Pod "pod-projected-configmaps-396d6653-0901-4dba-9bb8-8d9af5ac0bfa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048279056s Dec 26 13:10:51.436: INFO: Pod "pod-projected-configmaps-396d6653-0901-4dba-9bb8-8d9af5ac0bfa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063029403s Dec 26 13:10:53.452: INFO: Pod "pod-projected-configmaps-396d6653-0901-4dba-9bb8-8d9af5ac0bfa": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.079081312s Dec 26 13:10:55.462: INFO: Pod "pod-projected-configmaps-396d6653-0901-4dba-9bb8-8d9af5ac0bfa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.088709195s STEP: Saw pod success Dec 26 13:10:55.462: INFO: Pod "pod-projected-configmaps-396d6653-0901-4dba-9bb8-8d9af5ac0bfa" satisfied condition "success or failure" Dec 26 13:10:55.480: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-396d6653-0901-4dba-9bb8-8d9af5ac0bfa container projected-configmap-volume-test: STEP: delete the pod Dec 26 13:10:55.574: INFO: Waiting for pod pod-projected-configmaps-396d6653-0901-4dba-9bb8-8d9af5ac0bfa to disappear Dec 26 13:10:55.581: INFO: Pod pod-projected-configmaps-396d6653-0901-4dba-9bb8-8d9af5ac0bfa no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:10:55.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4472" for this suite. Dec 26 13:11:01.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:11:01.849: INFO: namespace projected-4472 deletion completed in 6.262447533s • [SLOW TEST:18.664 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:11:01.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Dec 26 13:11:10.146: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Dec 26 13:11:30.286: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:11:30.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8880" for this suite. 
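What "deleting the pod gracefully" amounts to at the API level is a delete call with an explicit grace period: the API server marks the pod Terminating immediately, the kubelet gets up to that many seconds to stop the containers, and the pod object disappears once the kubelet confirms termination, which is the disappearance the test polls for above. A minimal client-go sketch, using the current client-go Delete signature (the v1.15-era suite used an older one) and illustrative namespace and pod names:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int64Ptr(n int64) *int64 { return &n }

func main() {
	// Build a client from the same kubeconfig path the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Request deletion with a 30s grace period (value is illustrative).
	err = cs.CoreV1().Pods("pods-example").Delete(context.TODO(), "pod-submit-remove",
		metav1.DeleteOptions{GracePeriodSeconds: int64Ptr(30)})
	if err != nil {
		panic(err)
	}
	fmt.Println("delete requested with 30s grace period")
}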
Dec 26 13:11:36.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:11:36.452: INFO: namespace pods-8880 deletion completed in 6.148309725s • [SLOW TEST:34.603 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:11:36.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Dec 26 13:11:49.174: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8516 pod-service-account-fe364c7b-8bc6-4bd7-88c6-ca8b3950c74e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Dec 26 13:11:52.980: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8516 pod-service-account-fe364c7b-8bc6-4bd7-88c6-ca8b3950c74e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Dec 26 13:11:53.432: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8516 pod-service-account-fe364c7b-8bc6-4bd7-88c6-ca8b3950c74e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:11:53.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8516" for this suite. 
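The three kubectl exec ... cat commands above read the standard files the kubelet mounts from the service-account volume. A self-contained Go sketch of the same reads, runnable inside any pod that has the default token automount:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Every container in a pod with automountServiceAccountToken enabled
	// (the default) sees these three files at this well-known path.
	base := "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, name := range []string{"token", "ca.crt", "namespace"} {
		b, err := os.ReadFile(filepath.Join(base, name))
		if err != nil {
			fmt.Printf("%s: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", name, len(b))
	}
}

The test passes when all three reads succeed and return non-empty content, i.e. the API token, the cluster CA bundle, and the pod's own namespace.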
Dec 26 13:11:59.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:12:00.071: INFO: namespace svcaccounts-8516 deletion completed in 6.260729193s • [SLOW TEST:23.618 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:12:00.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 26 13:12:00.236: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8e4d062-5200-4481-a1e4-ede84d5a3ef0" in namespace "projected-7438" to be "success or failure" Dec 26 13:12:00.301: INFO: Pod "downwardapi-volume-a8e4d062-5200-4481-a1e4-ede84d5a3ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 64.315837ms Dec 26 13:12:02.307: INFO: Pod "downwardapi-volume-a8e4d062-5200-4481-a1e4-ede84d5a3ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070655737s Dec 26 13:12:04.315: INFO: Pod "downwardapi-volume-a8e4d062-5200-4481-a1e4-ede84d5a3ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07882818s Dec 26 13:12:06.332: INFO: Pod "downwardapi-volume-a8e4d062-5200-4481-a1e4-ede84d5a3ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096018476s Dec 26 13:12:08.379: INFO: Pod "downwardapi-volume-a8e4d062-5200-4481-a1e4-ede84d5a3ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.142887773s Dec 26 13:12:10.391: INFO: Pod "downwardapi-volume-a8e4d062-5200-4481-a1e4-ede84d5a3ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.155054936s Dec 26 13:12:12.416: INFO: Pod "downwardapi-volume-a8e4d062-5200-4481-a1e4-ede84d5a3ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.17947201s Dec 26 13:12:14.427: INFO: Pod "downwardapi-volume-a8e4d062-5200-4481-a1e4-ede84d5a3ef0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.190992889s STEP: Saw pod success Dec 26 13:12:14.428: INFO: Pod "downwardapi-volume-a8e4d062-5200-4481-a1e4-ede84d5a3ef0" satisfied condition "success or failure" Dec 26 13:12:14.431: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a8e4d062-5200-4481-a1e4-ede84d5a3ef0 container client-container: STEP: delete the pod Dec 26 13:12:14.593: INFO: Waiting for pod downwardapi-volume-a8e4d062-5200-4481-a1e4-ede84d5a3ef0 to disappear Dec 26 13:12:14.601: INFO: Pod downwardapi-volume-a8e4d062-5200-4481-a1e4-ede84d5a3ef0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:12:14.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7438" for this suite. Dec 26 13:12:20.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:12:20.833: INFO: namespace projected-7438 deletion completed in 6.227190135s • [SLOW TEST:20.761 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:12:20.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 26 13:12:21.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3828' Dec 26 13:12:21.285: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 26 13:12:21.285: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Dec 26 13:12:23.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3828' Dec 26 13:12:23.572: INFO: stderr: "" Dec 26 13:12:23.572: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:12:23.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3828" for this suite. Dec 26 13:12:45.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:12:45.833: INFO: namespace kubectl-3828 deletion completed in 22.256795652s • [SLOW TEST:25.000 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:12:45.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-b617179b-41a8-4df9-a1ea-749368502484 STEP: Creating a pod to test consume secrets Dec 26 13:12:46.113: INFO: Waiting up to 5m0s for pod "pod-secrets-69a5d9b8-a56b-4fac-a5fc-5474946a738a" in namespace "secrets-2830" to be "success or failure" Dec 26 13:12:46.119: INFO: Pod "pod-secrets-69a5d9b8-a56b-4fac-a5fc-5474946a738a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032208ms Dec 26 13:12:48.134: INFO: Pod "pod-secrets-69a5d9b8-a56b-4fac-a5fc-5474946a738a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020446075s Dec 26 13:12:50.140: INFO: Pod "pod-secrets-69a5d9b8-a56b-4fac-a5fc-5474946a738a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026200105s Dec 26 13:12:52.148: INFO: Pod "pod-secrets-69a5d9b8-a56b-4fac-a5fc-5474946a738a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034567183s Dec 26 13:12:54.156: INFO: Pod "pod-secrets-69a5d9b8-a56b-4fac-a5fc-5474946a738a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042798348s Dec 26 13:12:56.165: INFO: Pod "pod-secrets-69a5d9b8-a56b-4fac-a5fc-5474946a738a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.051817065s STEP: Saw pod success Dec 26 13:12:56.165: INFO: Pod "pod-secrets-69a5d9b8-a56b-4fac-a5fc-5474946a738a" satisfied condition "success or failure" Dec 26 13:12:56.170: INFO: Trying to get logs from node iruya-node pod pod-secrets-69a5d9b8-a56b-4fac-a5fc-5474946a738a container secret-volume-test: STEP: delete the pod Dec 26 13:12:56.264: INFO: Waiting for pod pod-secrets-69a5d9b8-a56b-4fac-a5fc-5474946a738a to disappear Dec 26 13:12:56.277: INFO: Pod pod-secrets-69a5d9b8-a56b-4fac-a5fc-5474946a738a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:12:56.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2830" for this suite. Dec 26 13:13:02.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:13:02.397: INFO: namespace secrets-2830 deletion completed in 6.113314744s • [SLOW TEST:16.564 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:13:02.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-67e43f34-f736-4e9e-89a5-a106b1b5bac0 STEP: Creating a pod to test consume configMaps Dec 26 13:13:02.521: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-736f93eb-e813-4543-8aea-74dbf9482950" in namespace "projected-6922" to be "success or failure" Dec 26 13:13:02.531: INFO: Pod "pod-projected-configmaps-736f93eb-e813-4543-8aea-74dbf9482950": Phase="Pending", Reason="", readiness=false. Elapsed: 9.787123ms Dec 26 13:13:04.538: INFO: Pod "pod-projected-configmaps-736f93eb-e813-4543-8aea-74dbf9482950": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016359469s Dec 26 13:13:06.549: INFO: Pod "pod-projected-configmaps-736f93eb-e813-4543-8aea-74dbf9482950": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028044443s Dec 26 13:13:08.561: INFO: Pod "pod-projected-configmaps-736f93eb-e813-4543-8aea-74dbf9482950": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039200917s Dec 26 13:13:10.569: INFO: Pod "pod-projected-configmaps-736f93eb-e813-4543-8aea-74dbf9482950": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047417077s Dec 26 13:13:12.586: INFO: Pod "pod-projected-configmaps-736f93eb-e813-4543-8aea-74dbf9482950": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.064473367s STEP: Saw pod success Dec 26 13:13:12.586: INFO: Pod "pod-projected-configmaps-736f93eb-e813-4543-8aea-74dbf9482950" satisfied condition "success or failure" Dec 26 13:13:12.596: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-736f93eb-e813-4543-8aea-74dbf9482950 container projected-configmap-volume-test: STEP: delete the pod Dec 26 13:13:12.688: INFO: Waiting for pod pod-projected-configmaps-736f93eb-e813-4543-8aea-74dbf9482950 to disappear Dec 26 13:13:12.696: INFO: Pod pod-projected-configmaps-736f93eb-e813-4543-8aea-74dbf9482950 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:13:12.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6922" for this suite. Dec 26 13:13:18.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:13:18.861: INFO: namespace projected-6922 deletion completed in 6.161860304s • [SLOW TEST:16.464 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:13:18.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-wpqwr in namespace proxy-173 I1226 13:13:19.071244 9 runners.go:180] Created replication controller with name: proxy-service-wpqwr, namespace: proxy-173, replica count: 1 I1226 13:13:20.122291 9 runners.go:180] proxy-service-wpqwr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1226 13:13:21.122833 9 runners.go:180] proxy-service-wpqwr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1226 13:13:22.123463 9 runners.go:180] proxy-service-wpqwr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1226 13:13:23.124207 9 runners.go:180] proxy-service-wpqwr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1226 13:13:24.124921 9 runners.go:180] proxy-service-wpqwr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1226 13:13:25.125555 9 runners.go:180] proxy-service-wpqwr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1226 13:13:26.125915 9 
runners.go:180] proxy-service-wpqwr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1226 13:13:27.126948 9 runners.go:180] proxy-service-wpqwr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1226 13:13:28.127754 9 runners.go:180] proxy-service-wpqwr Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 26 13:13:28.151: INFO: setup took 9.152518568s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts
[Dec 26 13:13:28.211 through 13:13:28.753: 320 proxy attempts, cases (0) through (19), against the pod endpoints (the pod root and ports 160, 162, 1080, 443, 460, 462 over HTTP and HTTPS) and the service endpoints (portname1, portname2, tlsportname1, tlsportname2); every attempt returned 200 with the expected body (foo, bar, test, testtest, tls baz, or tls qux) at latencies between roughly 5ms and 80ms]
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 26 13:13:41.067: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c74f3d0f-28cd-4c7e-aa3e-461a1a4f72ae" in namespace "projected-2237" to be "success or failure" Dec 26 13:13:41.171: INFO: Pod "downwardapi-volume-c74f3d0f-28cd-4c7e-aa3e-461a1a4f72ae": Phase="Pending", Reason="", readiness=false. Elapsed: 103.698156ms Dec 26 13:13:43.178: INFO: Pod "downwardapi-volume-c74f3d0f-28cd-4c7e-aa3e-461a1a4f72ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110320416s Dec 26 13:13:45.193: INFO: Pod "downwardapi-volume-c74f3d0f-28cd-4c7e-aa3e-461a1a4f72ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125691834s Dec 26 13:13:47.204: INFO: Pod "downwardapi-volume-c74f3d0f-28cd-4c7e-aa3e-461a1a4f72ae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13700289s Dec 26 13:13:49.225: INFO: Pod "downwardapi-volume-c74f3d0f-28cd-4c7e-aa3e-461a1a4f72ae": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157698417s Dec 26 13:13:51.244: INFO: Pod "downwardapi-volume-c74f3d0f-28cd-4c7e-aa3e-461a1a4f72ae": Phase="Succeeded", Reason="", readiness=false.
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:13:57.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Dec 26 13:13:57.863: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-159,SelfLink:/api/v1/namespaces/watch-159/configmaps/e2e-watch-test-watch-closed,UID:e4c20870-928f-4090-afdc-c9003e330320,ResourceVersion:18140930,Generation:0,CreationTimestamp:2019-12-26 13:13:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 26 13:13:57.863: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-159,SelfLink:/api/v1/namespaces/watch-159/configmaps/e2e-watch-test-watch-closed,UID:e4c20870-928f-4090-afdc-c9003e330320,ResourceVersion:18140931,Generation:0,CreationTimestamp:2019-12-26 13:13:57 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Dec 26 13:13:57.893: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-159,SelfLink:/api/v1/namespaces/watch-159/configmaps/e2e-watch-test-watch-closed,UID:e4c20870-928f-4090-afdc-c9003e330320,ResourceVersion:18140932,Generation:0,CreationTimestamp:2019-12-26 13:13:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 26 13:13:57.893: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-159,SelfLink:/api/v1/namespaces/watch-159/configmaps/e2e-watch-test-watch-closed,UID:e4c20870-928f-4090-afdc-c9003e330320,ResourceVersion:18140933,Generation:0,CreationTimestamp:2019-12-26 13:13:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:13:57.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-159" for this suite. 
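What this Watchers test verifies: a new watch opened with the resourceVersion of the last event seen by a previous, now-closed watch replays every change made in the interim, so no notification is lost. A rough client-go sketch of the same pattern, assuming a client-go release contemporary with this v1.15 cluster (where Watch takes only ListOptions and no context) and an illustrative namespace; the label selector matches the one in the logged objects:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	cms := cs.CoreV1().ConfigMaps("default") // namespace is illustrative

	// First watch: remember the resourceVersion of the last event received,
	// then close the watch.
	w1, err := cms.Watch(metav1.ListOptions{LabelSelector: "watch-this-configmap=watch-closed-and-restarted"})
	if err != nil {
		panic(err)
	}
	var lastRV string
	ev := <-w1.ResultChan()
	if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
		lastRV = cm.ResourceVersion
	}
	w1.Stop() // mutations that happen now are not lost...

	// ...because the second watch resumes from lastRV and replays every
	// change recorded after that version.
	w2, err := cms.Watch(metav1.ListOptions{
		LabelSelector:   "watch-this-configmap=watch-closed-and-restarted",
		ResourceVersion: lastRV,
	})
	if err != nil {
		panic(err)
	}
	for e := range w2.ResultChan() {
		fmt.Println("Got :", e.Type)
	}
}
```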
Dec 26 13:14:04.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:14:04.177: INFO: namespace watch-159 deletion completed in 6.277590855s • [SLOW TEST:6.491 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:14:04.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 26 13:14:04.333: INFO: Waiting up to 5m0s for pod "downwardapi-volume-163e399a-772c-417f-83c5-acdba68ac6fb" in namespace "downward-api-3722" to be "success or failure" Dec 26 13:14:04.400: INFO: Pod "downwardapi-volume-163e399a-772c-417f-83c5-acdba68ac6fb": Phase="Pending", Reason="", readiness=false. Elapsed: 67.077558ms Dec 26 13:14:06.412: INFO: Pod "downwardapi-volume-163e399a-772c-417f-83c5-acdba68ac6fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079537044s Dec 26 13:14:08.425: INFO: Pod "downwardapi-volume-163e399a-772c-417f-83c5-acdba68ac6fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091875827s Dec 26 13:14:10.436: INFO: Pod "downwardapi-volume-163e399a-772c-417f-83c5-acdba68ac6fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103066762s Dec 26 13:14:12.445: INFO: Pod "downwardapi-volume-163e399a-772c-417f-83c5-acdba68ac6fb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.112089825s Dec 26 13:14:14.455: INFO: Pod "downwardapi-volume-163e399a-772c-417f-83c5-acdba68ac6fb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.122294043s Dec 26 13:14:16.467: INFO: Pod "downwardapi-volume-163e399a-772c-417f-83c5-acdba68ac6fb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.134250751s STEP: Saw pod success Dec 26 13:14:16.467: INFO: Pod "downwardapi-volume-163e399a-772c-417f-83c5-acdba68ac6fb" satisfied condition "success or failure" Dec 26 13:14:16.472: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-163e399a-772c-417f-83c5-acdba68ac6fb container client-container: STEP: delete the pod Dec 26 13:14:16.562: INFO: Waiting for pod downwardapi-volume-163e399a-772c-417f-83c5-acdba68ac6fb to disappear Dec 26 13:14:16.569: INFO: Pod downwardapi-volume-163e399a-772c-417f-83c5-acdba68ac6fb no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:14:16.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3722" for this suite. Dec 26 13:14:22.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:14:22.757: INFO: namespace downward-api-3722 deletion completed in 6.180938312s • [SLOW TEST:18.580 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:14:22.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Dec 26 13:14:22.861: INFO: PodSpec: initContainers in spec.initContainers Dec 26 13:15:33.291: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-fc712f4f-d74d-4020-af41-a905cfb1b6ae", GenerateName:"", Namespace:"init-container-6004", SelfLink:"/api/v1/namespaces/init-container-6004/pods/pod-init-fc712f4f-d74d-4020-af41-a905cfb1b6ae", UID:"19030dc3-351b-4afb-a0de-ca65dd618fc2", ResourceVersion:"18141119", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712962862, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"861263187"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-d648t", 
VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001db0380), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-d648t", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-d648t", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, 
scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-d648t", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000929348), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002c17d40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000929420)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000929440)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000929448), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00092944c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712962863, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712962863, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712962863, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712962862, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", 
PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc0025eff60), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00186c8c0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00186c930)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://5b03d61bb32ecf65b23163f58576e4704f4622409d3eeb42ff93eadfd35d0504"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0025effa0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0025eff80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:15:33.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6004" for this suite. 
Dec 26 13:15:55.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:15:55.510: INFO: namespace init-container-6004 deletion completed in 22.203554262s • [SLOW TEST:92.752 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:15:55.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 26 13:15:55.571: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Dec 26 13:15:57.667: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:15:58.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8988" for this suite. 
Dec 26 13:16:11.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:16:11.430: INFO: namespace replication-controller-8988 deletion completed in 12.515066987s • [SLOW TEST:15.920 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:16:11.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Dec 26 13:16:35.733: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2359 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 26 13:16:35.733: INFO: >>> kubeConfig: /root/.kube/config Dec 26 13:16:36.270: INFO: Exec stderr: "" Dec 26 13:16:36.271: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2359 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 26 13:16:36.271: INFO: >>> kubeConfig: /root/.kube/config Dec 26 13:16:36.789: INFO: Exec stderr: "" Dec 26 13:16:36.789: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2359 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 26 13:16:36.790: INFO: >>> kubeConfig: /root/.kube/config Dec 26 13:16:37.220: INFO: Exec stderr: "" Dec 26 13:16:37.220: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2359 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 26 13:16:37.220: INFO: >>> kubeConfig: /root/.kube/config Dec 26 13:16:37.677: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Dec 26 13:16:37.677: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2359 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 26 13:16:37.678: INFO: >>> kubeConfig: /root/.kube/config Dec 26 13:16:37.931: INFO: Exec stderr: "" Dec 26 13:16:37.931: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2359 PodName:test-pod ContainerName:busybox-3 Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 26 13:16:37.931: INFO: >>> kubeConfig: /root/.kube/config Dec 26 13:16:38.176: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Dec 26 13:16:38.177: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2359 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 26 13:16:38.177: INFO: >>> kubeConfig: /root/.kube/config Dec 26 13:16:38.452: INFO: Exec stderr: "" Dec 26 13:16:38.452: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2359 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 26 13:16:38.452: INFO: >>> kubeConfig: /root/.kube/config Dec 26 13:16:38.711: INFO: Exec stderr: "" Dec 26 13:16:38.711: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2359 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 26 13:16:38.711: INFO: >>> kubeConfig: /root/.kube/config Dec 26 13:16:38.979: INFO: Exec stderr: "" Dec 26 13:16:38.979: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2359 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 26 13:16:38.979: INFO: >>> kubeConfig: /root/.kube/config Dec 26 13:16:39.246: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:16:39.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-2359" for this suite. 
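The /etc/hosts test above distinguishes three cases: the kubelet writes /etc/hosts for ordinary pods; a container that mounts its own file over /etc/hosts is left alone; and a hostNetwork pod sees the node's file. Sketches of the two opt-out shapes, with names mirroring the logged containers (the suite's real pods also bind-mount the node's file as /etc/hosts-original for comparison, omitted here):

```go
package main

import corev1 "k8s.io/api/core/v1"

// Like busybox-3 above: an explicit mount over /etc/hosts shadows the
// kubelet-managed file, so this container's /etc/hosts is not rewritten.
var unmanagedMountSpec = corev1.PodSpec{
	Containers: []corev1.Container{{
		Name:         "busybox-3",
		Image:        "docker.io/library/busybox:1.29",
		VolumeMounts: []corev1.VolumeMount{{Name: "host-etc-hosts", MountPath: "/etc/hosts"}},
	}},
	Volumes: []corev1.Volume{{
		Name: "host-etc-hosts",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
		},
	}},
}

// Like test-host-network-pod above: sharing the node's network namespace
// means the pod sees the node's own /etc/hosts, not a kubelet-written one.
var hostNetworkSpec = corev1.PodSpec{
	HostNetwork: true,
	Containers: []corev1.Container{{
		Name:  "busybox-1",
		Image: "docker.io/library/busybox:1.29",
	}},
}

func main() {}
```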
Dec 26 13:17:31.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:17:31.424: INFO: namespace e2e-kubelet-etc-hosts-2359 deletion completed in 52.170224205s • [SLOW TEST:79.994 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:17:31.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Dec 26 13:17:31.575: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-312,SelfLink:/api/v1/namespaces/watch-312/configmaps/e2e-watch-test-configmap-a,UID:dd4a7118-f6fe-423a-ae89-a403c5d3b403,ResourceVersion:18141406,Generation:0,CreationTimestamp:2019-12-26 13:17:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 26 13:17:31.576: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-312,SelfLink:/api/v1/namespaces/watch-312/configmaps/e2e-watch-test-configmap-a,UID:dd4a7118-f6fe-423a-ae89-a403c5d3b403,ResourceVersion:18141406,Generation:0,CreationTimestamp:2019-12-26 13:17:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Dec 26 13:17:41.595: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-312,SelfLink:/api/v1/namespaces/watch-312/configmaps/e2e-watch-test-configmap-a,UID:dd4a7118-f6fe-423a-ae89-a403c5d3b403,ResourceVersion:18141420,Generation:0,CreationTimestamp:2019-12-26 13:17:31 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Dec 26 13:17:41.595: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-312,SelfLink:/api/v1/namespaces/watch-312/configmaps/e2e-watch-test-configmap-a,UID:dd4a7118-f6fe-423a-ae89-a403c5d3b403,ResourceVersion:18141420,Generation:0,CreationTimestamp:2019-12-26 13:17:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Dec 26 13:17:51.632: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-312,SelfLink:/api/v1/namespaces/watch-312/configmaps/e2e-watch-test-configmap-a,UID:dd4a7118-f6fe-423a-ae89-a403c5d3b403,ResourceVersion:18141435,Generation:0,CreationTimestamp:2019-12-26 13:17:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 26 13:17:51.633: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-312,SelfLink:/api/v1/namespaces/watch-312/configmaps/e2e-watch-test-configmap-a,UID:dd4a7118-f6fe-423a-ae89-a403c5d3b403,ResourceVersion:18141435,Generation:0,CreationTimestamp:2019-12-26 13:17:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Dec 26 13:18:01.651: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-312,SelfLink:/api/v1/namespaces/watch-312/configmaps/e2e-watch-test-configmap-a,UID:dd4a7118-f6fe-423a-ae89-a403c5d3b403,ResourceVersion:18141449,Generation:0,CreationTimestamp:2019-12-26 13:17:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 26 13:18:01.651: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-312,SelfLink:/api/v1/namespaces/watch-312/configmaps/e2e-watch-test-configmap-a,UID:dd4a7118-f6fe-423a-ae89-a403c5d3b403,ResourceVersion:18141449,Generation:0,CreationTimestamp:2019-12-26 13:17:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Dec 26 13:18:11.665: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-312,SelfLink:/api/v1/namespaces/watch-312/configmaps/e2e-watch-test-configmap-b,UID:58fe6068-8684-4610-ad0f-93e2dcd16c62,ResourceVersion:18141463,Generation:0,CreationTimestamp:2019-12-26 13:18:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 26 13:18:11.665: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-312,SelfLink:/api/v1/namespaces/watch-312/configmaps/e2e-watch-test-configmap-b,UID:58fe6068-8684-4610-ad0f-93e2dcd16c62,ResourceVersion:18141463,Generation:0,CreationTimestamp:2019-12-26 13:18:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Dec 26 13:18:21.677: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-312,SelfLink:/api/v1/namespaces/watch-312/configmaps/e2e-watch-test-configmap-b,UID:58fe6068-8684-4610-ad0f-93e2dcd16c62,ResourceVersion:18141477,Generation:0,CreationTimestamp:2019-12-26 13:18:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 26 13:18:21.677: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-312,SelfLink:/api/v1/namespaces/watch-312/configmaps/e2e-watch-test-configmap-b,UID:58fe6068-8684-4610-ad0f-93e2dcd16c62,ResourceVersion:18141477,Generation:0,CreationTimestamp:2019-12-26 13:18:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:18:31.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-312" for this suite. Dec 26 13:18:37.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:18:37.937: INFO: namespace watch-312 deletion completed in 6.25009804s • [SLOW TEST:66.513 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:18:37.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Dec 26 13:18:38.013: INFO: Waiting up to 5m0s for pod "pod-e8f70ace-defe-4a85-933e-92d951062ac4" in namespace "emptydir-9981" to be "success or failure" Dec 26 13:18:38.059: INFO: Pod "pod-e8f70ace-defe-4a85-933e-92d951062ac4": Phase="Pending", Reason="", readiness=false. Elapsed: 45.941287ms Dec 26 13:18:40.071: INFO: Pod "pod-e8f70ace-defe-4a85-933e-92d951062ac4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0579406s Dec 26 13:18:42.078: INFO: Pod "pod-e8f70ace-defe-4a85-933e-92d951062ac4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064876987s Dec 26 13:18:44.088: INFO: Pod "pod-e8f70ace-defe-4a85-933e-92d951062ac4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075068884s Dec 26 13:18:46.767: INFO: Pod "pod-e8f70ace-defe-4a85-933e-92d951062ac4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.753576886s Dec 26 13:18:48.774: INFO: Pod "pod-e8f70ace-defe-4a85-933e-92d951062ac4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.760258311s STEP: Saw pod success Dec 26 13:18:48.774: INFO: Pod "pod-e8f70ace-defe-4a85-933e-92d951062ac4" satisfied condition "success or failure" Dec 26 13:18:48.779: INFO: Trying to get logs from node iruya-node pod pod-e8f70ace-defe-4a85-933e-92d951062ac4 container test-container: STEP: delete the pod Dec 26 13:18:48.835: INFO: Waiting for pod pod-e8f70ace-defe-4a85-933e-92d951062ac4 to disappear Dec 26 13:18:48.840: INFO: Pod pod-e8f70ace-defe-4a85-933e-92d951062ac4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:18:48.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9981" for this suite. 
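The EmptyDir case above turns on one field: medium Memory makes the volume a tmpfs mount, and the test's container then verifies a root-owned path with 0777 permissions. A sketch of the volume and mount, with a busybox command standing in for the checks the suite's own test image performs:

```go
package main

import corev1 "k8s.io/api/core/v1"

var tmpfsPodSpec = corev1.PodSpec{
	RestartPolicy: corev1.RestartPolicyNever,
	Containers: []corev1.Container{{
		Name:  "test-container",
		Image: "docker.io/library/busybox:1.29", // stand-in; command is illustrative
		Command: []string{"sh", "-c",
			"touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a %U' /test-volume/f && mount | grep /test-volume"},
		VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
	}},
	Volumes: []corev1.Volume{{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			// Medium "Memory" is what makes this emptyDir a tmpfs mount.
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}},
}

func main() {}
```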
Dec 26 13:18:54.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:18:55.072: INFO: namespace emptydir-9981 deletion completed in 6.22740162s • [SLOW TEST:17.135 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:18:55.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Dec 26 13:18:55.173: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Dec 26 13:18:55.181: INFO: Waiting for terminating namespaces to be deleted... Dec 26 13:18:55.186: INFO: Logging pods the kubelet thinks are on node iruya-node before test Dec 26 13:18:55.199: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Dec 26 13:18:55.200: INFO: Container weave ready: true, restart count 0 Dec 26 13:18:55.200: INFO: Container weave-npc ready: true, restart count 0 Dec 26 13:18:55.200: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded) Dec 26 13:18:55.200: INFO: Container kube-proxy ready: true, restart count 0 Dec 26 13:18:55.200: INFO: Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test Dec 26 13:18:55.214: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded) Dec 26 13:18:55.214: INFO: Container kube-scheduler ready: true, restart count 7 Dec 26 13:18:55.214: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded) Dec 26 13:18:55.214: INFO: Container coredns ready: true, restart count 0 Dec 26 13:18:55.214: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded) Dec 26 13:18:55.214: INFO: Container etcd ready: true, restart count 0 Dec 26 13:18:55.214: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Dec 26 13:18:55.214: INFO: Container weave ready: true, restart count 0 Dec 26 13:18:55.214: INFO: Container weave-npc ready: true, restart count 0 Dec 26 13:18:55.214: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded) Dec 26 13:18:55.214: INFO: Container coredns ready: true, restart count 0 Dec 26 13:18:55.214: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC
(1 container status recorded) Dec 26 13:18:55.214: INFO: Container kube-controller-manager ready: true, restart count 10 Dec 26 13:18:55.214: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded) Dec 26 13:18:55.214: INFO: Container kube-proxy ready: true, restart count 0 Dec 26 13:18:55.214: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded) Dec 26 13:18:55.214: INFO: Container kube-apiserver ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15e3ee531c8e7931], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:18:56.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8078" for this suite. Dec 26 13:19:02.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:19:02.381: INFO: namespace sched-pred-8078 deletion completed in 6.132399579s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.309 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------
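The scheduling failure above is the expected outcome: a nodeSelector that matches no node label leaves the pod Pending and produces exactly the FailedScheduling event quoted ("2 node(s) didn't match node selector"). A sketch of such a pod; the selector key and value are illustrative, the point being only that no node carries them:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// With no node labeled label=nonempty, the scheduler can place this pod
// nowhere, emits FailedScheduling, and the pod stays Pending.
var restrictedPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
	Spec: corev1.PodSpec{
		NodeSelector: map[string]string{"label": "nonempty"}, // matches neither node
		Containers: []corev1.Container{{
			Name:  "restricted",
			Image: "k8s.gcr.io/pause:3.1",
		}},
	},
}

func main() {}
```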
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963142, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963142, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963142, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963142, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 26 13:19:06.564: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963142, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963142, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963142, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963142, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 26 13:19:08.571: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963142, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963142, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963142, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963142, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 26 13:19:10.560: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Dec 26 13:19:10.575: INFO: Updating deployment test-recreate-deployment Dec 26 13:19:10.576: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Dec 26 13:19:10.853: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-1870,SelfLink:/apis/apps/v1/namespaces/deployment-1870/deployments/test-recreate-deployment,UID:496ba451-f5b6-4a12-ba60-7b1eff5b0df7,ResourceVersion:18141626,Generation:2,CreationTimestamp:2019-12-26 13:19:02 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-26 13:19:10 +0000 UTC 2019-12-26 13:19:10 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-26 13:19:10 +0000 UTC 2019-12-26 13:19:02 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Dec 26 13:19:10.904: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-1870,SelfLink:/apis/apps/v1/namespaces/deployment-1870/replicasets/test-recreate-deployment-5c8c9cc69d,UID:501acacc-8e63-4309-b40c-3730888822c2,ResourceVersion:18141623,Generation:1,CreationTimestamp:2019-12-26 13:19:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 496ba451-f5b6-4a12-ba60-7b1eff5b0df7 0xc002b9dbe7 
0xc002b9dbe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 26 13:19:10.904: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Dec 26 13:19:10.904: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-1870,SelfLink:/apis/apps/v1/namespaces/deployment-1870/replicasets/test-recreate-deployment-6df85df6b9,UID:62b75586-60ca-4c0f-8b3d-d0b65185a3a2,ResourceVersion:18141615,Generation:2,CreationTimestamp:2019-12-26 13:19:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 496ba451-f5b6-4a12-ba60-7b1eff5b0df7 0xc002b9dcb7 0xc002b9dcb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 26 13:19:10.994: INFO: Pod "test-recreate-deployment-5c8c9cc69d-twhpz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-twhpz,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-1870,SelfLink:/api/v1/namespaces/deployment-1870/pods/test-recreate-deployment-5c8c9cc69d-twhpz,UID:ecd7dc89-de3e-4337-b7c6-caf372247521,ResourceVersion:18141627,Generation:0,CreationTimestamp:2019-12-26 13:19:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 501acacc-8e63-4309-b40c-3730888822c2 0xc002846d47 0xc002846d48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-scz4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-scz4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-scz4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002846dc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002846de0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 13:19:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 13:19:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 13:19:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 13:19:10 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-26 13:19:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:19:10.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1870" for this suite. 
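The Deployment dump above shows Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil}: with the Recreate strategy the controller scales the old ReplicaSet (test-recreate-deployment-6df85df6b9, the redis template) down to zero before the new ReplicaSet (5c8c9cc69d, the nginx template) creates any pods, which is why the new pod is still Pending while no old pod remains. A minimal sketch of building such a Deployment with the Go API types; the name, labels, and image are taken from the dump, the program only prints the object, and it assumes the k8s.io/api and k8s.io/apimachinery modules are available:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "sample-pod-3"}
	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate: scale the old ReplicaSet to zero before any new pod starts.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(&d, "", "  ")
	fmt.Println(string(out))
}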
Dec 26 13:19:17.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:19:17.108: INFO: namespace deployment-1870 deletion completed in 6.10714909s • [SLOW TEST:14.727 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:19:17.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-7r2n STEP: Creating a pod to test atomic-volume-subpath Dec 26 13:19:17.438: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-7r2n" in namespace "subpath-2119" to be "success or failure" Dec 26 13:19:17.574: INFO: Pod "pod-subpath-test-projected-7r2n": Phase="Pending", Reason="", readiness=false. Elapsed: 136.043768ms Dec 26 13:19:19.580: INFO: Pod "pod-subpath-test-projected-7r2n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142573017s Dec 26 13:19:21.593: INFO: Pod "pod-subpath-test-projected-7r2n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155227757s Dec 26 13:19:23.607: INFO: Pod "pod-subpath-test-projected-7r2n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.16871055s Dec 26 13:19:25.634: INFO: Pod "pod-subpath-test-projected-7r2n": Phase="Running", Reason="", readiness=true. Elapsed: 8.195674885s Dec 26 13:19:27.645: INFO: Pod "pod-subpath-test-projected-7r2n": Phase="Running", Reason="", readiness=true. Elapsed: 10.207526351s Dec 26 13:19:29.657: INFO: Pod "pod-subpath-test-projected-7r2n": Phase="Running", Reason="", readiness=true. Elapsed: 12.219249149s Dec 26 13:19:31.671: INFO: Pod "pod-subpath-test-projected-7r2n": Phase="Running", Reason="", readiness=true. Elapsed: 14.233122047s Dec 26 13:19:33.680: INFO: Pod "pod-subpath-test-projected-7r2n": Phase="Running", Reason="", readiness=true. Elapsed: 16.242194792s Dec 26 13:19:35.687: INFO: Pod "pod-subpath-test-projected-7r2n": Phase="Running", Reason="", readiness=true. Elapsed: 18.249466426s Dec 26 13:19:37.697: INFO: Pod "pod-subpath-test-projected-7r2n": Phase="Running", Reason="", readiness=true. Elapsed: 20.259600484s Dec 26 13:19:39.711: INFO: Pod "pod-subpath-test-projected-7r2n": Phase="Running", Reason="", readiness=true. Elapsed: 22.272905175s Dec 26 13:19:41.733: INFO: Pod "pod-subpath-test-projected-7r2n": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.295106641s Dec 26 13:19:43.746: INFO: Pod "pod-subpath-test-projected-7r2n": Phase="Running", Reason="", readiness=true. Elapsed: 26.307967538s Dec 26 13:19:45.793: INFO: Pod "pod-subpath-test-projected-7r2n": Phase="Running", Reason="", readiness=true. Elapsed: 28.355448804s Dec 26 13:19:47.802: INFO: Pod "pod-subpath-test-projected-7r2n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.364361588s STEP: Saw pod success Dec 26 13:19:47.802: INFO: Pod "pod-subpath-test-projected-7r2n" satisfied condition "success or failure" Dec 26 13:19:47.806: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-7r2n container test-container-subpath-projected-7r2n: STEP: delete the pod Dec 26 13:19:47.893: INFO: Waiting for pod pod-subpath-test-projected-7r2n to disappear Dec 26 13:19:47.928: INFO: Pod pod-subpath-test-projected-7r2n no longer exists STEP: Deleting pod pod-subpath-test-projected-7r2n Dec 26 13:19:47.928: INFO: Deleting pod "pod-subpath-test-projected-7r2n" in namespace "subpath-2119" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:19:47.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2119" for this suite. Dec 26 13:19:53.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:19:54.089: INFO: namespace subpath-2119 deletion completed in 6.150450558s • [SLOW TEST:36.980 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:19:54.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:20:02.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4816" for this suite. 
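The Kubelet test above exercises container.securityContext.readOnlyRootFilesystem: the kubelet mounts the container's root filesystem read-only, so any write to it must fail. A sketch of the kind of pod such a test creates, under the assumption that a busybox container attempting a root-filesystem write is representative (the image and command here are illustrative, not read from the log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-fs"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo test > /file; sleep 240"},
				SecurityContext: &corev1.SecurityContext{
					// The write above fails: the root filesystem is mounted read-only.
					ReadOnlyRootFilesystem: boolPtr(true),
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(&pod, "", "  ")
	fmt.Println(string(out))
}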
Dec 26 13:20:48.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:20:48.529: INFO: namespace kubelet-test-4816 deletion completed in 46.166965418s • [SLOW TEST:54.439 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:20:48.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-1510 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1510 to expose endpoints map[] Dec 26 13:20:48.822: INFO: Get endpoints failed (107.152699ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Dec 26 13:20:49.834: INFO: successfully validated that service multi-endpoint-test in namespace services-1510 exposes endpoints map[] (1.119391868s elapsed) STEP: Creating pod pod1 in namespace services-1510 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1510 to expose endpoints map[pod1:[100]] Dec 26 13:20:54.072: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.199398437s elapsed, will retry) Dec 26 13:20:57.106: INFO: successfully validated that service multi-endpoint-test in namespace services-1510 exposes endpoints map[pod1:[100]] (7.233548099s elapsed) STEP: Creating pod pod2 in namespace services-1510 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1510 to expose endpoints map[pod1:[100] pod2:[101]] Dec 26 13:21:03.352: INFO: Unexpected endpoints: found map[0808c44a-de89-4c55-9f57-527a70555ddf:[100]], expected map[pod1:[100] pod2:[101]] (6.235846085s elapsed, will retry) Dec 26 13:21:05.493: INFO: successfully validated that service multi-endpoint-test in namespace services-1510 exposes endpoints map[pod1:[100] pod2:[101]] (8.376743497s elapsed) STEP: Deleting pod pod1 in namespace services-1510 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1510 to expose endpoints map[pod2:[101]] Dec 26 13:21:06.534: INFO: successfully validated that service multi-endpoint-test in namespace services-1510 exposes endpoints map[pod2:[101]] (1.031517222s elapsed) STEP: Deleting pod pod2 in namespace services-1510 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1510 to expose endpoints map[] Dec 26 
13:21:06.573: INFO: successfully validated that service multi-endpoint-test in namespace services-1510 exposes endpoints map[] (20.759782ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:21:06.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1510" for this suite. Dec 26 13:21:28.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:21:28.800: INFO: namespace services-1510 deletion completed in 22.124955808s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:40.271 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:21:28.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W1226 13:21:38.957185 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Dec 26 13:21:38.957: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:21:38.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8243" for this suite. 
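The garbage collector test above ("create the rc / delete the rc / wait for all pods to be garbage collected") works through ownerReferences: each pod the ReplicationController creates carries a metav1.OwnerReference back to the RC, and deleting the RC without orphaning lets the collector remove the now-ownerless pods. A sketch of the two pieces involved, using current apimachinery types; the RC name and UID are placeholders, not values from the log:

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// What the RC stamps onto each pod it creates.
	owner := metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       "example-rc",                           // placeholder
		UID:        "00000000-0000-0000-0000-000000000000", // placeholder
		Controller: boolPtr(true),
	}
	// "Not orphaning": delete the owner and let the GC remove its dependents.
	policy := metav1.DeletePropagationBackground
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}

	for _, v := range []interface{}{owner, opts} {
		out, _ := json.MarshalIndent(v, "", "  ")
		fmt.Println(string(out))
	}
}

With metav1.DeletePropagationOrphan instead, the pods would survive and lose their ownerReference; this test asserts the opposite.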
Dec 26 13:21:45.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:21:45.479: INFO: namespace gc-8243 deletion completed in 6.517168929s • [SLOW TEST:16.678 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:21:45.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Dec 26 13:21:45.576: INFO: Waiting up to 5m0s for pod "pod-b3563554-5116-4346-859c-27ccce8ad7af" in namespace "emptydir-5013" to be "success or failure" Dec 26 13:21:45.582: INFO: Pod "pod-b3563554-5116-4346-859c-27ccce8ad7af": Phase="Pending", Reason="", readiness=false. Elapsed: 5.420439ms Dec 26 13:21:47.596: INFO: Pod "pod-b3563554-5116-4346-859c-27ccce8ad7af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019835015s Dec 26 13:21:49.627: INFO: Pod "pod-b3563554-5116-4346-859c-27ccce8ad7af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050652805s Dec 26 13:21:51.636: INFO: Pod "pod-b3563554-5116-4346-859c-27ccce8ad7af": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059456227s Dec 26 13:21:53.676: INFO: Pod "pod-b3563554-5116-4346-859c-27ccce8ad7af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.099076193s STEP: Saw pod success Dec 26 13:21:53.676: INFO: Pod "pod-b3563554-5116-4346-859c-27ccce8ad7af" satisfied condition "success or failure" Dec 26 13:21:53.681: INFO: Trying to get logs from node iruya-node pod pod-b3563554-5116-4346-859c-27ccce8ad7af container test-container: STEP: delete the pod Dec 26 13:21:53.770: INFO: Waiting for pod pod-b3563554-5116-4346-859c-27ccce8ad7af to disappear Dec 26 13:21:53.884: INFO: Pod pod-b3563554-5116-4346-859c-27ccce8ad7af no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:21:53.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5013" for this suite. 
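The EmptyDir test name encodes the user the pod runs as (root), the file mode it checks (0666), and the emptyDir medium (default, i.e. node-local disk). A sketch of an equivalent pod; the image and shell command are illustrative stand-ins (the conformance test uses a dedicated mounttest image), and the (root,0644,tmpfs) case that follows in the log differs only in switching the medium to memory:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cache",
				VolumeSource: corev1.VolumeSource{
					// StorageMediumDefault = node disk; StorageMediumMemory = tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "touch /mnt/f && chmod 0666 /mnt/f && ls -l /mnt/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cache", MountPath: "/mnt"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(&pod, "", "  ")
	fmt.Println(string(out))
}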
Dec 26 13:21:59.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:22:00.104: INFO: namespace emptydir-5013 deletion completed in 6.2117253s • [SLOW TEST:14.625 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:22:00.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Dec 26 13:22:00.198: INFO: Waiting up to 5m0s for pod "pod-4c7f22b5-d2b0-4aa0-9206-9a6eb3034cd6" in namespace "emptydir-2779" to be "success or failure" Dec 26 13:22:00.209: INFO: Pod "pod-4c7f22b5-d2b0-4aa0-9206-9a6eb3034cd6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.36513ms Dec 26 13:22:02.215: INFO: Pod "pod-4c7f22b5-d2b0-4aa0-9206-9a6eb3034cd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016684942s Dec 26 13:22:04.314: INFO: Pod "pod-4c7f22b5-d2b0-4aa0-9206-9a6eb3034cd6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115450595s Dec 26 13:22:06.571: INFO: Pod "pod-4c7f22b5-d2b0-4aa0-9206-9a6eb3034cd6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.372570646s Dec 26 13:22:08.593: INFO: Pod "pod-4c7f22b5-d2b0-4aa0-9206-9a6eb3034cd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.394971691s STEP: Saw pod success Dec 26 13:22:08.594: INFO: Pod "pod-4c7f22b5-d2b0-4aa0-9206-9a6eb3034cd6" satisfied condition "success or failure" Dec 26 13:22:08.600: INFO: Trying to get logs from node iruya-node pod pod-4c7f22b5-d2b0-4aa0-9206-9a6eb3034cd6 container test-container: STEP: delete the pod Dec 26 13:22:08.759: INFO: Waiting for pod pod-4c7f22b5-d2b0-4aa0-9206-9a6eb3034cd6 to disappear Dec 26 13:22:08.777: INFO: Pod pod-4c7f22b5-d2b0-4aa0-9206-9a6eb3034cd6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:22:08.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2779" for this suite. 
Dec 26 13:22:14.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:22:14.963: INFO: namespace emptydir-2779 deletion completed in 6.162623514s • [SLOW TEST:14.859 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:22:14.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Dec 26 13:22:23.204: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:22:23.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6247" for this suite. 
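In the Container Runtime test above, the container writes "OK" to its terminationMessagePath file and sets terminationMessagePolicy: FallbackToLogsOnError, so the kubelet reports the file contents (the log's 'Expected: &{OK} to match ... OK'); the tail of the container log would be used only if the container failed with an empty message file. A sketch of the relevant container fields, with an illustrative image and command:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "termination-message-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "printf OK > /dev/termination-log"},
		// Where the kubelet looks for the termination message...
		TerminationMessagePath: "/dev/termination-log",
		// ...and the fallback: the log tail, but only on a failed exit
		// when the message file is empty.
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	out, _ := json.MarshalIndent(&c, "", "  ")
	fmt.Println(string(out))
}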
Dec 26 13:22:29.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:22:29.526: INFO: namespace container-runtime-6247 deletion completed in 6.287533991s • [SLOW TEST:14.563 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:22:29.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Dec 26 13:22:38.248: INFO: Successfully updated pod "pod-update-e145012a-2473-495d-ac16-856c4f70c262" STEP: verifying the updated pod is in kubernetes Dec 26 13:22:38.271: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:22:38.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3279" for this suite. 
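The Pods test above submits a pod, mutates it in place ("updating the pod" / "Successfully updated pod"), and verifies the read-back. The e2e framework does this with an Update call retried on ResourceVersion conflicts; the same label change can also be expressed as a strategic merge patch, which sidesteps the conflict loop. A sketch against a recent client-go (the context-taking method signatures postdate the v1.15 vintage of this run; the kubeconfig path, namespace, pod name, and label are assumptions):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable cluster and this kubeconfig path.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// Touch only the labels; a merge patch cannot hit a ResourceVersion conflict.
	patch := []byte(`{"metadata":{"labels":{"time":"updated"}}}`)
	pod, err := cs.CoreV1().Pods("default").Patch(
		context.TODO(), "pod-update-demo", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("updated labels:", pod.Labels)
}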
Dec 26 13:23:00.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:23:00.453: INFO: namespace pods-3279 deletion completed in 22.175768278s • [SLOW TEST:30.926 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:23:00.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Dec 26 13:23:00.567: INFO: Waiting up to 5m0s for pod "downward-api-7566f60b-0348-44a8-b8e9-6a6d5d3edb4f" in namespace "downward-api-1875" to be "success or failure" Dec 26 13:23:00.573: INFO: Pod "downward-api-7566f60b-0348-44a8-b8e9-6a6d5d3edb4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.480318ms Dec 26 13:23:02.582: INFO: Pod "downward-api-7566f60b-0348-44a8-b8e9-6a6d5d3edb4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014820103s Dec 26 13:23:04.597: INFO: Pod "downward-api-7566f60b-0348-44a8-b8e9-6a6d5d3edb4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030766968s Dec 26 13:23:06.637: INFO: Pod "downward-api-7566f60b-0348-44a8-b8e9-6a6d5d3edb4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069988819s Dec 26 13:23:08.653: INFO: Pod "downward-api-7566f60b-0348-44a8-b8e9-6a6d5d3edb4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.086728587s STEP: Saw pod success Dec 26 13:23:08.654: INFO: Pod "downward-api-7566f60b-0348-44a8-b8e9-6a6d5d3edb4f" satisfied condition "success or failure" Dec 26 13:23:08.659: INFO: Trying to get logs from node iruya-node pod downward-api-7566f60b-0348-44a8-b8e9-6a6d5d3edb4f container dapi-container: STEP: delete the pod Dec 26 13:23:08.769: INFO: Waiting for pod downward-api-7566f60b-0348-44a8-b8e9-6a6d5d3edb4f to disappear Dec 26 13:23:08.780: INFO: Pod downward-api-7566f60b-0348-44a8-b8e9-6a6d5d3edb4f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:23:08.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1875" for this suite. 
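The Downward API test above relies on resourceFieldRef defaulting: when the container declares no CPU or memory limits, environment variables referencing limits.cpu and limits.memory resolve to the node's allocatable capacity instead of failing. A sketch of the container wiring; the env var and container names are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "dapi-container",
		Image: "busybox",
		Env: []corev1.EnvVar{
			// With no resources.limits on this container, these fall back
			// to the node's allocatable CPU and memory.
			{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
			}},
			{Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
			}},
		},
	}
	out, _ := json.MarshalIndent(&c, "", "  ")
	fmt.Println(string(out))
}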
Dec 26 13:23:14.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:23:14.984: INFO: namespace downward-api-1875 deletion completed in 6.197508556s • [SLOW TEST:14.530 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:23:14.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 26 13:23:15.046: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Dec 26 13:23:15.171: INFO: Pod name sample-pod: Found 0 pods out of 1 Dec 26 13:23:20.180: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 26 13:23:22.195: INFO: Creating deployment "test-rolling-update-deployment" Dec 26 13:23:22.211: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Dec 26 13:23:22.232: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Dec 26 13:23:24.259: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Dec 26 13:23:24.264: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963402, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963402, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963402, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963402, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 26 13:23:26.272: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63712963402, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963402, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963402, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963402, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 26 13:23:28.274: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963402, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963402, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963402, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963402, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 26 13:23:30.280: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Dec 26 13:23:30.301: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-5977,SelfLink:/apis/apps/v1/namespaces/deployment-5977/deployments/test-rolling-update-deployment,UID:c749146b-f7c1-4936-88ee-ebd391aa115f,ResourceVersion:18142300,Generation:1,CreationTimestamp:2019-12-26 13:23:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-26 13:23:22 +0000 UTC 2019-12-26 13:23:22 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-26 13:23:29 +0000 UTC 2019-12-26 13:23:22 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Dec 26 13:23:30.308: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-5977,SelfLink:/apis/apps/v1/namespaces/deployment-5977/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:4312c126-88e4-4b93-91dc-e2f2a566cf82,ResourceVersion:18142288,Generation:1,CreationTimestamp:2019-12-26 13:23:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment c749146b-f7c1-4936-88ee-ebd391aa115f 0xc002cd8a07 0xc002cd8a08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Dec 26 13:23:30.308: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Dec 26 13:23:30.308: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-5977,SelfLink:/apis/apps/v1/namespaces/deployment-5977/replicasets/test-rolling-update-controller,UID:4d8408fc-2a84-4c50-9a80-f6e0d6e8cdb4,ResourceVersion:18142299,Generation:2,CreationTimestamp:2019-12-26 13:23:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment c749146b-f7c1-4936-88ee-ebd391aa115f 0xc002cd8927 0xc002cd8928}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 
26 13:23:30.314: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-2dxms" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-2dxms,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-5977,SelfLink:/api/v1/namespaces/deployment-5977/pods/test-rolling-update-deployment-79f6b9d75c-2dxms,UID:8184dd3d-354a-4fce-b2f0-84b7bb45ea04,ResourceVersion:18142287,Generation:0,CreationTimestamp:2019-12-26 13:23:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 4312c126-88e4-4b93-91dc-e2f2a566cf82 0xc002cd93c7 0xc002cd93c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-shkcf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-shkcf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-shkcf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cd9450} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cd94e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 13:23:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 13:23:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 13:23:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 13:23:22 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-26 13:23:22 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-26 13:23:28 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://f5503ab8f5326be0e3b1ab9b88b9b17bd851b0314adc23c96ef47a84c2934b78}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:23:30.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
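In contrast to the Recreate test earlier in the log, this Deployment uses the RollingUpdate strategy with the default maxUnavailable and maxSurge of 25% (visible in the Deployment dump above), so the adopted old ReplicaSet is scaled down as the new one scales up, and the status shows Replicas:2 mid-rollout. A sketch of how that strategy is expressed with the Go types; percentages go through the IntOrString helper:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")
	s := appsv1.DeploymentStrategy{
		Type: appsv1.RollingUpdateDeploymentStrategyType,
		RollingUpdate: &appsv1.RollingUpdateDeployment{
			// At most 25% of desired pods may be unavailable, and at most
			// 25% extra pods may exist, at any point during the rollout.
			MaxUnavailable: &maxUnavailable,
			MaxSurge:       &maxSurge,
		},
	}
	out, _ := json.MarshalIndent(&s, "", "  ")
	fmt.Println(string(out))
}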
STEP: Destroying namespace "deployment-5977" for this suite. Dec 26 13:23:36.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:23:36.480: INFO: namespace deployment-5977 deletion completed in 6.160534964s • [SLOW TEST:21.496 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:23:36.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-abe023b1-f32e-4078-a1c0-f3879c019a30 in namespace container-probe-544 Dec 26 13:23:44.729: INFO: Started pod liveness-abe023b1-f32e-4078-a1c0-f3879c019a30 in namespace container-probe-544 STEP: checking the pod's current state and verifying that restartCount is present Dec 26 13:23:44.734: INFO: Initial restart count of pod liveness-abe023b1-f32e-4078-a1c0-f3879c019a30 is 0 Dec 26 13:24:02.826: INFO: Restart count of pod container-probe-544/liveness-abe023b1-f32e-4078-a1c0-f3879c019a30 is now 1 (18.091666144s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:24:02.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-544" for this suite. 
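The probe test above gives its container an HTTP liveness probe on /healthz; once the handler starts failing, the kubelet kills and restarts the container, which the test observes as restartCount going from 0 to 1. A sketch of such a probe; the port and thresholds are illustrative, and note that in the v1.15-era API this run used, the embedded struct was named Handler, while current releases call it ProbeHandler:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{ // named Handler in older API versions
			HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
		},
		InitialDelaySeconds: 15, // give the server time to come up
		FailureThreshold:    1,  // restart on the first failed check
	}
	out, _ := json.MarshalIndent(&probe, "", "  ")
	fmt.Println(string(out))
}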
Dec 26 13:24:08.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:24:09.025: INFO: namespace container-probe-544 deletion completed in 6.146860415s • [SLOW TEST:32.545 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:24:09.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-5408 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5408 to expose endpoints map[] Dec 26 13:24:09.273: INFO: Get endpoints failed (10.710201ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Dec 26 13:24:10.307: INFO: successfully validated that service endpoint-test2 in namespace services-5408 exposes endpoints map[] (1.044879214s elapsed) STEP: Creating pod pod1 in namespace services-5408 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5408 to expose endpoints map[pod1:[80]] Dec 26 13:24:14.413: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.083727598s elapsed, will retry) Dec 26 13:24:17.450: INFO: successfully validated that service endpoint-test2 in namespace services-5408 exposes endpoints map[pod1:[80]] (7.120495011s elapsed) STEP: Creating pod pod2 in namespace services-5408 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5408 to expose endpoints map[pod1:[80] pod2:[80]] Dec 26 13:24:22.602: INFO: Unexpected endpoints: found map[d4219e26-dcb7-4b3c-8a6b-82b8e9a5df4d:[80]], expected map[pod1:[80] pod2:[80]] (5.136479559s elapsed, will retry) Dec 26 13:24:24.660: INFO: successfully validated that service endpoint-test2 in namespace services-5408 exposes endpoints map[pod1:[80] pod2:[80]] (7.193897431s elapsed) STEP: Deleting pod pod1 in namespace services-5408 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5408 to expose endpoints map[pod2:[80]] Dec 26 13:24:24.689: INFO: successfully validated that service endpoint-test2 in namespace services-5408 exposes endpoints map[pod2:[80]] (18.039007ms elapsed) STEP: Deleting pod pod2 in namespace services-5408 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5408 to expose endpoints map[] Dec 26 13:24:24.789: INFO: successfully validated that service endpoint-test2 in namespace services-5408 exposes endpoints map[] (75.103633ms elapsed) [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:24:24.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5408" for this suite. Dec 26 13:24:46.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:24:47.078: INFO: namespace services-5408 deletion completed in 22.168099683s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:38.053 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:24:47.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-e21cd863-2c3a-450b-a126-2f2840695e25 STEP: Creating a pod to test consume secrets Dec 26 13:24:47.235: INFO: Waiting up to 5m0s for pod "pod-secrets-84447451-e699-486a-9ed6-c8ebc5bcd4e3" in namespace "secrets-6204" to be "success or failure" Dec 26 13:24:47.239: INFO: Pod "pod-secrets-84447451-e699-486a-9ed6-c8ebc5bcd4e3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.932505ms Dec 26 13:24:50.366: INFO: Pod "pod-secrets-84447451-e699-486a-9ed6-c8ebc5bcd4e3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.131474096s Dec 26 13:24:52.375: INFO: Pod "pod-secrets-84447451-e699-486a-9ed6-c8ebc5bcd4e3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.140088831s Dec 26 13:24:54.386: INFO: Pod "pod-secrets-84447451-e699-486a-9ed6-c8ebc5bcd4e3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.151526762s Dec 26 13:24:56.400: INFO: Pod "pod-secrets-84447451-e699-486a-9ed6-c8ebc5bcd4e3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.165420144s Dec 26 13:24:58.411: INFO: Pod "pod-secrets-84447451-e699-486a-9ed6-c8ebc5bcd4e3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.175927638s STEP: Saw pod success Dec 26 13:24:58.411: INFO: Pod "pod-secrets-84447451-e699-486a-9ed6-c8ebc5bcd4e3" satisfied condition "success or failure" Dec 26 13:24:58.419: INFO: Trying to get logs from node iruya-node pod pod-secrets-84447451-e699-486a-9ed6-c8ebc5bcd4e3 container secret-volume-test: STEP: delete the pod Dec 26 13:24:58.551: INFO: Waiting for pod pod-secrets-84447451-e699-486a-9ed6-c8ebc5bcd4e3 to disappear Dec 26 13:24:58.560: INFO: Pod pod-secrets-84447451-e699-486a-9ed6-c8ebc5bcd4e3 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:24:58.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6204" for this suite. Dec 26 13:25:04.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:25:04.718: INFO: namespace secrets-6204 deletion completed in 6.15110246s • [SLOW TEST:17.639 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:25:04.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 26 13:25:04.917: INFO: Pod name rollover-pod: Found 0 pods out of 1 Dec 26 13:25:09.930: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 26 13:25:13.941: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Dec 26 13:25:15.951: INFO: Creating deployment "test-rollover-deployment" Dec 26 13:25:15.967: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Dec 26 13:25:17.983: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Dec 26 13:25:17.994: INFO: Ensure that both replica sets have 1 created replica Dec 26 13:25:17.999: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Dec 26 13:25:18.008: INFO: Updating deployment test-rollover-deployment Dec 26 13:25:18.008: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Dec 26 13:25:20.052: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Dec 26 13:25:20.059: INFO: Make sure deployment "test-rollover-deployment" is complete Dec 26 13:25:20.065: INFO: all replica sets need to contain the pod-template-hash label Dec 26 13:25:20.065: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, 
AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963516, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963516, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963518, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963516, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 26 13:25:22.076: INFO: all replica sets need to contain the pod-template-hash label Dec 26 13:25:22.077: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963516, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963516, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963518, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963516, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 26 13:25:24.085: INFO: all replica sets need to contain the pod-template-hash label Dec 26 13:25:24.085: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963516, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963516, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963518, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963516, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 26 13:25:26.077: INFO: all replica sets need to contain the pod-template-hash label Dec 26 13:25:26.077: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963516, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963516, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963525, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963516, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 26 13:25:28.082: INFO: all replica sets need to contain the pod-template-hash label Dec 26 13:25:28.083: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963516, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963516, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963525, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963516, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 26 13:25:30.080: INFO: all replica sets need to contain the pod-template-hash label Dec 26 13:25:30.080: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963516, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963516, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963525, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963516, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 26 13:25:32.079: INFO: all replica sets need to contain the pod-template-hash label Dec 26 13:25:32.080: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963516, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963516, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963525, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963516, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Dec 26 13:25:34.082: INFO: all replica sets need to contain the pod-template-hash label Dec 26 13:25:34.082: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963516, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963516, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963525, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712963516, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 26 13:25:36.108: INFO: Dec 26 13:25:36.108: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Dec 26 13:25:36.203: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-7770,SelfLink:/apis/apps/v1/namespaces/deployment-7770/deployments/test-rollover-deployment,UID:fd72a801-a86c-4f57-9424-8e256a4c43be,ResourceVersion:18142674,Generation:2,CreationTimestamp:2019-12-26 13:25:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-26 13:25:16 +0000 UTC 2019-12-26 13:25:16 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-26 13:25:36 +0000 UTC 2019-12-26 13:25:16 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Dec 26 13:25:36.209: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-7770,SelfLink:/apis/apps/v1/namespaces/deployment-7770/replicasets/test-rollover-deployment-854595fc44,UID:8890569e-66af-4714-8ef0-df1543ac50fa,ResourceVersion:18142662,Generation:2,CreationTimestamp:2019-12-26 13:25:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment fd72a801-a86c-4f57-9424-8e256a4c43be 0xc002c6c667 0xc002c6c668}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Dec 26 13:25:36.209: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Dec 26 13:25:36.209: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-7770,SelfLink:/apis/apps/v1/namespaces/deployment-7770/replicasets/test-rollover-controller,UID:c69d0440-cc89-43c1-866a-845cc38f4f10,ResourceVersion:18142672,Generation:2,CreationTimestamp:2019-12-26 13:25:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment fd72a801-a86c-4f57-9424-8e256a4c43be 0xc002c6c557 0xc002c6c558}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 26 13:25:36.209: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-7770,SelfLink:/apis/apps/v1/namespaces/deployment-7770/replicasets/test-rollover-deployment-9b8b997cf,UID:6ba8e32b-524c-4530-9425-37cd4dcacf81,ResourceVersion:18142625,Generation:2,CreationTimestamp:2019-12-26 13:25:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment fd72a801-a86c-4f57-9424-8e256a4c43be 0xc002c6c840 0xc002c6c841}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 26 13:25:36.215: INFO: Pod "test-rollover-deployment-854595fc44-kr545" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-kr545,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-7770,SelfLink:/api/v1/namespaces/deployment-7770/pods/test-rollover-deployment-854595fc44-kr545,UID:a90e7fc6-654b-433b-9833-fc154fe30d3e,ResourceVersion:18142646,Generation:0,CreationTimestamp:2019-12-26 13:25:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 
8890569e-66af-4714-8ef0-df1543ac50fa 0xc00285a0c7 0xc00285a0c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dktv5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dktv5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-dktv5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00285a140} {node.kubernetes.io/unreachable Exists NoExecute 0xc00285a160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 13:25:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 13:25:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 13:25:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 13:25:18 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-26 13:25:18 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-26 13:25:25 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://e87755944a9b3e198797786211fa67139646b9416b9631d7c8b6625de1fe3b60}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:25:36.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7770" for this suite. 
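Unwrapped from their Go struct notation, the object dumps above decode to roughly the following Deployment. This is a reconstruction from the logged spec (replicas 1, minReadySeconds 10, rolling update with maxUnavailable 0 and maxSurge 1, a redis container), not a manifest printed by the suite:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: test-rollover-deployment
    spec:
      replicas: 1
      minReadySeconds: 10
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          name: rollover-pod
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0
          maxSurge: 1
      template:
        metadata:
          labels:
            name: rollover-pod
        spec:
          containers:
          - name: redis
            image: gcr.io/kubernetes-e2e-test-images/redis:1.0

minReadySeconds: 10 is also why the poll loop above keeps reporting UnavailableReplicas:1 for some twenty seconds after ReadyReplicas reaches 2: a ready pod only counts as available once it has stayed ready for that long.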
Dec 26 13:25:42.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:25:42.354: INFO: namespace deployment-7770 deletion completed in 6.132393109s • [SLOW TEST:37.635 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:25:42.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Dec 26 13:25:42.570: INFO: Waiting up to 5m0s for pod "pod-42c43f11-3acb-4cba-ad8b-8f507a516082" in namespace "emptydir-8949" to be "success or failure" Dec 26 13:25:42.618: INFO: Pod "pod-42c43f11-3acb-4cba-ad8b-8f507a516082": Phase="Pending", Reason="", readiness=false. Elapsed: 47.842295ms Dec 26 13:25:44.633: INFO: Pod "pod-42c43f11-3acb-4cba-ad8b-8f507a516082": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062225263s Dec 26 13:25:46.646: INFO: Pod "pod-42c43f11-3acb-4cba-ad8b-8f507a516082": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075786843s Dec 26 13:25:48.655: INFO: Pod "pod-42c43f11-3acb-4cba-ad8b-8f507a516082": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084388442s Dec 26 13:25:50.664: INFO: Pod "pod-42c43f11-3acb-4cba-ad8b-8f507a516082": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093617861s Dec 26 13:25:52.673: INFO: Pod "pod-42c43f11-3acb-4cba-ad8b-8f507a516082": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.102421703s STEP: Saw pod success Dec 26 13:25:52.673: INFO: Pod "pod-42c43f11-3acb-4cba-ad8b-8f507a516082" satisfied condition "success or failure" Dec 26 13:25:52.677: INFO: Trying to get logs from node iruya-node pod pod-42c43f11-3acb-4cba-ad8b-8f507a516082 container test-container: STEP: delete the pod Dec 26 13:25:52.718: INFO: Waiting for pod pod-42c43f11-3acb-4cba-ad8b-8f507a516082 to disappear Dec 26 13:25:52.723: INFO: Pod pod-42c43f11-3acb-4cba-ad8b-8f507a516082 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:25:52.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8949" for this suite. 
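The (non-root,0644,tmpfs) triple in this test's name pins down the scenario: a memory-backed emptyDir, a file created with mode 0644, read back by a non-root user. A hand-written equivalent, with the image, UID, and command as illustrative stand-ins for the suite's mounttest tooling:

    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0644-tmpfs
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001                # hypothetical non-root UID
      containers:
      - name: test-container
        image: busybox:1.29            # assumed image
        command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory               # the tmpfs backing named in the test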
Dec 26 13:25:58.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:25:58.837: INFO: namespace emptydir-8949 deletion completed in 6.109447014s • [SLOW TEST:16.483 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:25:58.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-515 STEP: creating a selector STEP: Creating the service pods in kubernetes Dec 26 13:25:58.960: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Dec 26 13:26:33.098: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-515 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 26 13:26:33.098: INFO: >>> kubeConfig: /root/.kube/config Dec 26 13:26:34.616: INFO: Found all expected endpoints: [netserver-0] Dec 26 13:26:34.627: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-515 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 26 13:26:34.627: INFO: >>> kubeConfig: /root/.kube/config Dec 26 13:26:36.062: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:26:36.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-515" for this suite. 
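The nc probes above target "netserver" pods that the framework starts on every schedulable node, each listening on an HTTP port (8080) and a UDP port (8081); the host-side pod then execs the shown shell pipeline against each netserver's pod IP. Roughly, each netserver corresponds to a pod like the one below, with the image tag and flags assumed from the e2e test images of this era rather than taken from the log:

    apiVersion: v1
    kind: Pod
    metadata:
      name: netserver-0
      labels:
        selector-key: netserver        # hypothetical label used by the companion service
    spec:
      containers:
      - name: webserver
        image: gcr.io/kubernetes-e2e-test-images/netexec:1.1   # assumed
        args: ["--http-port=8080", "--udp-port=8081"]
        ports:
        - containerPort: 8080
        - containerPort: 8081
          protocol: UDP

The same pair of netservers, addressed through netexec's /dial HTTP API instead of raw nc, underpins the intra-pod UDP check that follows.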
Dec 26 13:27:02.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:27:02.290: INFO: namespace pod-network-test-515 deletion completed in 26.21607161s • [SLOW TEST:63.453 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:27:02.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-5092 STEP: creating a selector STEP: Creating the service pods in kubernetes Dec 26 13:27:02.505: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Dec 26 13:27:40.687: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-5092 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 26 13:27:40.687: INFO: >>> kubeConfig: /root/.kube/config Dec 26 13:27:41.232: INFO: Waiting for endpoints: map[] Dec 26 13:27:41.245: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-5092 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 26 13:27:41.245: INFO: >>> kubeConfig: /root/.kube/config Dec 26 13:27:41.611: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:27:41.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5092" for this suite. 
Dec 26 13:28:05.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:28:05.851: INFO: namespace pod-network-test-5092 deletion completed in 24.23174217s • [SLOW TEST:63.560 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:28:05.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-6004 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-6004 STEP: Creating statefulset with conflicting port in namespace statefulset-6004 STEP: Waiting until pod test-pod starts running in namespace statefulset-6004 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-6004 Dec 26 13:28:14.216: INFO: Observed stateful pod in namespace: statefulset-6004, name: ss-0, uid: 743e76b6-823b-481c-80a8-6e1192bc69c5, status phase: Pending. Waiting for statefulset controller to delete. Dec 26 13:28:16.489: INFO: Observed stateful pod in namespace: statefulset-6004, name: ss-0, uid: 743e76b6-823b-481c-80a8-6e1192bc69c5, status phase: Failed. Waiting for statefulset controller to delete. Dec 26 13:28:16.544: INFO: Observed stateful pod in namespace: statefulset-6004, name: ss-0, uid: 743e76b6-823b-481c-80a8-6e1192bc69c5, status phase: Failed. Waiting for statefulset controller to delete.
Dec 26 13:28:16.548: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6004 STEP: Removing pod with conflicting port in namespace statefulset-6004 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-6004 and reaches the running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Dec 26 13:28:24.843: INFO: Deleting all statefulsets in ns statefulset-6004 Dec 26 13:28:24.848: INFO: Scaling statefulset ss to 0 Dec 26 13:28:44.881: INFO: Waiting for statefulset status.replicas updated to 0 Dec 26 13:28:44.886: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:28:44.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6004" for this suite. Dec 26 13:28:50.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:28:51.085: INFO: namespace statefulset-6004 deletion completed in 6.142466126s • [SLOW TEST:45.234 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:28:51.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-231bcf49-04e2-4b04-937f-9423eee6c76a STEP: Creating secret with name s-test-opt-upd-49ac0208-75c3-4d5f-bb83-8d6d7ad6ae56 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-231bcf49-04e2-4b04-937f-9423eee6c76a STEP: Updating secret s-test-opt-upd-49ac0208-75c3-4d5f-bb83-8d6d7ad6ae56 STEP: Creating secret with name s-test-opt-create-e001e5f0-081c-44a0-b925-1a9ada0951a8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:30:33.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-221" for this suite.
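The "optional" in this test's name refers to SecretVolumeSource.optional: with it set, a pod can mount a Secret that does not exist yet (s-test-opt-create above) and keeps running when a mounted Secret is deleted (s-test-opt-del). A trimmed sketch of such a pod, with the names shortened and the image assumed:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-optional
    spec:
      containers:
      - name: secret-volume-test
        image: busybox:1.29            # assumed
        command: ["sh", "-c", "while true; do ls /etc/secrets/create; sleep 2; done"]
        volumeMounts:
        - name: create
          mountPath: /etc/secrets/create
      volumes:
      - name: create
        secret:
          secretName: s-test-opt-create
          optional: true               # pod starts even while this Secret is absent

The roughly 100s this test spends "waiting to observe update in volume" is the kubelet's periodic sync propagating the Secret changes into the running pod's mounts.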
Dec 26 13:30:55.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:30:55.903: INFO: namespace secrets-221 deletion completed in 22.172962145s • [SLOW TEST:124.818 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:30:55.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Dec 26 13:30:55.991: INFO: Waiting up to 5m0s for pod "downward-api-73fb10b3-e48f-49fd-9cf1-5db1f5d0af73" in namespace "downward-api-2484" to be "success or failure" Dec 26 13:30:56.013: INFO: Pod "downward-api-73fb10b3-e48f-49fd-9cf1-5db1f5d0af73": Phase="Pending", Reason="", readiness=false. Elapsed: 22.309652ms Dec 26 13:30:58.021: INFO: Pod "downward-api-73fb10b3-e48f-49fd-9cf1-5db1f5d0af73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030186837s Dec 26 13:31:00.153: INFO: Pod "downward-api-73fb10b3-e48f-49fd-9cf1-5db1f5d0af73": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162351665s Dec 26 13:31:02.177: INFO: Pod "downward-api-73fb10b3-e48f-49fd-9cf1-5db1f5d0af73": Phase="Pending", Reason="", readiness=false. Elapsed: 6.186340649s Dec 26 13:31:04.185: INFO: Pod "downward-api-73fb10b3-e48f-49fd-9cf1-5db1f5d0af73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.194182609s STEP: Saw pod success Dec 26 13:31:04.185: INFO: Pod "downward-api-73fb10b3-e48f-49fd-9cf1-5db1f5d0af73" satisfied condition "success or failure" Dec 26 13:31:04.188: INFO: Trying to get logs from node iruya-node pod downward-api-73fb10b3-e48f-49fd-9cf1-5db1f5d0af73 container dapi-container: STEP: delete the pod Dec 26 13:31:04.282: INFO: Waiting for pod downward-api-73fb10b3-e48f-49fd-9cf1-5db1f5d0af73 to disappear Dec 26 13:31:04.287: INFO: Pod downward-api-73fb10b3-e48f-49fd-9cf1-5db1f5d0af73 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:31:04.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2484" for this suite. 
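Exposing the pod UID as an environment variable needs nothing beyond a downward API fieldRef. A minimal equivalent of the "dapi-container" pod above, with the name, image, and variable name as illustrative choices:

    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-uid
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox:1.29            # assumed
        command: ["sh", "-c", "echo POD_UID=$POD_UID"]
        env:
        - name: POD_UID
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid

The test passes when the container log prints a UID matching the pod's metadata, which is what the "Trying to get logs ... container dapi-container" step above checks.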
Dec 26 13:31:10.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:31:10.499: INFO: namespace downward-api-2484 deletion completed in 6.209100279s • [SLOW TEST:14.596 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:31:10.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-2ed0ddae-f8b8-4a8b-99d4-5947e5904f3c STEP: Creating configMap with name cm-test-opt-upd-e1cf47a9-7113-4a11-9c43-5fa648cb0e21 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-2ed0ddae-f8b8-4a8b-99d4-5947e5904f3c STEP: Updating configmap cm-test-opt-upd-e1cf47a9-7113-4a11-9c43-5fa648cb0e21 STEP: Creating configMap with name cm-test-opt-create-08c84515-2f46-4632-b85c-05a9373a7e55 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:31:27.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5174" for this suite. 
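Projected volumes merge several sources into one mount; combined with optional: true, the configMaps named above can be deleted, updated, and created after the pod starts without breaking that mount. A sketch with shortened names and an assumed image:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-configmaps
    spec:
      containers:
      - name: projected-volume-test
        image: busybox:1.29            # assumed
        command: ["sh", "-c", "while true; do ls -R /projected; sleep 2; done"]
        volumeMounts:
        - name: projected-cm
          mountPath: /projected
      volumes:
      - name: projected-cm
        projected:
          sources:
          - configMap:
              name: cm-test-opt-del
              optional: true
          - configMap:
              name: cm-test-opt-upd
              optional: true
          - configMap:
              name: cm-test-opt-create
              optional: true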
Dec 26 13:31:49.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:31:49.090: INFO: namespace projected-5174 deletion completed in 22.081975251s • [SLOW TEST:38.589 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:31:49.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-d61204f8-09e7-4245-ac93-66e039b79384 STEP: Creating a pod to test consume secrets Dec 26 13:31:49.170: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a876a727-cca3-4fae-856b-6ef6807a87da" in namespace "projected-6190" to be "success or failure" Dec 26 13:31:49.174: INFO: Pod "pod-projected-secrets-a876a727-cca3-4fae-856b-6ef6807a87da": Phase="Pending", Reason="", readiness=false. Elapsed: 3.138286ms Dec 26 13:31:51.190: INFO: Pod "pod-projected-secrets-a876a727-cca3-4fae-856b-6ef6807a87da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019026216s Dec 26 13:31:53.204: INFO: Pod "pod-projected-secrets-a876a727-cca3-4fae-856b-6ef6807a87da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033393178s Dec 26 13:31:55.213: INFO: Pod "pod-projected-secrets-a876a727-cca3-4fae-856b-6ef6807a87da": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04268775s Dec 26 13:31:57.224: INFO: Pod "pod-projected-secrets-a876a727-cca3-4fae-856b-6ef6807a87da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053197863s STEP: Saw pod success Dec 26 13:31:57.224: INFO: Pod "pod-projected-secrets-a876a727-cca3-4fae-856b-6ef6807a87da" satisfied condition "success or failure" Dec 26 13:31:57.229: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-a876a727-cca3-4fae-856b-6ef6807a87da container projected-secret-volume-test: STEP: delete the pod Dec 26 13:31:57.272: INFO: Waiting for pod pod-projected-secrets-a876a727-cca3-4fae-856b-6ef6807a87da to disappear Dec 26 13:31:57.318: INFO: Pod pod-projected-secrets-a876a727-cca3-4fae-856b-6ef6807a87da no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:31:57.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6190" for this suite. 
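"Mappings and Item Mode set" means individual Secret keys are remapped to chosen file paths with an explicit per-file mode. An equivalent volume stanza, with the key, path, and mode values as illustrative picks rather than the suite's own:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secrets
    spec:
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: busybox:1.29            # assumed
        command: ["sh", "-c", "ls -l /projected && cat /projected/new-path-data-1"]
        volumeMounts:
        - name: projected-secret
          mountPath: /projected
          readOnly: true
      volumes:
      - name: projected-secret
        projected:
          sources:
          - secret:
              name: projected-secret-test-map
              items:
              - key: data-1            # hypothetical key
                path: new-path-data-1  # mapped filename
                mode: 0400             # the per-item mode the test verifies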
Dec 26 13:32:03.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:32:03.514: INFO: namespace projected-6190 deletion completed in 6.165902767s • [SLOW TEST:14.424 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:32:03.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 26 13:32:03.798: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"c91c0478-b0a0-4f75-9afa-62f9fffb64fb", Controller:(*bool)(0xc002ce8a3a), BlockOwnerDeletion:(*bool)(0xc002ce8a3b)}} Dec 26 13:32:03.852: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"c3854b98-e127-4dd4-8f7c-3f9a7de6b7c5", Controller:(*bool)(0xc001b1e122), BlockOwnerDeletion:(*bool)(0xc001b1e123)}} Dec 26 13:32:03.952: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"5db5bc01-1cc5-479a-98d2-04ef67b241fb", Controller:(*bool)(0xc002ce8bf2), BlockOwnerDeletion:(*bool)(0xc002ce8bf3)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:32:09.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4918" for this suite. 
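The three OwnerReference dumps above form a deliberate cycle, pod1 owned by pod3, pod2 by pod1, pod3 by pod2, and the test's point is that the garbage collector still deletes all of them instead of deadlocking on the circle. The printed references correspond to metadata of the following shape; UIDs are assigned by the API server at creation, so a by-hand reproduction has to create the pods first and fill the references in afterwards (the UID below is copied from the log):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod1
      ownerReferences:
      - apiVersion: v1
        kind: Pod
        name: pod3
        uid: c91c0478-b0a0-4f75-9afa-62f9fffb64fb   # pod3's UID, from the log above
        controller: true
        blockOwnerDeletion: true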
Dec 26 13:32:15.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:32:15.212: INFO: namespace gc-4918 deletion completed in 6.175445039s • [SLOW TEST:11.698 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:32:15.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 26 13:32:15.340: INFO: Waiting up to 5m0s for pod "downwardapi-volume-91d8de71-5c92-4356-ad79-ffe5d56762bd" in namespace "downward-api-6883" to be "success or failure" Dec 26 13:32:15.344: INFO: Pod "downwardapi-volume-91d8de71-5c92-4356-ad79-ffe5d56762bd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.954112ms Dec 26 13:32:17.359: INFO: Pod "downwardapi-volume-91d8de71-5c92-4356-ad79-ffe5d56762bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018981972s Dec 26 13:32:19.365: INFO: Pod "downwardapi-volume-91d8de71-5c92-4356-ad79-ffe5d56762bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025418595s Dec 26 13:32:21.376: INFO: Pod "downwardapi-volume-91d8de71-5c92-4356-ad79-ffe5d56762bd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03593145s Dec 26 13:32:23.382: INFO: Pod "downwardapi-volume-91d8de71-5c92-4356-ad79-ffe5d56762bd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04231902s Dec 26 13:32:25.390: INFO: Pod "downwardapi-volume-91d8de71-5c92-4356-ad79-ffe5d56762bd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.049921307s STEP: Saw pod success Dec 26 13:32:25.390: INFO: Pod "downwardapi-volume-91d8de71-5c92-4356-ad79-ffe5d56762bd" satisfied condition "success or failure" Dec 26 13:32:25.392: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-91d8de71-5c92-4356-ad79-ffe5d56762bd container client-container: STEP: delete the pod Dec 26 13:32:25.453: INFO: Waiting for pod downwardapi-volume-91d8de71-5c92-4356-ad79-ffe5d56762bd to disappear Dec 26 13:32:25.456: INFO: Pod downwardapi-volume-91d8de71-5c92-4356-ad79-ffe5d56762bd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:32:25.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6883" for this suite. Dec 26 13:32:31.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:32:31.607: INFO: namespace downward-api-6883 deletion completed in 6.147244703s • [SLOW TEST:16.394 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:32:31.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Dec 26 13:32:31.705: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:32:48.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2959" for this suite. 
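The only thing the init-container spec above logs is "initContainers in spec.initContainers", so for orientation this is the shape being exercised; names and images here are illustrative, not read from the run:

apiVersion: v1
kind: Pod
metadata:
  name: init-demo                  # illustrative
spec:
  restartPolicy: Always            # the RestartAlways case above
  initContainers:                  # run sequentially to completion first
  - name: init1
    image: busybox
    command: ['sh', '-c', 'true']
  - name: init2
    image: busybox
    command: ['sh', '-c', 'true']
  containers:                      # started only after every init container succeeds
  - name: run1
    image: busybox
    command: ['sh', '-c', 'sleep 3600']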
Dec 26 13:33:11.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:33:11.256: INFO: namespace init-container-2959 deletion completed in 22.181621223s • [SLOW TEST:39.649 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:33:11.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Dec 26 13:33:27.546: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 26 13:33:27.568: INFO: Pod pod-with-poststart-http-hook still exists Dec 26 13:33:29.568: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 26 13:33:29.579: INFO: Pod pod-with-poststart-http-hook still exists Dec 26 13:33:31.568: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 26 13:33:31.582: INFO: Pod pod-with-poststart-http-hook still exists Dec 26 13:33:33.568: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 26 13:33:33.576: INFO: Pod pod-with-poststart-http-hook still exists Dec 26 13:33:35.568: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 26 13:33:35.577: INFO: Pod pod-with-poststart-http-hook still exists Dec 26 13:33:37.568: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 26 13:33:37.576: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:33:37.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6606" for this suite. 
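The poststart flow above needs two pods: a handler serving HTTP and the hooked pod, which is not considered started until the hook call lands. The hook side, as a minimal sketch (image, port, path and host are illustrative; the test points host at the handler pod's IP):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: nginx                   # illustrative
    lifecycle:
      postStart:
        httpGet:
          path: /echo              # illustrative
          port: 8080               # illustrative
          host: 10.32.0.99         # illustrative handler-pod IP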
Dec 26 13:33:59.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:33:59.820: INFO: namespace container-lifecycle-hook-6606 deletion completed in 22.238137657s • [SLOW TEST:48.564 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:33:59.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:33:59.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9716" for this suite. 
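"should provide secure master service" does no pod work at all; it inspects the built-in kubernetes Service in the default namespace and checks for the https port. By hand that is roughly (the ClusterIP shown is a typical default, not read from this run):

kubectl get service kubernetes -n default
# NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
# kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   144d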
Dec 26 13:34:05.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:34:06.080: INFO: namespace services-9716 deletion completed in 6.130574343s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.259 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:34:06.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-55578a49-c6fd-4a6d-9fdc-f7cfda042d5b STEP: Creating a pod to test consume secrets Dec 26 13:34:06.751: INFO: Waiting up to 5m0s for pod "pod-secrets-34f636e5-71de-481e-a42e-3460fb193d93" in namespace "secrets-1368" to be "success or failure" Dec 26 13:34:06.834: INFO: Pod "pod-secrets-34f636e5-71de-481e-a42e-3460fb193d93": Phase="Pending", Reason="", readiness=false. Elapsed: 82.57665ms Dec 26 13:34:08.848: INFO: Pod "pod-secrets-34f636e5-71de-481e-a42e-3460fb193d93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096777098s Dec 26 13:34:10.857: INFO: Pod "pod-secrets-34f636e5-71de-481e-a42e-3460fb193d93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106306669s Dec 26 13:34:12.870: INFO: Pod "pod-secrets-34f636e5-71de-481e-a42e-3460fb193d93": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118698556s Dec 26 13:34:14.879: INFO: Pod "pod-secrets-34f636e5-71de-481e-a42e-3460fb193d93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.127752954s STEP: Saw pod success Dec 26 13:34:14.879: INFO: Pod "pod-secrets-34f636e5-71de-481e-a42e-3460fb193d93" satisfied condition "success or failure" Dec 26 13:34:14.883: INFO: Trying to get logs from node iruya-node pod pod-secrets-34f636e5-71de-481e-a42e-3460fb193d93 container secret-volume-test: STEP: delete the pod Dec 26 13:34:14.959: INFO: Waiting for pod pod-secrets-34f636e5-71de-481e-a42e-3460fb193d93 to disappear Dec 26 13:34:14.963: INFO: Pod pod-secrets-34f636e5-71de-481e-a42e-3460fb193d93 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:34:14.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1368" for this suite. 
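The point of the secrets test above is that a same-named secret in a second namespace (secret-namespace-684, torn down just below) must not affect the mount; the mount itself is an ordinary secret volume, roughly (image and paths illustrative; the secret name is from this run):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo           # illustrative
  namespace: secrets-1368
spec:
  containers:
  - name: secret-volume-test
    image: busybox                 # illustrative
    command: ['sh', '-c', 'cat /etc/secret-volume/*']
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-55578a49-c6fd-4a6d-9fdc-f7cfda042d5b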
Dec 26 13:34:21.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:34:21.106: INFO: namespace secrets-1368 deletion completed in 6.138425608s STEP: Destroying namespace "secret-namespace-684" for this suite. Dec 26 13:34:27.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:34:27.544: INFO: namespace secret-namespace-684 deletion completed in 6.437819745s • [SLOW TEST:21.464 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:34:27.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:34:27.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9751" for this suite. 
Dec 26 13:34:33.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:34:34.038: INFO: namespace kubelet-test-9751 deletion completed in 6.240843367s • [SLOW TEST:6.493 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:34:34.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-058dc712-5493-4af5-9cbe-afefb782e4d1 in namespace container-probe-4952 Dec 26 13:34:42.223: INFO: Started pod busybox-058dc712-5493-4af5-9cbe-afefb782e4d1 in namespace container-probe-4952 STEP: checking the pod's current state and verifying that restartCount is present Dec 26 13:34:42.225: INFO: Initial restart count of pod busybox-058dc712-5493-4af5-9cbe-afefb782e4d1 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:38:43.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4952" for this suite. 
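The four quiet minutes above (13:34:42 to 13:38:43) are the test watching restartCount stay at 0; the probe it installs keeps succeeding because the file it cats is never removed. A minimal sketch (intervals illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-demo      # illustrative
spec:
  containers:
  - name: busybox
    image: busybox
    command: ['sh', '-c', 'touch /tmp/health; sleep 3600']
    livenessProbe:
      exec:
        command: ['cat', '/tmp/health']
      initialDelaySeconds: 5       # illustrative
      periodSeconds: 10            # illustrative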
Dec 26 13:38:49.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:38:50.064: INFO: namespace container-probe-4952 deletion completed in 6.297923975s • [SLOW TEST:256.025 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:38:50.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Dec 26 13:38:50.278: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6489,SelfLink:/api/v1/namespaces/watch-6489/configmaps/e2e-watch-test-resource-version,UID:03cc335d-1735-43f2-b43b-2ad3438a62e8,ResourceVersion:18144384,Generation:0,CreationTimestamp:2019-12-26 13:38:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 26 13:38:50.279: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6489,SelfLink:/api/v1/namespaces/watch-6489/configmaps/e2e-watch-test-resource-version,UID:03cc335d-1735-43f2-b43b-2ad3438a62e8,ResourceVersion:18144385,Generation:0,CreationTimestamp:2019-12-26 13:38:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:38:50.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6489" for this suite. 
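Both events above replay history: the watch is opened after the configmap is already deleted, yet MODIFIED (mutation 2) and DELETED still arrive, because the watch starts from the resource version of the first update. Against the raw API that is roughly (the starting version is illustrative; the test uses the value returned by its first update):

RV=18144383   # illustrative starting point
kubectl get --raw "/api/v1/namespaces/watch-6489/configmaps?watch=true&resourceVersion=${RV}&fieldSelector=metadata.name=e2e-watch-test-resource-version"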
Dec 26 13:38:56.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:38:56.549: INFO: namespace watch-6489 deletion completed in 6.261837837s • [SLOW TEST:6.484 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:38:56.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-a1d418f0-26b5-401e-aa3b-ed1dabea9cee STEP: Creating secret with name s-test-opt-upd-46713284-b69c-4c91-89e1-967c9fdead74 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-a1d418f0-26b5-401e-aa3b-ed1dabea9cee STEP: Updating secret s-test-opt-upd-46713284-b69c-4c91-89e1-967c9fdead74 STEP: Creating secret with name s-test-opt-create-a31be7b0-5cf6-4f39-9513-c49b4cb4a045 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:40:32.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3821" for this suite. 
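The projected-secret pod above mounts all three secrets through a single projected volume, which is why a delete, an update, and a late create can each be observed in the same mount. In outline (pod name, image and mount path illustrative; secret names from this run):

apiVersion: v1
kind: Pod
metadata:
  name: projected-demo             # illustrative
spec:
  containers:
  - name: c
    image: busybox                 # illustrative
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - name: projected
      mountPath: /projected-volume
  volumes:
  - name: projected
    projected:
      sources:
      - secret:
          name: s-test-opt-del-a1d418f0-26b5-401e-aa3b-ed1dabea9cee
          optional: true           # tolerates the deletion above
      - secret:
          name: s-test-opt-upd-46713284-b69c-4c91-89e1-967c9fdead74
          optional: true
      - secret:
          name: s-test-opt-create-a31be7b0-5cf6-4f39-9513-c49b4cb4a045
          optional: true           # appears once the secret is created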
Dec 26 13:40:55.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:40:55.511: INFO: namespace projected-3821 deletion completed in 22.546707171s • [SLOW TEST:118.962 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:40:55.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Dec 26 13:40:55.582: INFO: namespace kubectl-2196 Dec 26 13:40:55.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2196' Dec 26 13:40:57.906: INFO: stderr: "" Dec 26 13:40:57.906: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Dec 26 13:40:58.925: INFO: Selector matched 1 pods for map[app:redis] Dec 26 13:40:58.925: INFO: Found 0 / 1 Dec 26 13:40:59.916: INFO: Selector matched 1 pods for map[app:redis] Dec 26 13:40:59.916: INFO: Found 0 / 1 Dec 26 13:41:00.921: INFO: Selector matched 1 pods for map[app:redis] Dec 26 13:41:00.921: INFO: Found 0 / 1 Dec 26 13:41:01.914: INFO: Selector matched 1 pods for map[app:redis] Dec 26 13:41:01.914: INFO: Found 0 / 1 Dec 26 13:41:02.914: INFO: Selector matched 1 pods for map[app:redis] Dec 26 13:41:02.914: INFO: Found 0 / 1 Dec 26 13:41:03.923: INFO: Selector matched 1 pods for map[app:redis] Dec 26 13:41:03.923: INFO: Found 0 / 1 Dec 26 13:41:04.921: INFO: Selector matched 1 pods for map[app:redis] Dec 26 13:41:04.921: INFO: Found 1 / 1 Dec 26 13:41:04.921: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Dec 26 13:41:04.926: INFO: Selector matched 1 pods for map[app:redis] Dec 26 13:41:04.926: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Dec 26 13:41:04.926: INFO: wait on redis-master startup in kubectl-2196 Dec 26 13:41:04.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-fmr26 redis-master --namespace=kubectl-2196' Dec 26 13:41:05.129: INFO: stderr: "" Dec 26 13:41:05.129: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 26 Dec 13:41:03.866 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 26 Dec 13:41:03.867 # Server started, Redis version 3.2.12\n1:M 26 Dec 13:41:03.868 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 26 Dec 13:41:03.868 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Dec 26 13:41:05.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2196' Dec 26 13:41:05.296: INFO: stderr: "" Dec 26 13:41:05.296: INFO: stdout: "service/rm2 exposed\n" Dec 26 13:41:05.304: INFO: Service rm2 in namespace kubectl-2196 found. STEP: exposing service Dec 26 13:41:07.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2196' Dec 26 13:41:07.582: INFO: stderr: "" Dec 26 13:41:07.582: INFO: stdout: "service/rm3 exposed\n" Dec 26 13:41:07.595: INFO: Service rm3 in namespace kubectl-2196 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:41:09.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2196" for this suite. 
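Each expose call above is shorthand for a plain Service; rm2, for instance, is equivalent to a manifest along these lines (the selector comes from the RC's pod labels seen earlier in the run):

apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-2196
spec:
  selector:
    app: redis
  ports:
  - port: 1234
    targetPort: 6379

Exposing a service (rm3) works the same way, with the new port forwarded to the same targetPort.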
Dec 26 13:41:31.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:41:31.812: INFO: namespace kubectl-2196 deletion completed in 22.187312886s • [SLOW TEST:36.300 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:41:31.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Dec 26 13:41:31.929: INFO: Waiting up to 5m0s for pod "pod-db32e9f4-0edc-4a78-8be0-60f726d16686" in namespace "emptydir-7676" to be "success or failure" Dec 26 13:41:31.937: INFO: Pod "pod-db32e9f4-0edc-4a78-8be0-60f726d16686": Phase="Pending", Reason="", readiness=false. Elapsed: 8.246588ms Dec 26 13:41:33.947: INFO: Pod "pod-db32e9f4-0edc-4a78-8be0-60f726d16686": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017507415s Dec 26 13:41:35.955: INFO: Pod "pod-db32e9f4-0edc-4a78-8be0-60f726d16686": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025548779s Dec 26 13:41:37.962: INFO: Pod "pod-db32e9f4-0edc-4a78-8be0-60f726d16686": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032512342s Dec 26 13:41:39.969: INFO: Pod "pod-db32e9f4-0edc-4a78-8be0-60f726d16686": Phase="Running", Reason="", readiness=true. Elapsed: 8.040082956s Dec 26 13:41:41.980: INFO: Pod "pod-db32e9f4-0edc-4a78-8be0-60f726d16686": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.050443845s STEP: Saw pod success Dec 26 13:41:41.980: INFO: Pod "pod-db32e9f4-0edc-4a78-8be0-60f726d16686" satisfied condition "success or failure" Dec 26 13:41:41.984: INFO: Trying to get logs from node iruya-node pod pod-db32e9f4-0edc-4a78-8be0-60f726d16686 container test-container: STEP: delete the pod Dec 26 13:41:42.147: INFO: Waiting for pod pod-db32e9f4-0edc-4a78-8be0-60f726d16686 to disappear Dec 26 13:41:42.153: INFO: Pod pod-db32e9f4-0edc-4a78-8be0-60f726d16686 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:41:42.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7676" for this suite. 
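The tmpfs case above reduces to emptyDir's medium field; a minimal sketch (image, command and mount path illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # illustrative
spec:
  containers:
  - name: test-container
    image: busybox                 # illustrative
    command: ['sh', '-c', 'mount | grep /ed']
    volumeMounts:
    - name: ed
      mountPath: /ed
  volumes:
  - name: ed
    emptyDir:
      medium: Memory               # back the volume with tmpfs, not node disk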
Dec 26 13:41:48.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:41:48.343: INFO: namespace emptydir-7676 deletion completed in 6.184974431s • [SLOW TEST:16.530 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:41:48.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Dec 26 13:41:48.446: INFO: Waiting up to 5m0s for pod "pod-6e9ccede-e6bf-4c04-b4ad-befb34cf6526" in namespace "emptydir-3164" to be "success or failure" Dec 26 13:41:48.493: INFO: Pod "pod-6e9ccede-e6bf-4c04-b4ad-befb34cf6526": Phase="Pending", Reason="", readiness=false. Elapsed: 47.396784ms Dec 26 13:41:50.529: INFO: Pod "pod-6e9ccede-e6bf-4c04-b4ad-befb34cf6526": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08347476s Dec 26 13:41:52.551: INFO: Pod "pod-6e9ccede-e6bf-4c04-b4ad-befb34cf6526": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104621614s Dec 26 13:41:54.567: INFO: Pod "pod-6e9ccede-e6bf-4c04-b4ad-befb34cf6526": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120510192s Dec 26 13:41:56.590: INFO: Pod "pod-6e9ccede-e6bf-4c04-b4ad-befb34cf6526": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.143633225s STEP: Saw pod success Dec 26 13:41:56.590: INFO: Pod "pod-6e9ccede-e6bf-4c04-b4ad-befb34cf6526" satisfied condition "success or failure" Dec 26 13:41:56.605: INFO: Trying to get logs from node iruya-node pod pod-6e9ccede-e6bf-4c04-b4ad-befb34cf6526 container test-container: STEP: delete the pod Dec 26 13:41:56.682: INFO: Waiting for pod pod-6e9ccede-e6bf-4c04-b4ad-befb34cf6526 to disappear Dec 26 13:41:56.685: INFO: Pod pod-6e9ccede-e6bf-4c04-b4ad-befb34cf6526 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:41:56.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3164" for this suite. 
Dec 26 13:42:02.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:42:02.833: INFO: namespace emptydir-3164 deletion completed in 6.1391038s • [SLOW TEST:14.490 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:42:02.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Dec 26 13:42:02.983: INFO: Waiting up to 5m0s for pod "client-containers-d25e6f8a-3f69-451a-9562-d47c56132d9b" in namespace "containers-3196" to be "success or failure" Dec 26 13:42:03.053: INFO: Pod "client-containers-d25e6f8a-3f69-451a-9562-d47c56132d9b": Phase="Pending", Reason="", readiness=false. Elapsed: 69.618101ms Dec 26 13:42:05.065: INFO: Pod "client-containers-d25e6f8a-3f69-451a-9562-d47c56132d9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082025195s Dec 26 13:42:07.072: INFO: Pod "client-containers-d25e6f8a-3f69-451a-9562-d47c56132d9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088962958s Dec 26 13:42:09.150: INFO: Pod "client-containers-d25e6f8a-3f69-451a-9562-d47c56132d9b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.167074531s Dec 26 13:42:11.160: INFO: Pod "client-containers-d25e6f8a-3f69-451a-9562-d47c56132d9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.176490387s STEP: Saw pod success Dec 26 13:42:11.160: INFO: Pod "client-containers-d25e6f8a-3f69-451a-9562-d47c56132d9b" satisfied condition "success or failure" Dec 26 13:42:11.165: INFO: Trying to get logs from node iruya-node pod client-containers-d25e6f8a-3f69-451a-9562-d47c56132d9b container test-container: STEP: delete the pod Dec 26 13:42:11.420: INFO: Waiting for pod client-containers-d25e6f8a-3f69-451a-9562-d47c56132d9b to disappear Dec 26 13:42:11.430: INFO: Pod client-containers-d25e6f8a-3f69-451a-9562-d47c56132d9b no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:42:11.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3196" for this suite. 
Dec 26 13:42:17.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:42:17.666: INFO: namespace containers-3196 deletion completed in 6.228506378s • [SLOW TEST:14.833 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:42:17.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 26 13:42:17.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-8766' Dec 26 13:42:17.939: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 26 13:42:17.939: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Dec 26 13:42:18.017: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-smbrc] Dec 26 13:42:18.017: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-smbrc" in namespace "kubectl-8766" to be "running and ready" Dec 26 13:42:18.027: INFO: Pod "e2e-test-nginx-rc-smbrc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.89716ms Dec 26 13:42:20.088: INFO: Pod "e2e-test-nginx-rc-smbrc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07038228s Dec 26 13:42:22.100: INFO: Pod "e2e-test-nginx-rc-smbrc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082797716s Dec 26 13:42:24.108: INFO: Pod "e2e-test-nginx-rc-smbrc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090746513s Dec 26 13:42:26.117: INFO: Pod "e2e-test-nginx-rc-smbrc": Phase="Running", Reason="", readiness=true. Elapsed: 8.099721793s Dec 26 13:42:26.117: INFO: Pod "e2e-test-nginx-rc-smbrc" satisfied condition "running and ready" Dec 26 13:42:26.117: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-smbrc] Dec 26 13:42:26.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-8766' Dec 26 13:42:26.293: INFO: stderr: "" Dec 26 13:42:26.293: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Dec 26 13:42:26.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-8766' Dec 26 13:42:26.421: INFO: stderr: "" Dec 26 13:42:26.422: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:42:26.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8766" for this suite. Dec 26 13:42:48.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:42:48.607: INFO: namespace kubectl-8766 deletion completed in 22.1766604s • [SLOW TEST:30.940 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:42:48.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:43:19.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3343" for this suite. Dec 26 13:43:25.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:43:25.407: INFO: namespace namespaces-3343 deletion completed in 6.249474463s STEP: Destroying namespace "nsdeletetest-3111" for this suite. Dec 26 13:43:25.410: INFO: Namespace nsdeletetest-3111 was already deleted STEP: Destroying namespace "nsdeletetest-7398" for this suite. 
Dec 26 13:43:31.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:43:31.546: INFO: namespace nsdeletetest-7398 deletion completed in 6.136443679s • [SLOW TEST:42.939 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:43:31.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-8ff24a31-ebd3-4a38-ab44-936fea37ecc8 STEP: Creating a pod to test consume configMaps Dec 26 13:43:31.755: INFO: Waiting up to 5m0s for pod "pod-configmaps-01f4bb3f-745f-4f45-97fb-712eaba8fbcf" in namespace "configmap-3667" to be "success or failure" Dec 26 13:43:31.768: INFO: Pod "pod-configmaps-01f4bb3f-745f-4f45-97fb-712eaba8fbcf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.18583ms Dec 26 13:43:33.809: INFO: Pod "pod-configmaps-01f4bb3f-745f-4f45-97fb-712eaba8fbcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053533538s Dec 26 13:43:35.817: INFO: Pod "pod-configmaps-01f4bb3f-745f-4f45-97fb-712eaba8fbcf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061096357s Dec 26 13:43:37.825: INFO: Pod "pod-configmaps-01f4bb3f-745f-4f45-97fb-712eaba8fbcf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06923051s Dec 26 13:43:39.832: INFO: Pod "pod-configmaps-01f4bb3f-745f-4f45-97fb-712eaba8fbcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076445597s STEP: Saw pod success Dec 26 13:43:39.832: INFO: Pod "pod-configmaps-01f4bb3f-745f-4f45-97fb-712eaba8fbcf" satisfied condition "success or failure" Dec 26 13:43:39.836: INFO: Trying to get logs from node iruya-node pod pod-configmaps-01f4bb3f-745f-4f45-97fb-712eaba8fbcf container configmap-volume-test: STEP: delete the pod Dec 26 13:43:39.909: INFO: Waiting for pod pod-configmaps-01f4bb3f-745f-4f45-97fb-712eaba8fbcf to disappear Dec 26 13:43:40.019: INFO: Pod pod-configmaps-01f4bb3f-745f-4f45-97fb-712eaba8fbcf no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:43:40.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3667" for this suite. 
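Non-root consumption above is the standard configMap volume plus a pod-level securityContext; roughly (uid, image and paths illustrative; the configMap name is from this run):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo        # illustrative
spec:
  securityContext:
    runAsUser: 1000                # illustrative non-root uid
  containers:
  - name: configmap-volume-test
    image: busybox                 # illustrative
    command: ['sh', '-c', 'cat /etc/configmap-volume/*']
    volumeMounts:
    - name: cm
      mountPath: /etc/configmap-volume
  volumes:
  - name: cm
    configMap:
      name: configmap-test-volume-8ff24a31-ebd3-4a38-ab44-936fea37ecc8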
Dec 26 13:43:46.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:43:46.187: INFO: namespace configmap-3667 deletion completed in 6.158659563s • [SLOW TEST:14.640 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:43:46.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:43:55.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8961" for this suite. 
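Adoption above hinges on nothing more than the controller's selector matching the orphan's label; a sketch of the RC side (names and image illustrative):

apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption             # matches the pre-existing pod's 'name' label
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: nginx               # illustrative

Because a live pod already satisfies the selector, the RC takes ownership of it instead of creating a replacement.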
Dec 26 13:44:17.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:44:17.650: INFO: namespace replication-controller-8961 deletion completed in 22.251848348s • [SLOW TEST:31.462 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:44:17.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Dec 26 13:44:17.769: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix665890885/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:44:17.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7804" for this suite. 
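The proxy check above just fetches /api/ over the socket; by hand that is roughly (socket path illustrative):

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/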
Dec 26 13:44:23.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:44:24.016: INFO: namespace kubectl-7804 deletion completed in 6.118248234s • [SLOW TEST:6.366 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:44:24.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Dec 26 13:44:24.125: INFO: Waiting up to 5m0s for pod "client-containers-234c325e-b7af-4c85-8846-571dd552e043" in namespace "containers-5341" to be "success or failure" Dec 26 13:44:24.133: INFO: Pod "client-containers-234c325e-b7af-4c85-8846-571dd552e043": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019525ms Dec 26 13:44:26.144: INFO: Pod "client-containers-234c325e-b7af-4c85-8846-571dd552e043": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019371759s Dec 26 13:44:28.165: INFO: Pod "client-containers-234c325e-b7af-4c85-8846-571dd552e043": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040887463s Dec 26 13:44:30.172: INFO: Pod "client-containers-234c325e-b7af-4c85-8846-571dd552e043": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047323286s Dec 26 13:44:32.189: INFO: Pod "client-containers-234c325e-b7af-4c85-8846-571dd552e043": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063961606s Dec 26 13:44:34.198: INFO: Pod "client-containers-234c325e-b7af-4c85-8846-571dd552e043": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073506558s STEP: Saw pod success Dec 26 13:44:34.198: INFO: Pod "client-containers-234c325e-b7af-4c85-8846-571dd552e043" satisfied condition "success or failure" Dec 26 13:44:34.205: INFO: Trying to get logs from node iruya-node pod client-containers-234c325e-b7af-4c85-8846-571dd552e043 container test-container: STEP: delete the pod Dec 26 13:44:34.281: INFO: Waiting for pod client-containers-234c325e-b7af-4c85-8846-571dd552e043 to disappear Dec 26 13:44:34.293: INFO: Pod client-containers-234c325e-b7af-4c85-8846-571dd552e043 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:44:34.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5341" for this suite. 
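This test and the "default arguments" one at 13:42 above are the two halves of the image override mapping: command replaces the image's ENTRYPOINT, args replaces its CMD. As a sketch (image and values illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo     # illustrative
spec:
  containers:
  - name: test-container
    image: busybox                 # illustrative
    command: ['echo']              # overrides the image ENTRYPOINT
    args: ['override', 'arguments']  # overrides the image CMD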
Dec 26 13:44:40.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:44:40.550: INFO: namespace containers-5341 deletion completed in 6.248283892s • [SLOW TEST:16.533 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:44:40.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-fd3adeb4-5449-4ebb-a9a4-bedbb2768087 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:44:40.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5124" for this suite. Dec 26 13:44:46.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:44:46.807: INFO: namespace secrets-5124 deletion completed in 6.127317408s • [SLOW TEST:6.257 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:44:46.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 26 13:44:46.929: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:44:48.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-878" 
for this suite. Dec 26 13:44:54.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:44:54.294: INFO: namespace custom-resource-definition-878 deletion completed in 6.151422639s • [SLOW TEST:7.486 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:44:54.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Dec 26 13:44:54.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Dec 26 13:44:54.554: INFO: stderr: "" Dec 26 13:44:54.554: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:44:54.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6455" for this suite. 
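The api-versions spec above asserts only that the core group appears in the server's list. The same check can be run by hand against this cluster's kubeconfig (grep -x requires an exact whole-line match):

# Print every group/version the server supports and require an exact 'v1' line.
kubectl --kubeconfig=/root/.kube/config api-versions | grep -x v1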
Dec 26 13:45:00.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:45:00.715: INFO: namespace kubectl-6455 deletion completed in 6.154054292s • [SLOW TEST:6.421 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:45:00.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Dec 26 13:45:00.798: INFO: Waiting up to 5m0s for pod "downward-api-2617c88d-3749-44ae-868e-4d347ce528c9" in namespace "downward-api-4737" to be "success or failure" Dec 26 13:45:00.827: INFO: Pod "downward-api-2617c88d-3749-44ae-868e-4d347ce528c9": Phase="Pending", Reason="", readiness=false. Elapsed: 28.284787ms Dec 26 13:45:02.905: INFO: Pod "downward-api-2617c88d-3749-44ae-868e-4d347ce528c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106330465s Dec 26 13:45:04.919: INFO: Pod "downward-api-2617c88d-3749-44ae-868e-4d347ce528c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120666385s Dec 26 13:45:06.930: INFO: Pod "downward-api-2617c88d-3749-44ae-868e-4d347ce528c9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131522678s Dec 26 13:45:08.951: INFO: Pod "downward-api-2617c88d-3749-44ae-868e-4d347ce528c9": Phase="Running", Reason="", readiness=true. Elapsed: 8.152211061s Dec 26 13:45:11.017: INFO: Pod "downward-api-2617c88d-3749-44ae-868e-4d347ce528c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.218278608s STEP: Saw pod success Dec 26 13:45:11.017: INFO: Pod "downward-api-2617c88d-3749-44ae-868e-4d347ce528c9" satisfied condition "success or failure" Dec 26 13:45:11.029: INFO: Trying to get logs from node iruya-node pod downward-api-2617c88d-3749-44ae-868e-4d347ce528c9 container dapi-container: STEP: delete the pod Dec 26 13:45:11.089: INFO: Waiting for pod downward-api-2617c88d-3749-44ae-868e-4d347ce528c9 to disappear Dec 26 13:45:11.199: INFO: Pod downward-api-2617c88d-3749-44ae-868e-4d347ce528c9 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:45:11.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4737" for this suite. 
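The downward-api pod in the spec above is built programmatically, so only its generated name appears in the log. A minimal hand-written sketch of the same pattern, with illustrative names (the fieldRef paths are the stable downward API):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo        # illustrative; the spec uses a UID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep MY_POD"]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF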
Dec 26 13:45:17.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:45:17.449: INFO: namespace downward-api-4737 deletion completed in 6.241212635s • [SLOW TEST:16.733 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:45:17.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-0a819cea-2b92-4823-9e98-6aac7b753cd9 STEP: Creating secret with name secret-projected-all-test-volume-eb79bf6d-35cb-4cf7-b145-9e34ca087a3f STEP: Creating a pod to test Check all projections for projected volume plugin Dec 26 13:45:17.634: INFO: Waiting up to 5m0s for pod "projected-volume-1b390e57-037a-42fb-ba0e-83442aea6ddc" in namespace "projected-1184" to be "success or failure" Dec 26 13:45:17.640: INFO: Pod "projected-volume-1b390e57-037a-42fb-ba0e-83442aea6ddc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.478817ms Dec 26 13:45:19.650: INFO: Pod "projected-volume-1b390e57-037a-42fb-ba0e-83442aea6ddc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015498561s Dec 26 13:45:21.660: INFO: Pod "projected-volume-1b390e57-037a-42fb-ba0e-83442aea6ddc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025736407s Dec 26 13:45:23.674: INFO: Pod "projected-volume-1b390e57-037a-42fb-ba0e-83442aea6ddc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039359746s Dec 26 13:45:25.686: INFO: Pod "projected-volume-1b390e57-037a-42fb-ba0e-83442aea6ddc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051662332s STEP: Saw pod success Dec 26 13:45:25.686: INFO: Pod "projected-volume-1b390e57-037a-42fb-ba0e-83442aea6ddc" satisfied condition "success or failure" Dec 26 13:45:25.689: INFO: Trying to get logs from node iruya-node pod projected-volume-1b390e57-037a-42fb-ba0e-83442aea6ddc container projected-all-volume-test: STEP: delete the pod Dec 26 13:45:25.738: INFO: Waiting for pod projected-volume-1b390e57-037a-42fb-ba0e-83442aea6ddc to disappear Dec 26 13:45:25.785: INFO: Pod projected-volume-1b390e57-037a-42fb-ba0e-83442aea6ddc no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:45:25.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1184" for this suite. 
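The projected-volume pod above combines three volume sources behind one mount. A minimal sketch of that shape, assuming a pre-created ConfigMap and Secret with illustrative names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["sh", "-c", "ls -R /projected-volume"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected-volume
  volumes:
  - name: all-in-one
    projected:
      sources:                      # configMap, secret and downwardAPI in a single volume
      - configMap:
          name: projected-cm-demo   # illustrative; must already exist in the namespace
      - secret:
          name: projected-secret-demo
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF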
Dec 26 13:45:31.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:45:31.952: INFO: namespace projected-1184 deletion completed in 6.130951082s • [SLOW TEST:14.502 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:45:31.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Dec 26 13:45:41.288: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:45:41.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2496" for this suite. 
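The spec above only logs its STEPs; the mechanism under test is the terminationMessagePolicy field. A hand-rolled sketch with illustrative names: the container writes DONE to its log and fails, and the kubelet falls back to the log tail for the termination message.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: term-msg
    image: busybox
    command: ["sh", "-c", "echo DONE; exit 1"]   # log something, then fail
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# Once the container has terminated, the message holds the log tail ("DONE"):
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'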
Dec 26 13:45:47.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:45:47.840: INFO: namespace container-runtime-2496 deletion completed in 6.477078707s • [SLOW TEST:15.888 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:45:47.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-a310b136-1b6b-42f7-9e60-be0ef0a800ca STEP: Creating the pod STEP: Updating configmap configmap-test-upd-a310b136-1b6b-42f7-9e60-be0ef0a800ca STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:46:00.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8277" for this suite. 
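The ConfigMap-update spec relies on the kubelet refreshing mounted ConfigMap files in place. A minimal reproduction with illustrative names; after the patch, the mounted file converges to the new value within the kubelet's sync period:

kubectl create configmap configmap-upd-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  containers:
  - name: cm-watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: configmap-upd-demo
EOF
kubectl patch configmap configmap-upd-demo -p '{"data":{"data-1":"value-2"}}'
# kubectl logs -f configmap-volume-demo eventually shows value-2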
Dec 26 13:46:22.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:46:22.361: INFO: namespace configmap-8277 deletion completed in 22.121456274s • [SLOW TEST:34.520 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:46:22.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-7641 STEP: creating a selector STEP: Creating the service pods in kubernetes Dec 26 13:46:22.446: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Dec 26 13:47:00.603: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-7641 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 26 13:47:00.603: INFO: >>> kubeConfig: /root/.kube/config Dec 26 13:47:00.991: INFO: Waiting for endpoints: map[] Dec 26 13:47:00.999: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-7641 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 26 13:47:00.999: INFO: >>> kubeConfig: /root/.kube/config Dec 26 13:47:01.370: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:47:01.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7641" for this suite. 
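The netserver/dial machinery above is test-framework plumbing; the property it checks is plain pod-to-pod HTTP reachability. A reduced sketch (nginx stands in for the framework's netserver image; names are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: netserver-demo
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
EOF
# Once the pod is Running, fetch from its pod IP out of a second pod:
SERVER_IP=$(kubectl get pod netserver-demo -o jsonpath='{.status.podIP}')
kubectl run client-demo --image=busybox --restart=Never --rm -i -- \
  wget -qO- -T 5 "http://${SERVER_IP}/"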
Dec 26 13:47:25.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:47:25.558: INFO: namespace pod-network-test-7641 deletion completed in 24.177269018s • [SLOW TEST:63.197 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:47:25.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 26 13:47:33.945: INFO: Waiting up to 5m0s for pod "client-envvars-ea8a54b5-3af2-4b08-87d7-03462235980e" in namespace "pods-7885" to be "success or failure" Dec 26 13:47:33.968: INFO: Pod "client-envvars-ea8a54b5-3af2-4b08-87d7-03462235980e": Phase="Pending", Reason="", readiness=false. Elapsed: 22.837383ms Dec 26 13:47:35.974: INFO: Pod "client-envvars-ea8a54b5-3af2-4b08-87d7-03462235980e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029506539s Dec 26 13:47:37.982: INFO: Pod "client-envvars-ea8a54b5-3af2-4b08-87d7-03462235980e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037005254s Dec 26 13:47:40.066: INFO: Pod "client-envvars-ea8a54b5-3af2-4b08-87d7-03462235980e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120991331s Dec 26 13:47:42.078: INFO: Pod "client-envvars-ea8a54b5-3af2-4b08-87d7-03462235980e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.132780543s STEP: Saw pod success Dec 26 13:47:42.078: INFO: Pod "client-envvars-ea8a54b5-3af2-4b08-87d7-03462235980e" satisfied condition "success or failure" Dec 26 13:47:42.083: INFO: Trying to get logs from node iruya-node pod client-envvars-ea8a54b5-3af2-4b08-87d7-03462235980e container env3cont: STEP: delete the pod Dec 26 13:47:42.200: INFO: Waiting for pod client-envvars-ea8a54b5-3af2-4b08-87d7-03462235980e to disappear Dec 26 13:47:42.224: INFO: Pod client-envvars-ea8a54b5-3af2-4b08-87d7-03462235980e no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:47:42.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7885" for this suite. 
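The env-vars-for-services spec depends on the kubelet injecting {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables into containers started after the Service exists (dashes become underscores, names are upper-cased). An illustrative reproduction:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: envvar-demo-svc
spec:
  selector:
    app: envvar-demo
  ports:
  - port: 8080
EOF
# A pod created afterwards sees the generated variables:
kubectl run env-printer --image=busybox --restart=Never --rm -i -- \
  sh -c 'env | grep ENVVAR_DEMO_SVC'
# e.g. ENVVAR_DEMO_SVC_SERVICE_HOST=<cluster IP>, ENVVAR_DEMO_SVC_SERVICE_PORT=8080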
Dec 26 13:48:28.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:48:28.381: INFO: namespace pods-7885 deletion completed in 46.14818312s • [SLOW TEST:62.823 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:48:28.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Dec 26 13:48:40.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-4635458c-a76f-4332-ba34-2b73430b0e3c -c busybox-main-container --namespace=emptydir-2704 -- cat /usr/share/volumeshare/shareddata.txt' Dec 26 13:48:41.147: INFO: stderr: "" Dec 26 13:48:41.147: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:48:41.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2704" for this suite.
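The shared-volume pod above can be written out by hand; this sketch mirrors the container and path names from the log (one container drops a file into the emptyDir, the main container reads it):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-demo    # the spec uses a UID-suffixed name
spec:
  containers:
  - name: busybox-main-container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container
    image: busybox
    command: ["sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt; sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  volumes:
  - name: shared-data
    emptyDir: {}
EOF
kubectl exec pod-sharedvolume-demo -c busybox-main-container -- \
  cat /usr/share/volumeshare/shareddata.txt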
Dec 26 13:48:47.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:48:47.494: INFO: namespace emptydir-2704 deletion completed in 6.338893196s • [SLOW TEST:19.110 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:48:47.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4399.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4399.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4399.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4399.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 26 13:49:01.698: INFO: File wheezy_udp@dns-test-service-3.dns-4399.svc.cluster.local from pod dns-4399/dns-test-4687b7da-b8df-4899-bb84-b5db73a1b194 contains '' instead of 'foo.example.com.' Dec 26 13:49:01.707: INFO: File jessie_udp@dns-test-service-3.dns-4399.svc.cluster.local from pod dns-4399/dns-test-4687b7da-b8df-4899-bb84-b5db73a1b194 contains '' instead of 'foo.example.com.' Dec 26 13:49:01.707: INFO: Lookups using dns-4399/dns-test-4687b7da-b8df-4899-bb84-b5db73a1b194 failed for: [wheezy_udp@dns-test-service-3.dns-4399.svc.cluster.local jessie_udp@dns-test-service-3.dns-4399.svc.cluster.local] Dec 26 13:49:06.726: INFO: DNS probes using dns-test-4687b7da-b8df-4899-bb84-b5db73a1b194 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4399.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4399.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4399.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4399.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 26 13:49:21.055: INFO: File wheezy_udp@dns-test-service-3.dns-4399.svc.cluster.local from pod dns-4399/dns-test-4a36596c-d923-43ec-9f2a-b58f4a0ac41c contains '' instead of 'bar.example.com.' 
Dec 26 13:49:21.060: INFO: File jessie_udp@dns-test-service-3.dns-4399.svc.cluster.local from pod dns-4399/dns-test-4a36596c-d923-43ec-9f2a-b58f4a0ac41c contains '' instead of 'bar.example.com.' Dec 26 13:49:21.060: INFO: Lookups using dns-4399/dns-test-4a36596c-d923-43ec-9f2a-b58f4a0ac41c failed for: [wheezy_udp@dns-test-service-3.dns-4399.svc.cluster.local jessie_udp@dns-test-service-3.dns-4399.svc.cluster.local] Dec 26 13:49:26.073: INFO: File wheezy_udp@dns-test-service-3.dns-4399.svc.cluster.local from pod dns-4399/dns-test-4a36596c-d923-43ec-9f2a-b58f4a0ac41c contains 'foo.example.com. ' instead of 'bar.example.com.' Dec 26 13:49:26.078: INFO: File jessie_udp@dns-test-service-3.dns-4399.svc.cluster.local from pod dns-4399/dns-test-4a36596c-d923-43ec-9f2a-b58f4a0ac41c contains 'foo.example.com. ' instead of 'bar.example.com.' Dec 26 13:49:26.078: INFO: Lookups using dns-4399/dns-test-4a36596c-d923-43ec-9f2a-b58f4a0ac41c failed for: [wheezy_udp@dns-test-service-3.dns-4399.svc.cluster.local jessie_udp@dns-test-service-3.dns-4399.svc.cluster.local] Dec 26 13:49:31.096: INFO: File wheezy_udp@dns-test-service-3.dns-4399.svc.cluster.local from pod dns-4399/dns-test-4a36596c-d923-43ec-9f2a-b58f4a0ac41c contains 'foo.example.com. ' instead of 'bar.example.com.' Dec 26 13:49:31.104: INFO: File jessie_udp@dns-test-service-3.dns-4399.svc.cluster.local from pod dns-4399/dns-test-4a36596c-d923-43ec-9f2a-b58f4a0ac41c contains 'foo.example.com. ' instead of 'bar.example.com.' Dec 26 13:49:31.104: INFO: Lookups using dns-4399/dns-test-4a36596c-d923-43ec-9f2a-b58f4a0ac41c failed for: [wheezy_udp@dns-test-service-3.dns-4399.svc.cluster.local jessie_udp@dns-test-service-3.dns-4399.svc.cluster.local] Dec 26 13:49:36.086: INFO: DNS probes using dns-test-4a36596c-d923-43ec-9f2a-b58f4a0ac41c succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4399.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4399.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4399.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4399.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 26 13:49:50.627: INFO: File wheezy_udp@dns-test-service-3.dns-4399.svc.cluster.local from pod dns-4399/dns-test-5552ada9-acc8-45d6-a089-7a6f5a73b907 contains '' instead of '10.97.7.234' Dec 26 13:49:50.633: INFO: File jessie_udp@dns-test-service-3.dns-4399.svc.cluster.local from pod dns-4399/dns-test-5552ada9-acc8-45d6-a089-7a6f5a73b907 contains '' instead of '10.97.7.234' Dec 26 13:49:50.633: INFO: Lookups using dns-4399/dns-test-5552ada9-acc8-45d6-a089-7a6f5a73b907 failed for: [wheezy_udp@dns-test-service-3.dns-4399.svc.cluster.local jessie_udp@dns-test-service-3.dns-4399.svc.cluster.local] Dec 26 13:49:55.656: INFO: DNS probes using dns-test-5552ada9-acc8-45d6-a089-7a6f5a73b907 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:49:55.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4399" for this suite. 
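The DNS spec above flips one Service between CNAME targets and finally to a ClusterIP; the object at its core is an ExternalName Service (same name and targets as in the log):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
spec:
  type: ExternalName
  externalName: foo.example.com   # later changed to bar.example.com, then to ClusterIP
EOF
# An in-cluster resolver then returns the CNAME:
#   dig +short dns-test-service-3.<namespace>.svc.cluster.local CNAME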
Dec 26 13:50:03.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:50:04.050: INFO: namespace dns-4399 deletion completed in 8.251419115s • [SLOW TEST:76.555 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:50:04.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-5050 STEP: creating a selector STEP: Creating the service pods in kubernetes Dec 26 13:50:04.133: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Dec 26 13:50:44.914: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5050 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 26 13:50:44.915: INFO: >>> kubeConfig: /root/.kube/config Dec 26 13:50:45.441: INFO: Found all expected endpoints: [netserver-0] Dec 26 13:50:45.450: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5050 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 26 13:50:45.450: INFO: >>> kubeConfig: /root/.kube/config Dec 26 13:50:45.728: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:50:45.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5050" for this suite. 
Dec 26 13:51:11.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:51:11.926: INFO: namespace pod-network-test-5050 deletion completed in 26.186542719s • [SLOW TEST:67.875 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:51:11.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2700.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2700.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2700.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2700.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2700.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2700.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 26 13:51:24.142: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2700/dns-test-609ca4ac-cf5d-415f-9462-8bbab19846e4: the server could not find the requested resource (get pods dns-test-609ca4ac-cf5d-415f-9462-8bbab19846e4) Dec 26 13:51:24.145: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2700/dns-test-609ca4ac-cf5d-415f-9462-8bbab19846e4: the server could not find the requested resource (get pods dns-test-609ca4ac-cf5d-415f-9462-8bbab19846e4) Dec 26 13:51:24.148: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-2700.svc.cluster.local from pod dns-2700/dns-test-609ca4ac-cf5d-415f-9462-8bbab19846e4: the server could not find the requested resource (get pods dns-test-609ca4ac-cf5d-415f-9462-8bbab19846e4) Dec 26 13:51:24.153: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-2700/dns-test-609ca4ac-cf5d-415f-9462-8bbab19846e4: the server could not find the requested resource (get pods dns-test-609ca4ac-cf5d-415f-9462-8bbab19846e4) Dec 26 13:51:24.159: INFO: Unable to read jessie_udp@PodARecord from pod dns-2700/dns-test-609ca4ac-cf5d-415f-9462-8bbab19846e4: the server could not find the requested resource (get pods dns-test-609ca4ac-cf5d-415f-9462-8bbab19846e4) Dec 26 13:51:24.164: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2700/dns-test-609ca4ac-cf5d-415f-9462-8bbab19846e4: the server could not find the requested resource (get pods dns-test-609ca4ac-cf5d-415f-9462-8bbab19846e4) Dec 26 13:51:24.164: INFO: Lookups using dns-2700/dns-test-609ca4ac-cf5d-415f-9462-8bbab19846e4 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-2700.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord] Dec 26 13:51:29.287: INFO: DNS probes using dns-2700/dns-test-609ca4ac-cf5d-415f-9462-8bbab19846e4 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:51:29.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2700" for this suite. 
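The wheezy/jessie probe loops above boil down to inspecting the kubelet-managed hosts file. A one-shot check from a throwaway pod (illustrative name):

kubectl run hosts-probe --image=busybox --restart=Never --rm -i -- cat /etc/hosts
# The kubelet-managed file maps the pod's own IP to its hostname (and to the
# hostname.subdomain FQDN when spec.subdomain is set), which is what the spec asserts.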
Dec 26 13:51:35.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:51:35.594: INFO: namespace dns-2700 deletion completed in 6.196832578s • [SLOW TEST:23.667 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:51:35.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Dec 26 13:51:35.723: INFO: Waiting up to 5m0s for pod "var-expansion-fdd966d3-be4b-4bd7-819d-7ed697b55c15" in namespace "var-expansion-1924" to be "success or failure" Dec 26 13:51:35.736: INFO: Pod "var-expansion-fdd966d3-be4b-4bd7-819d-7ed697b55c15": Phase="Pending", Reason="", readiness=false. Elapsed: 12.273751ms Dec 26 13:51:37.742: INFO: Pod "var-expansion-fdd966d3-be4b-4bd7-819d-7ed697b55c15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018227182s Dec 26 13:51:39.753: INFO: Pod "var-expansion-fdd966d3-be4b-4bd7-819d-7ed697b55c15": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029065795s Dec 26 13:51:41.762: INFO: Pod "var-expansion-fdd966d3-be4b-4bd7-819d-7ed697b55c15": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038727047s Dec 26 13:51:43.776: INFO: Pod "var-expansion-fdd966d3-be4b-4bd7-819d-7ed697b55c15": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052202874s Dec 26 13:51:45.785: INFO: Pod "var-expansion-fdd966d3-be4b-4bd7-819d-7ed697b55c15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061386729s STEP: Saw pod success Dec 26 13:51:45.785: INFO: Pod "var-expansion-fdd966d3-be4b-4bd7-819d-7ed697b55c15" satisfied condition "success or failure" Dec 26 13:51:45.792: INFO: Trying to get logs from node iruya-node pod var-expansion-fdd966d3-be4b-4bd7-819d-7ed697b55c15 container dapi-container: STEP: delete the pod Dec 26 13:51:45.879: INFO: Waiting for pod var-expansion-fdd966d3-be4b-4bd7-819d-7ed697b55c15 to disappear Dec 26 13:51:45.885: INFO: Pod var-expansion-fdd966d3-be4b-4bd7-819d-7ed697b55c15 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:51:45.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1924" for this suite. 
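Env composition in the spec above uses the $(VAR) expansion the kubelet performs on env values. A minimal sketch with illustrative names and separator:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $FOOBAR"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"   # expanded by the kubelet, not by the container's shell
EOF
# kubectl logs var-expansion-demo prints: foo-value;;bar-value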
Dec 26 13:51:52.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:51:52.180: INFO: namespace var-expansion-1924 deletion completed in 6.283840667s • [SLOW TEST:16.586 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:51:52.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Dec 26 13:52:01.534: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:52:01.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7028" for this suite. 
Dec 26 13:52:07.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 13:52:07.844: INFO: namespace container-runtime-7028 deletion completed in 6.206086459s
• [SLOW TEST:15.663 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------ SSSSSSSSS ------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 13:52:07.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Dec 26 13:52:07.954: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
Dec 26 13:52:07.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6649'
Dec 26 13:52:10.152: INFO: stderr: ""
Dec 26 13:52:10.152: INFO: stdout: "service/redis-slave created\n"
Dec 26 13:52:10.153: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Dec 26 13:52:10.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6649'
Dec 26 13:52:10.602: INFO: stderr: ""
Dec 26 13:52:10.602: INFO: stdout: "service/redis-master created\n"
Dec 26 13:52:10.603: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Dec 26 13:52:10.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6649'
Dec 26 13:52:11.040: INFO: stderr: ""
Dec 26 13:52:11.040: INFO: stdout: "service/frontend created\n"
Dec 26 13:52:11.041: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
Dec 26 13:52:11.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6649'
Dec 26 13:52:11.455: INFO: stderr: ""
Dec 26 13:52:11.455: INFO: stdout: "deployment.apps/frontend created\n"
Dec 26 13:52:11.455: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Dec 26 13:52:11.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6649'
Dec 26 13:52:11.893: INFO: stderr: ""
Dec 26 13:52:11.894: INFO: stdout: "deployment.apps/redis-master created\n"
Dec 26 13:52:11.895: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
Dec 26 13:52:11.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6649'
Dec 26 13:52:14.046: INFO: stderr: ""
Dec 26 13:52:14.046: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Dec 26 13:52:14.046: INFO: Waiting for all frontend pods to be Running.
Dec 26 13:52:34.098: INFO: Waiting for frontend to serve content.
Dec 26 13:52:37.246: INFO: Trying to add a new entry to the guestbook.
Dec 26 13:52:37.307: INFO: Verifying that added entry can be retrieved.
Dec 26 13:52:37.337: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
STEP: using delete to clean up resources
Dec 26 13:52:42.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6649'
Dec 26 13:52:42.737: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 26 13:52:42.737: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 26 13:52:42.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6649'
Dec 26 13:52:43.021: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 26 13:52:43.021: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 26 13:52:43.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6649'
Dec 26 13:52:43.308: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 26 13:52:43.308: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 26 13:52:43.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6649'
Dec 26 13:52:43.488: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 26 13:52:43.488: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 26 13:52:43.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6649'
Dec 26 13:52:43.610: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 26 13:52:43.610: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 26 13:52:43.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6649'
Dec 26 13:52:43.761: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 26 13:52:43.761: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 13:52:43.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6649" for this suite.
Dec 26 13:53:28.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:53:28.183: INFO: namespace kubectl-6649 deletion completed in 44.300390617s • [SLOW TEST:80.339 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:53:28.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Dec 26 13:53:36.380: INFO: Pod pod-hostip-7d00dbcd-3b32-4899-a360-eab2d24c6de5 has hostIP: 10.96.3.65 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:53:36.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9669" for this suite. 
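The host-IP spec reduces to status.hostIP being populated once the pod is scheduled; it can be read directly or injected via the downward API (pod name illustrative):

kubectl run hostip-demo --image=busybox --restart=Never -- sleep 3600
# Once the pod is scheduled:
kubectl get pod hostip-demo -o jsonpath='{.status.hostIP}'
# The same value is available in-container via the downward API:
#   env:
#   - name: HOST_IP
#     valueFrom:
#       fieldRef:
#         fieldPath: status.hostIP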
Dec 26 13:53:58.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:53:58.533: INFO: namespace pods-9669 deletion completed in 22.143150492s • [SLOW TEST:30.349 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:53:58.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Dec 26 13:53:59.470: INFO: Pod name wrapped-volume-race-6bd95bb9-be5d-496a-8d7b-82d3f9c12f5e: Found 0 pods out of 5 Dec 26 13:54:04.500: INFO: Pod name wrapped-volume-race-6bd95bb9-be5d-496a-8d7b-82d3f9c12f5e: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-6bd95bb9-be5d-496a-8d7b-82d3f9c12f5e in namespace emptydir-wrapper-4833, will wait for the garbage collector to delete the pods Dec 26 13:54:30.628: INFO: Deleting ReplicationController wrapped-volume-race-6bd95bb9-be5d-496a-8d7b-82d3f9c12f5e took: 25.150471ms Dec 26 13:54:31.029: INFO: Terminating ReplicationController wrapped-volume-race-6bd95bb9-be5d-496a-8d7b-82d3f9c12f5e pods took: 400.641674ms STEP: Creating RC which spawns configmap-volume pods Dec 26 13:55:13.682: INFO: Pod name wrapped-volume-race-90a92ca3-a9e1-458c-ad46-1170221aa765: Found 0 pods out of 5 Dec 26 13:55:18.705: INFO: Pod name wrapped-volume-race-90a92ca3-a9e1-458c-ad46-1170221aa765: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-90a92ca3-a9e1-458c-ad46-1170221aa765 in namespace emptydir-wrapper-4833, will wait for the garbage collector to delete the pods Dec 26 13:55:50.875: INFO: Deleting ReplicationController wrapped-volume-race-90a92ca3-a9e1-458c-ad46-1170221aa765 took: 35.513504ms Dec 26 13:55:51.176: INFO: Terminating ReplicationController wrapped-volume-race-90a92ca3-a9e1-458c-ad46-1170221aa765 pods took: 300.990778ms STEP: Creating RC which spawns configmap-volume pods Dec 26 13:56:36.924: INFO: Pod name wrapped-volume-race-5e16e143-7b58-49ed-b6d9-24e48ffb11cf: Found 0 pods out of 5 Dec 26 13:56:41.940: INFO: Pod name wrapped-volume-race-5e16e143-7b58-49ed-b6d9-24e48ffb11cf: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-5e16e143-7b58-49ed-b6d9-24e48ffb11cf in namespace emptydir-wrapper-4833, will wait for the garbage collector to delete the pods Dec 26 13:57:12.064: INFO: Deleting ReplicationController wrapped-volume-race-5e16e143-7b58-49ed-b6d9-24e48ffb11cf 
took: 17.055928ms Dec 26 13:57:12.665: INFO: Terminating ReplicationController wrapped-volume-race-5e16e143-7b58-49ed-b6d9-24e48ffb11cf pods took: 601.009767ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:57:57.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4833" for this suite. Dec 26 13:58:07.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:58:07.712: INFO: namespace emptydir-wrapper-4833 deletion completed in 10.144488601s • [SLOW TEST:249.178 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:58:07.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 26 13:58:07.895: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be13dcce-4eee-4d4f-aa46-0afd3c2b0857" in namespace "downward-api-8203" to be "success or failure" Dec 26 13:58:07.989: INFO: Pod "downwardapi-volume-be13dcce-4eee-4d4f-aa46-0afd3c2b0857": Phase="Pending", Reason="", readiness=false. Elapsed: 94.103944ms Dec 26 13:58:10.003: INFO: Pod "downwardapi-volume-be13dcce-4eee-4d4f-aa46-0afd3c2b0857": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107576704s Dec 26 13:58:12.015: INFO: Pod "downwardapi-volume-be13dcce-4eee-4d4f-aa46-0afd3c2b0857": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12002503s Dec 26 13:58:14.031: INFO: Pod "downwardapi-volume-be13dcce-4eee-4d4f-aa46-0afd3c2b0857": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136156218s Dec 26 13:58:16.042: INFO: Pod "downwardapi-volume-be13dcce-4eee-4d4f-aa46-0afd3c2b0857": Phase="Pending", Reason="", readiness=false. Elapsed: 8.146576332s Dec 26 13:58:18.051: INFO: Pod "downwardapi-volume-be13dcce-4eee-4d4f-aa46-0afd3c2b0857": Phase="Pending", Reason="", readiness=false. Elapsed: 10.156383131s Dec 26 13:58:20.100: INFO: Pod "downwardapi-volume-be13dcce-4eee-4d4f-aa46-0afd3c2b0857": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.205159706s STEP: Saw pod success Dec 26 13:58:20.100: INFO: Pod "downwardapi-volume-be13dcce-4eee-4d4f-aa46-0afd3c2b0857" satisfied condition "success or failure" Dec 26 13:58:20.107: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-be13dcce-4eee-4d4f-aa46-0afd3c2b0857 container client-container: STEP: delete the pod Dec 26 13:58:20.522: INFO: Waiting for pod downwardapi-volume-be13dcce-4eee-4d4f-aa46-0afd3c2b0857 to disappear Dec 26 13:58:20.535: INFO: Pod downwardapi-volume-be13dcce-4eee-4d4f-aa46-0afd3c2b0857 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:58:20.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8203" for this suite. Dec 26 13:58:26.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:58:26.697: INFO: namespace downward-api-8203 deletion completed in 6.149729202s • [SLOW TEST:18.985 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:58:26.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 26 13:58:26.833: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b8f89eb7-fdf0-4701-92ed-98715eb62241" in namespace "projected-1162" to be "success or failure" Dec 26 13:58:26.841: INFO: Pod "downwardapi-volume-b8f89eb7-fdf0-4701-92ed-98715eb62241": Phase="Pending", Reason="", readiness=false. Elapsed: 7.57636ms Dec 26 13:58:28.852: INFO: Pod "downwardapi-volume-b8f89eb7-fdf0-4701-92ed-98715eb62241": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018322629s Dec 26 13:58:30.898: INFO: Pod "downwardapi-volume-b8f89eb7-fdf0-4701-92ed-98715eb62241": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064317791s Dec 26 13:58:32.906: INFO: Pod "downwardapi-volume-b8f89eb7-fdf0-4701-92ed-98715eb62241": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072404246s Dec 26 13:58:34.920: INFO: Pod "downwardapi-volume-b8f89eb7-fdf0-4701-92ed-98715eb62241": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.087134825s STEP: Saw pod success Dec 26 13:58:34.921: INFO: Pod "downwardapi-volume-b8f89eb7-fdf0-4701-92ed-98715eb62241" satisfied condition "success or failure" Dec 26 13:58:34.941: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b8f89eb7-fdf0-4701-92ed-98715eb62241 container client-container: STEP: delete the pod Dec 26 13:58:35.147: INFO: Waiting for pod downwardapi-volume-b8f89eb7-fdf0-4701-92ed-98715eb62241 to disappear Dec 26 13:58:35.228: INFO: Pod downwardapi-volume-b8f89eb7-fdf0-4701-92ed-98715eb62241 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:58:35.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1162" for this suite. Dec 26 13:58:41.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:58:41.378: INFO: namespace projected-1162 deletion completed in 6.142114747s • [SLOW TEST:14.681 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:58:41.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 26 13:58:41.484: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a56b381e-263b-4219-ab5e-a3b7ced6565f" in namespace "downward-api-5505" to be "success or failure" Dec 26 13:58:41.489: INFO: Pod "downwardapi-volume-a56b381e-263b-4219-ab5e-a3b7ced6565f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.564174ms Dec 26 13:58:43.499: INFO: Pod "downwardapi-volume-a56b381e-263b-4219-ab5e-a3b7ced6565f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014681987s Dec 26 13:58:45.508: INFO: Pod "downwardapi-volume-a56b381e-263b-4219-ab5e-a3b7ced6565f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024215116s Dec 26 13:58:47.521: INFO: Pod "downwardapi-volume-a56b381e-263b-4219-ab5e-a3b7ced6565f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036915301s Dec 26 13:58:49.539: INFO: Pod "downwardapi-volume-a56b381e-263b-4219-ab5e-a3b7ced6565f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.054443883s Dec 26 13:58:51.551: INFO: Pod "downwardapi-volume-a56b381e-263b-4219-ab5e-a3b7ced6565f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066816966s STEP: Saw pod success Dec 26 13:58:51.551: INFO: Pod "downwardapi-volume-a56b381e-263b-4219-ab5e-a3b7ced6565f" satisfied condition "success or failure" Dec 26 13:58:51.558: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a56b381e-263b-4219-ab5e-a3b7ced6565f container client-container: STEP: delete the pod Dec 26 13:58:51.623: INFO: Waiting for pod downwardapi-volume-a56b381e-263b-4219-ab5e-a3b7ced6565f to disappear Dec 26 13:58:51.695: INFO: Pod downwardapi-volume-a56b381e-263b-4219-ab5e-a3b7ced6565f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:58:51.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5505" for this suite. Dec 26 13:58:57.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:58:57.908: INFO: namespace downward-api-5505 deletion completed in 6.206001607s • [SLOW TEST:16.530 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:58:57.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-ce3a776b-18e1-45a0-911d-ae8fe11e5aa9 STEP: Creating a pod to test consume configMaps Dec 26 13:58:58.042: INFO: Waiting up to 5m0s for pod "pod-configmaps-4c963604-66a0-40b7-9863-b9eb0557babb" in namespace "configmap-6649" to be "success or failure" Dec 26 13:58:58.056: INFO: Pod "pod-configmaps-4c963604-66a0-40b7-9863-b9eb0557babb": Phase="Pending", Reason="", readiness=false. Elapsed: 13.193411ms Dec 26 13:59:00.064: INFO: Pod "pod-configmaps-4c963604-66a0-40b7-9863-b9eb0557babb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021207117s Dec 26 13:59:02.071: INFO: Pod "pod-configmaps-4c963604-66a0-40b7-9863-b9eb0557babb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029111581s Dec 26 13:59:04.093: INFO: Pod "pod-configmaps-4c963604-66a0-40b7-9863-b9eb0557babb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050115939s Dec 26 13:59:06.100: INFO: Pod "pod-configmaps-4c963604-66a0-40b7-9863-b9eb0557babb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.057821464s STEP: Saw pod success Dec 26 13:59:06.100: INFO: Pod "pod-configmaps-4c963604-66a0-40b7-9863-b9eb0557babb" satisfied condition "success or failure" Dec 26 13:59:06.105: INFO: Trying to get logs from node iruya-node pod pod-configmaps-4c963604-66a0-40b7-9863-b9eb0557babb container configmap-volume-test: STEP: delete the pod Dec 26 13:59:06.369: INFO: Waiting for pod pod-configmaps-4c963604-66a0-40b7-9863-b9eb0557babb to disappear Dec 26 13:59:06.402: INFO: Pod pod-configmaps-4c963604-66a0-40b7-9863-b9eb0557babb no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:59:06.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6649" for this suite. Dec 26 13:59:12.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:59:12.677: INFO: namespace configmap-6649 deletion completed in 6.263075112s • [SLOW TEST:14.767 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:59:12.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-g85j STEP: Creating a pod to test atomic-volume-subpath Dec 26 13:59:12.906: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-g85j" in namespace "subpath-4115" to be "success or failure" Dec 26 13:59:12.994: INFO: Pod "pod-subpath-test-configmap-g85j": Phase="Pending", Reason="", readiness=false. Elapsed: 88.743215ms Dec 26 13:59:15.024: INFO: Pod "pod-subpath-test-configmap-g85j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118739506s Dec 26 13:59:17.066: INFO: Pod "pod-subpath-test-configmap-g85j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160667358s Dec 26 13:59:19.078: INFO: Pod "pod-subpath-test-configmap-g85j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.172125552s Dec 26 13:59:21.090: INFO: Pod "pod-subpath-test-configmap-g85j": Phase="Running", Reason="", readiness=true. Elapsed: 8.184706723s Dec 26 13:59:23.102: INFO: Pod "pod-subpath-test-configmap-g85j": Phase="Running", Reason="", readiness=true. Elapsed: 10.196650761s Dec 26 13:59:25.110: INFO: Pod "pod-subpath-test-configmap-g85j": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.204615601s Dec 26 13:59:27.117: INFO: Pod "pod-subpath-test-configmap-g85j": Phase="Running", Reason="", readiness=true. Elapsed: 14.210852175s Dec 26 13:59:29.128: INFO: Pod "pod-subpath-test-configmap-g85j": Phase="Running", Reason="", readiness=true. Elapsed: 16.22184881s Dec 26 13:59:31.148: INFO: Pod "pod-subpath-test-configmap-g85j": Phase="Running", Reason="", readiness=true. Elapsed: 18.24232715s Dec 26 13:59:33.155: INFO: Pod "pod-subpath-test-configmap-g85j": Phase="Running", Reason="", readiness=true. Elapsed: 20.249227466s Dec 26 13:59:35.192: INFO: Pod "pod-subpath-test-configmap-g85j": Phase="Running", Reason="", readiness=true. Elapsed: 22.285873625s Dec 26 13:59:37.210: INFO: Pod "pod-subpath-test-configmap-g85j": Phase="Running", Reason="", readiness=true. Elapsed: 24.304523891s Dec 26 13:59:39.223: INFO: Pod "pod-subpath-test-configmap-g85j": Phase="Running", Reason="", readiness=true. Elapsed: 26.317009628s Dec 26 13:59:41.230: INFO: Pod "pod-subpath-test-configmap-g85j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.323968355s STEP: Saw pod success Dec 26 13:59:41.230: INFO: Pod "pod-subpath-test-configmap-g85j" satisfied condition "success or failure" Dec 26 13:59:41.234: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-g85j container test-container-subpath-configmap-g85j: STEP: delete the pod Dec 26 13:59:41.289: INFO: Waiting for pod pod-subpath-test-configmap-g85j to disappear Dec 26 13:59:41.450: INFO: Pod pod-subpath-test-configmap-g85j no longer exists STEP: Deleting pod pod-subpath-test-configmap-g85j Dec 26 13:59:41.450: INFO: Deleting pod "pod-subpath-test-configmap-g85j" in namespace "subpath-4115" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:59:41.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4115" for this suite. 
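The atomic-writer subpath test above mounts a single configMap key into the container through volumeMounts[].subPath. A minimal sketch of that volume layout, with all names (demo-cm, subpath-demo) being illustrative assumptions rather than the manifest the framework generated:

kubectl create configmap demo-cm --from-literal=greeting=hello
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/data/greeting"]
    volumeMounts:
    - name: cm
      mountPath: /data/greeting   # only this one file appears in the container
      subPath: greeting           # key within the configMap volume
  volumes:
  - name: cm
    configMap:
      name: demo-cm
EOF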
Dec 26 13:59:47.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 13:59:47.962: INFO: namespace subpath-4115 deletion completed in 6.484510552s • [SLOW TEST:35.285 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 13:59:47.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 26 13:59:48.023: INFO: Waiting up to 5m0s for pod "downwardapi-volume-816db9c3-ddae-4667-96d5-043e02aca7bf" in namespace "downward-api-989" to be "success or failure" Dec 26 13:59:48.036: INFO: Pod "downwardapi-volume-816db9c3-ddae-4667-96d5-043e02aca7bf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.426447ms Dec 26 13:59:50.041: INFO: Pod "downwardapi-volume-816db9c3-ddae-4667-96d5-043e02aca7bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017969888s Dec 26 13:59:52.048: INFO: Pod "downwardapi-volume-816db9c3-ddae-4667-96d5-043e02aca7bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024978585s Dec 26 13:59:54.078: INFO: Pod "downwardapi-volume-816db9c3-ddae-4667-96d5-043e02aca7bf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054432951s Dec 26 13:59:56.090: INFO: Pod "downwardapi-volume-816db9c3-ddae-4667-96d5-043e02aca7bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066860635s STEP: Saw pod success Dec 26 13:59:56.090: INFO: Pod "downwardapi-volume-816db9c3-ddae-4667-96d5-043e02aca7bf" satisfied condition "success or failure" Dec 26 13:59:56.132: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-816db9c3-ddae-4667-96d5-043e02aca7bf container client-container: STEP: delete the pod Dec 26 13:59:56.197: INFO: Waiting for pod downwardapi-volume-816db9c3-ddae-4667-96d5-043e02aca7bf to disappear Dec 26 13:59:56.208: INFO: Pod downwardapi-volume-816db9c3-ddae-4667-96d5-043e02aca7bf no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 13:59:56.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-989" for this suite. 
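The podname-only variant above exposes metadata.name through a downwardAPI volume and simply reads the file back. A sketch under the same assumptions (illustrative names):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs podname-demo   # prints "podname-demo" once the container has run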
Dec 26 14:00:02.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:00:02.316: INFO: namespace downward-api-989 deletion completed in 6.102870208s • [SLOW TEST:14.353 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:00:02.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Dec 26 14:00:20.583: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 26 14:00:20.602: INFO: Pod pod-with-prestop-exec-hook still exists Dec 26 14:00:22.603: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 26 14:00:22.635: INFO: Pod pod-with-prestop-exec-hook still exists Dec 26 14:00:24.603: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 26 14:00:24.616: INFO: Pod pod-with-prestop-exec-hook still exists Dec 26 14:00:26.603: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 26 14:00:26.614: INFO: Pod pod-with-prestop-exec-hook still exists Dec 26 14:00:28.603: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 26 14:00:28.612: INFO: Pod pod-with-prestop-exec-hook still exists Dec 26 14:00:30.603: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 26 14:00:30.625: INFO: Pod pod-with-prestop-exec-hook still exists Dec 26 14:00:32.603: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 26 14:00:32.643: INFO: Pod pod-with-prestop-exec-hook still exists Dec 26 14:00:34.603: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 26 14:00:34.611: INFO: Pod pod-with-prestop-exec-hook still exists Dec 26 14:00:36.603: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 26 14:00:36.615: INFO: Pod pod-with-prestop-exec-hook still exists Dec 26 14:00:38.603: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 26 14:00:38.627: INFO: Pod pod-with-prestop-exec-hook still exists Dec 26 14:00:40.603: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 26 14:00:40.613: INFO: Pod pod-with-prestop-exec-hook still exists Dec 26 14:00:42.603: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 26 14:00:42.622: INFO: Pod pod-with-prestop-exec-hook no longer 
exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:00:42.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6390" for this suite. Dec 26 14:01:04.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:01:04.893: INFO: namespace container-lifecycle-hook-6390 deletion completed in 22.203196284s • [SLOW TEST:62.576 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:01:04.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 26 14:01:04.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3946' Dec 26 14:01:05.208: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 26 14:01:05.209: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Dec 26 14:01:05.299: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Dec 26 14:01:05.302: INFO: scanned /root for discovery docs: Dec 26 14:01:05.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3946' Dec 26 14:01:27.556: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Dec 26 14:01:27.557: INFO: stdout: "Created e2e-test-nginx-rc-95c800c3df8eb85cd46ba53923ba8532\nScaling up e2e-test-nginx-rc-95c800c3df8eb85cd46ba53923ba8532 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-95c800c3df8eb85cd46ba53923ba8532 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-95c800c3df8eb85cd46ba53923ba8532 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Dec 26 14:01:27.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3946' Dec 26 14:01:27.747: INFO: stderr: "" Dec 26 14:01:27.747: INFO: stdout: "e2e-test-nginx-rc-95c800c3df8eb85cd46ba53923ba8532-wg8tr " Dec 26 14:01:27.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-95c800c3df8eb85cd46ba53923ba8532-wg8tr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3946' Dec 26 14:01:27.845: INFO: stderr: "" Dec 26 14:01:27.845: INFO: stdout: "true" Dec 26 14:01:27.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-95c800c3df8eb85cd46ba53923ba8532-wg8tr -o template --template={{if (exists .
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3946' Dec 26 14:01:27.949: INFO: stderr: "" Dec 26 14:01:27.949: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Dec 26 14:01:27.949: INFO: e2e-test-nginx-rc-95c800c3df8eb85cd46ba53923ba8532-wg8tr is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Dec 26 14:01:27.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3946' Dec 26 14:01:28.082: INFO: stderr: "" Dec 26 14:01:28.082: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:01:28.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3946" for this suite. Dec 26 14:01:34.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:01:34.236: INFO: namespace kubectl-3946 deletion completed in 6.128441998s • [SLOW TEST:29.342 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:01:34.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:01:40.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4730" for this suite. Dec 26 14:01:46.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:01:47.111: INFO: namespace namespaces-4730 deletion completed in 6.215592313s STEP: Destroying namespace "nsdeletetest-1816" for this suite. Dec 26 14:01:47.116: INFO: Namespace nsdeletetest-1816 was already deleted STEP: Destroying namespace "nsdeletetest-9330" for this suite. 
Dec 26 14:01:53.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:01:53.338: INFO: namespace nsdeletetest-9330 deletion completed in 6.221669441s • [SLOW TEST:19.102 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:01:53.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Dec 26 14:01:53.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2236' Dec 26 14:01:53.701: INFO: stderr: "" Dec 26 14:01:53.701: INFO: stdout: "pod/pause created\n" Dec 26 14:01:53.702: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Dec 26 14:01:53.702: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2236" to be "running and ready" Dec 26 14:01:53.707: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.870155ms Dec 26 14:01:55.717: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015590438s Dec 26 14:01:57.734: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031999148s Dec 26 14:01:59.743: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041221223s Dec 26 14:02:01.750: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.048371545s Dec 26 14:02:01.750: INFO: Pod "pause" satisfied condition "running and ready" Dec 26 14:02:01.750: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Dec 26 14:02:01.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2236' Dec 26 14:02:01.957: INFO: stderr: "" Dec 26 14:02:01.957: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Dec 26 14:02:01.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2236' Dec 26 14:02:02.110: INFO: stderr: "" Dec 26 14:02:02.110: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s testing-label-value\n" STEP: removing the label testing-label of a pod Dec 26 14:02:02.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2236' Dec 26 14:02:02.233: INFO: stderr: "" Dec 26 14:02:02.233: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Dec 26 14:02:02.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2236' Dec 26 14:02:02.339: INFO: stderr: "" Dec 26 14:02:02.339: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Dec 26 14:02:02.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2236' Dec 26 14:02:02.491: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 26 14:02:02.491: INFO: stdout: "pod \"pause\" force deleted\n" Dec 26 14:02:02.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2236' Dec 26 14:02:02.634: INFO: stderr: "No resources found.\n" Dec 26 14:02:02.634: INFO: stdout: "" Dec 26 14:02:02.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2236 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 26 14:02:02.800: INFO: stderr: "" Dec 26 14:02:02.801: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:02:02.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2236" for this suite. 
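The label round trip above condenses to three kubectl invocations; changing the value of an existing label additionally requires --overwrite. A sketch against an arbitrary running pod named pause (illustrative):

kubectl label pod pause testing-label=testing-label-value    # add
kubectl label pod pause testing-label=new-value --overwrite  # change an existing label
kubectl get pod pause -o jsonpath='{.metadata.labels.testing-label}{"\n"}'
kubectl label pod pause testing-label-                       # trailing dash removes it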
Dec 26 14:02:08.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:02:09.005: INFO: namespace kubectl-2236 deletion completed in 6.195032263s • [SLOW TEST:15.666 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:02:09.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-54796fe1-ca8b-4af3-b366-e919d0676612 STEP: Creating a pod to test consume secrets Dec 26 14:02:09.209: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-63fc881d-a1d9-4ce7-aa2a-052bc963e440" in namespace "projected-2481" to be "success or failure" Dec 26 14:02:09.225: INFO: Pod "pod-projected-secrets-63fc881d-a1d9-4ce7-aa2a-052bc963e440": Phase="Pending", Reason="", readiness=false. Elapsed: 16.094776ms Dec 26 14:02:11.230: INFO: Pod "pod-projected-secrets-63fc881d-a1d9-4ce7-aa2a-052bc963e440": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021155656s Dec 26 14:02:13.324: INFO: Pod "pod-projected-secrets-63fc881d-a1d9-4ce7-aa2a-052bc963e440": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114806084s Dec 26 14:02:15.333: INFO: Pod "pod-projected-secrets-63fc881d-a1d9-4ce7-aa2a-052bc963e440": Phase="Running", Reason="", readiness=true. Elapsed: 6.123725724s Dec 26 14:02:17.377: INFO: Pod "pod-projected-secrets-63fc881d-a1d9-4ce7-aa2a-052bc963e440": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.167382749s STEP: Saw pod success Dec 26 14:02:17.377: INFO: Pod "pod-projected-secrets-63fc881d-a1d9-4ce7-aa2a-052bc963e440" satisfied condition "success or failure" Dec 26 14:02:17.389: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-63fc881d-a1d9-4ce7-aa2a-052bc963e440 container projected-secret-volume-test: STEP: delete the pod Dec 26 14:02:17.836: INFO: Waiting for pod pod-projected-secrets-63fc881d-a1d9-4ce7-aa2a-052bc963e440 to disappear Dec 26 14:02:17.855: INFO: Pod pod-projected-secrets-63fc881d-a1d9-4ce7-aa2a-052bc963e440 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:02:17.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2481" for this suite. 
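The projected-secret variant above differs from a plain secret volume only in nesting the source under projected.sources and setting defaultMode at the projected level. A sketch, all names illustrative:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: proj
      mountPath: /etc/projected
  volumes:
  - name: proj
    projected:
      defaultMode: 0400   # files created read-only for the owner
      sources:
      - secret:
          name: demo-secret
EOF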
Dec 26 14:02:23.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:02:24.008: INFO: namespace projected-2481 deletion completed in 6.142983609s • [SLOW TEST:15.003 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:02:24.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W1226 14:02:39.347926 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Dec 26 14:02:39.348: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:02:39.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2595" for this suite.
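The garbage-collector behavior asserted above is driven by metadata.ownerReferences: a dependent that lists two owners survives deletion of one owner as long as the other remains valid. The bookkeeping can be inspected directly; the delete below uses current kubectl syntax for foreground cascading (the v1.15 client used in this run only offered the boolean --cascade flag), and the RC name is the one from the test log:

# Which owners does each pod record?
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.ownerReferences[*].name}{"\n"}{end}'

# Foreground deletion of one owner; dependents that still have another
# valid owner are kept rather than garbage-collected.
kubectl delete rc simpletest-rc-to-be-deleted --cascade=foreground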
Dec 26 14:02:48.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:02:49.739: INFO: namespace gc-2595 deletion completed in 10.374721761s • [SLOW TEST:25.730 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:02:49.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Dec 26 14:02:50.076: INFO: Waiting up to 5m0s for pod "pod-ab5213e4-e870-4e23-8915-a4b6c09e5795" in namespace "emptydir-1688" to be "success or failure" Dec 26 14:02:50.114: INFO: Pod "pod-ab5213e4-e870-4e23-8915-a4b6c09e5795": Phase="Pending", Reason="", readiness=false. Elapsed: 38.427232ms Dec 26 14:02:52.122: INFO: Pod "pod-ab5213e4-e870-4e23-8915-a4b6c09e5795": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04656145s Dec 26 14:02:54.130: INFO: Pod "pod-ab5213e4-e870-4e23-8915-a4b6c09e5795": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054799769s Dec 26 14:02:56.152: INFO: Pod "pod-ab5213e4-e870-4e23-8915-a4b6c09e5795": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076143602s Dec 26 14:02:58.160: INFO: Pod "pod-ab5213e4-e870-4e23-8915-a4b6c09e5795": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084668305s Dec 26 14:03:00.185: INFO: Pod "pod-ab5213e4-e870-4e23-8915-a4b6c09e5795": Phase="Pending", Reason="", readiness=false. Elapsed: 10.109671526s Dec 26 14:03:02.196: INFO: Pod "pod-ab5213e4-e870-4e23-8915-a4b6c09e5795": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.119982975s STEP: Saw pod success Dec 26 14:03:02.196: INFO: Pod "pod-ab5213e4-e870-4e23-8915-a4b6c09e5795" satisfied condition "success or failure" Dec 26 14:03:02.199: INFO: Trying to get logs from node iruya-node pod pod-ab5213e4-e870-4e23-8915-a4b6c09e5795 container test-container: STEP: delete the pod Dec 26 14:03:02.262: INFO: Waiting for pod pod-ab5213e4-e870-4e23-8915-a4b6c09e5795 to disappear Dec 26 14:03:02.267: INFO: Pod pod-ab5213e4-e870-4e23-8915-a4b6c09e5795 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:03:02.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1688" for this suite. 
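The (root,0666,tmpfs) triple in the test name above decodes to: run as root, create a file with mode 0666, and back the emptyDir with memory rather than node disk. A sketch with illustrative names (busybox runs as root by default):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /cache/f && chmod 0666 /cache/f && ls -l /cache/f && mount | grep /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory   # tmpfs-backed emptyDir
EOF
kubectl logs emptydir-demo   # once completed: -rw-rw-rw- plus a tmpfs mount entry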
Dec 26 14:03:08.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:03:08.480: INFO: namespace emptydir-1688 deletion completed in 6.207089699s • [SLOW TEST:18.741 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:03:08.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-6bb1a15f-ca62-4033-a4b2-d4cbdf172e8e STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:03:20.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4824" for this suite. 
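ConfigMaps carry non-UTF-8 payloads in the binaryData field (base64-encoded) alongside plain data keys, which is what the binary-data test above round-trips through a volume. kubectl sorts file content into the right field automatically; the names below are illustrative:

printf '\xff\xfe\xfd' > blob.bin           # deliberately invalid UTF-8
kubectl create configmap demo-binary --from-file=blob=blob.bin --from-literal=text=hello
kubectl get configmap demo-binary -o yaml  # 'blob' appears under binaryData, 'text' under data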
Dec 26 14:03:42.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:03:42.985: INFO: namespace configmap-4824 deletion completed in 22.166812026s • [SLOW TEST:34.503 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:03:42.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:04:31.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-288" for this suite. 
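The expected-'State' and 'RestartCount' checks above reduce to reading status.containerStatuses after the container exits: with restartPolicy: Never, exit code 0 is reported as reason Completed and a non-zero code as Error. A sketch with an illustrative pod name:

kubectl run exit-demo --image=busybox --restart=Never -- sh -c 'exit 1'

# Once the container has terminated:
kubectl get pod exit-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}{"\n"}'    # Error
kubectl get pod exit-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.exitCode}{"\n"}'  # 1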
Dec 26 14:04:37.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:04:37.746: INFO: namespace container-runtime-288 deletion completed in 6.183127775s • [SLOW TEST:54.761 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:04:37.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-c99e0c5b-08ab-433d-813d-c12a9ada0d1e STEP: Creating a pod to test consume configMaps Dec 26 14:04:37.944: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a6574679-101d-4b4a-9049-db701db0d4c0" in namespace "projected-9276" to be "success or failure" Dec 26 14:04:37.955: INFO: Pod "pod-projected-configmaps-a6574679-101d-4b4a-9049-db701db0d4c0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.025115ms Dec 26 14:04:39.967: INFO: Pod "pod-projected-configmaps-a6574679-101d-4b4a-9049-db701db0d4c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022154392s Dec 26 14:04:41.980: INFO: Pod "pod-projected-configmaps-a6574679-101d-4b4a-9049-db701db0d4c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034990927s Dec 26 14:04:43.992: INFO: Pod "pod-projected-configmaps-a6574679-101d-4b4a-9049-db701db0d4c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047111388s Dec 26 14:04:45.998: INFO: Pod "pod-projected-configmaps-a6574679-101d-4b4a-9049-db701db0d4c0": Phase="Running", Reason="", readiness=true. Elapsed: 8.053167266s Dec 26 14:04:48.006: INFO: Pod "pod-projected-configmaps-a6574679-101d-4b4a-9049-db701db0d4c0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.061295983s STEP: Saw pod success Dec 26 14:04:48.006: INFO: Pod "pod-projected-configmaps-a6574679-101d-4b4a-9049-db701db0d4c0" satisfied condition "success or failure" Dec 26 14:04:48.009: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-a6574679-101d-4b4a-9049-db701db0d4c0 container projected-configmap-volume-test: STEP: delete the pod Dec 26 14:04:48.121: INFO: Waiting for pod pod-projected-configmaps-a6574679-101d-4b4a-9049-db701db0d4c0 to disappear Dec 26 14:04:48.191: INFO: Pod pod-projected-configmaps-a6574679-101d-4b4a-9049-db701db0d4c0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:04:48.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9276" for this suite. Dec 26 14:04:54.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:04:54.352: INFO: namespace projected-9276 deletion completed in 6.147929236s • [SLOW TEST:16.605 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:04:54.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 26 14:04:54.530: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e8d9db00-3159-48e1-b623-3d48d52b4c0a" in namespace "projected-8786" to be "success or failure" Dec 26 14:04:54.540: INFO: Pod "downwardapi-volume-e8d9db00-3159-48e1-b623-3d48d52b4c0a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.664794ms Dec 26 14:04:56.599: INFO: Pod "downwardapi-volume-e8d9db00-3159-48e1-b623-3d48d52b4c0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068902112s Dec 26 14:04:58.612: INFO: Pod "downwardapi-volume-e8d9db00-3159-48e1-b623-3d48d52b4c0a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081534503s Dec 26 14:05:00.630: INFO: Pod "downwardapi-volume-e8d9db00-3159-48e1-b623-3d48d52b4c0a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099471503s Dec 26 14:05:02.648: INFO: Pod "downwardapi-volume-e8d9db00-3159-48e1-b623-3d48d52b4c0a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.117542491s STEP: Saw pod success Dec 26 14:05:02.648: INFO: Pod "downwardapi-volume-e8d9db00-3159-48e1-b623-3d48d52b4c0a" satisfied condition "success or failure" Dec 26 14:05:02.654: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e8d9db00-3159-48e1-b623-3d48d52b4c0a container client-container: STEP: delete the pod Dec 26 14:05:02.754: INFO: Waiting for pod downwardapi-volume-e8d9db00-3159-48e1-b623-3d48d52b4c0a to disappear Dec 26 14:05:02.765: INFO: Pod downwardapi-volume-e8d9db00-3159-48e1-b623-3d48d52b4c0a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:05:02.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8786" for this suite. Dec 26 14:05:08.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:05:08.919: INFO: namespace projected-8786 deletion completed in 6.139664928s • [SLOW TEST:14.567 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have a terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:05:08.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have a terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:05:17.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5353" for this suite.
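The kubelet test above schedules a busybox command that always fails and then asserts the container reports a terminated reason. A sketch of such a pod, built with k8s.io/api types and printed as a manifest; the pod name, image tag and command are assumptions, the real spec lives in test/e2e/common/kubelet.go:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false-pod"}, // illustrative name
		Spec: corev1.PodSpec{
			// Never restart, so the terminal state (and its Reason) sticks.
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/false"}, // always exits non-zero
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
	// Once the container exits, status.containerStatuses[0].state.terminated
	// is expected to carry reason "Error" and a non-zero exit code.
}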
Dec 26 14:05:23.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:05:23.249: INFO: namespace kubelet-test-5353 deletion completed in 6.132884283s • [SLOW TEST:14.330 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have a terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:05:23.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-129, will wait for the garbage collector to delete the pods Dec 26 14:05:35.482: INFO: Deleting Job.batch foo took: 10.699092ms Dec 26 14:05:35.783: INFO: Terminating Job.batch foo pods took: 300.711335ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:06:16.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-129" for this suite.
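The "will wait for the garbage collector to delete the pods" step corresponds to deleting the Job with a propagation policy rather than orphaning its pods. A sketch with client-go, reusing the namespace and Job name from the log; background propagation is an assumption here, the framework may use foreground:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A propagation policy makes the garbage collector remove the Job's
	// pods, which is the wait the log records before "Ensuring job was
	// deleted". Foreground would block deletion on the pods instead.
	policy := metav1.DeletePropagationBackground
	if err := cs.BatchV1().Jobs("job-129").Delete(context.TODO(), "foo",
		metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}
}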
Dec 26 14:06:22.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:06:22.848: INFO: namespace job-129 deletion completed in 6.140080338s • [SLOW TEST:59.598 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:06:22.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Dec 26 14:06:22.945: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. Dec 26 14:06:23.417: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Dec 26 14:06:25.662: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712965983, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712965983, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712965983, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712965983, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 26 14:06:27.673: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712965983, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712965983, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712965983, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712965983, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 26 14:06:29.703: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712965983, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712965983, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712965983, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712965983, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 26 14:06:31.674: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712965983, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712965983, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712965983, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712965983, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 26 14:06:33.672: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712965983, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712965983, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712965983, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712965983, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 26 14:06:39.529: INFO: Waited 3.848805614s for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:06:40.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-402" for this suite. 
Dec 26 14:06:46.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:06:46.482: INFO: namespace aggregator-402 deletion completed in 6.164677523s • [SLOW TEST:23.634 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:06:46.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-9cd9161c-c842-4718-bd24-1ba009bc04b1 STEP: Creating a pod to test consume configMaps Dec 26 14:06:46.664: INFO: Waiting up to 5m0s for pod "pod-configmaps-a8e2e82b-a08f-421e-a0c0-9c7bb895e85a" in namespace "configmap-5854" to be "success or failure" Dec 26 14:06:46.690: INFO: Pod "pod-configmaps-a8e2e82b-a08f-421e-a0c0-9c7bb895e85a": Phase="Pending", Reason="", readiness=false. Elapsed: 26.3511ms Dec 26 14:06:48.699: INFO: Pod "pod-configmaps-a8e2e82b-a08f-421e-a0c0-9c7bb895e85a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03513061s Dec 26 14:06:50.709: INFO: Pod "pod-configmaps-a8e2e82b-a08f-421e-a0c0-9c7bb895e85a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044893717s Dec 26 14:06:52.724: INFO: Pod "pod-configmaps-a8e2e82b-a08f-421e-a0c0-9c7bb895e85a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059867256s Dec 26 14:06:54.730: INFO: Pod "pod-configmaps-a8e2e82b-a08f-421e-a0c0-9c7bb895e85a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066239383s STEP: Saw pod success Dec 26 14:06:54.730: INFO: Pod "pod-configmaps-a8e2e82b-a08f-421e-a0c0-9c7bb895e85a" satisfied condition "success or failure" Dec 26 14:06:54.733: INFO: Trying to get logs from node iruya-node pod pod-configmaps-a8e2e82b-a08f-421e-a0c0-9c7bb895e85a container configmap-volume-test: STEP: delete the pod Dec 26 14:06:54.869: INFO: Waiting for pod pod-configmaps-a8e2e82b-a08f-421e-a0c0-9c7bb895e85a to disappear Dec 26 14:06:54.881: INFO: Pod pod-configmaps-a8e2e82b-a08f-421e-a0c0-9c7bb895e85a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:06:54.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5854" for this suite. 
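The "mappings and Item mode set" case projects one ConfigMap key to a chosen path with an explicit per-item file mode. A minimal sketch of the consuming pod, reusing the ConfigMap name from the log; the key, target path, mode 0400 and busybox image are assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	mode := int32(0400) // the "Item mode" under test
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "configmap-test-volume-map-9cd9161c-c842-4718-bd24-1ba009bc04b1",
						},
						// Map one key to a new path with an explicit mode.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2", Mode: &mode}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}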
Dec 26 14:07:02.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:07:02.994: INFO: namespace configmap-5854 deletion completed in 8.108246948s • [SLOW TEST:16.512 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:07:02.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 26 14:07:03.082: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a885aecd-77b0-4993-837d-e0f076208fc7" in namespace "downward-api-5693" to be "success or failure" Dec 26 14:07:03.089: INFO: Pod "downwardapi-volume-a885aecd-77b0-4993-837d-e0f076208fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.829268ms Dec 26 14:07:05.099: INFO: Pod "downwardapi-volume-a885aecd-77b0-4993-837d-e0f076208fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017102957s Dec 26 14:07:07.112: INFO: Pod "downwardapi-volume-a885aecd-77b0-4993-837d-e0f076208fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02971688s Dec 26 14:07:09.120: INFO: Pod "downwardapi-volume-a885aecd-77b0-4993-837d-e0f076208fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037187403s Dec 26 14:07:11.130: INFO: Pod "downwardapi-volume-a885aecd-77b0-4993-837d-e0f076208fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047786976s Dec 26 14:07:13.150: INFO: Pod "downwardapi-volume-a885aecd-77b0-4993-837d-e0f076208fc7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.067884693s STEP: Saw pod success Dec 26 14:07:13.150: INFO: Pod "downwardapi-volume-a885aecd-77b0-4993-837d-e0f076208fc7" satisfied condition "success or failure" Dec 26 14:07:13.159: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a885aecd-77b0-4993-837d-e0f076208fc7 container client-container: STEP: delete the pod Dec 26 14:07:13.373: INFO: Waiting for pod downwardapi-volume-a885aecd-77b0-4993-837d-e0f076208fc7 to disappear Dec 26 14:07:13.390: INFO: Pod downwardapi-volume-a885aecd-77b0-4993-837d-e0f076208fc7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:07:13.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5693" for this suite. Dec 26 14:07:19.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:07:19.651: INFO: namespace downward-api-5693 deletion completed in 6.250438021s • [SLOW TEST:16.656 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:07:19.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 26 14:07:19.782: INFO: Creating ReplicaSet my-hostname-basic-2d1c1b27-f0cf-48e7-8a3e-8ad33eedaef5 Dec 26 14:07:19.801: INFO: Pod name my-hostname-basic-2d1c1b27-f0cf-48e7-8a3e-8ad33eedaef5: Found 0 pods out of 1 Dec 26 14:07:24.810: INFO: Pod name my-hostname-basic-2d1c1b27-f0cf-48e7-8a3e-8ad33eedaef5: Found 1 pods out of 1 Dec 26 14:07:24.810: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-2d1c1b27-f0cf-48e7-8a3e-8ad33eedaef5" is running Dec 26 14:07:26.824: INFO: Pod "my-hostname-basic-2d1c1b27-f0cf-48e7-8a3e-8ad33eedaef5-v8xvp" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-26 14:07:19 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-26 14:07:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-2d1c1b27-f0cf-48e7-8a3e-8ad33eedaef5]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-26 14:07:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-2d1c1b27-f0cf-48e7-8a3e-8ad33eedaef5]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC 
LastTransitionTime:2019-12-26 14:07:19 +0000 UTC Reason: Message:}]) Dec 26 14:07:26.825: INFO: Trying to dial the pod Dec 26 14:07:31.862: INFO: Controller my-hostname-basic-2d1c1b27-f0cf-48e7-8a3e-8ad33eedaef5: Got expected result from replica 1 [my-hostname-basic-2d1c1b27-f0cf-48e7-8a3e-8ad33eedaef5-v8xvp]: "my-hostname-basic-2d1c1b27-f0cf-48e7-8a3e-8ad33eedaef5-v8xvp", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:07:31.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-416" for this suite. Dec 26 14:07:37.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:07:38.024: INFO: namespace replicaset-416 deletion completed in 6.150688849s • [SLOW TEST:18.372 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:07:38.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-9dff0a55-2595-4195-8048-78f977b1f2d5 STEP: Creating a pod to test consume secrets Dec 26 14:07:38.171: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-82e033a3-98dd-41f3-88db-6868c481c728" in namespace "projected-7365" to be "success or failure" Dec 26 14:07:38.299: INFO: Pod "pod-projected-secrets-82e033a3-98dd-41f3-88db-6868c481c728": Phase="Pending", Reason="", readiness=false. Elapsed: 128.085732ms Dec 26 14:07:40.310: INFO: Pod "pod-projected-secrets-82e033a3-98dd-41f3-88db-6868c481c728": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139435376s Dec 26 14:07:42.320: INFO: Pod "pod-projected-secrets-82e033a3-98dd-41f3-88db-6868c481c728": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14891148s Dec 26 14:07:44.357: INFO: Pod "pod-projected-secrets-82e033a3-98dd-41f3-88db-6868c481c728": Phase="Pending", Reason="", readiness=false. Elapsed: 6.186186484s Dec 26 14:07:46.365: INFO: Pod "pod-projected-secrets-82e033a3-98dd-41f3-88db-6868c481c728": Phase="Pending", Reason="", readiness=false. Elapsed: 8.194145418s Dec 26 14:07:48.376: INFO: Pod "pod-projected-secrets-82e033a3-98dd-41f3-88db-6868c481c728": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.205469404s STEP: Saw pod success Dec 26 14:07:48.377: INFO: Pod "pod-projected-secrets-82e033a3-98dd-41f3-88db-6868c481c728" satisfied condition "success or failure" Dec 26 14:07:48.382: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-82e033a3-98dd-41f3-88db-6868c481c728 container projected-secret-volume-test: STEP: delete the pod Dec 26 14:07:48.498: INFO: Waiting for pod pod-projected-secrets-82e033a3-98dd-41f3-88db-6868c481c728 to disappear Dec 26 14:07:48.504: INFO: Pod pod-projected-secrets-82e033a3-98dd-41f3-88db-6868c481c728 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:07:48.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7365" for this suite. Dec 26 14:07:54.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:07:54.688: INFO: namespace projected-7365 deletion completed in 6.178228611s • [SLOW TEST:16.664 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:07:54.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Dec 26 14:07:54.860: INFO: Waiting up to 5m0s for pod "downward-api-c4a1bff9-6699-410a-a739-54d3b9cce281" in namespace "downward-api-5711" to be "success or failure" Dec 26 14:07:54.887: INFO: Pod "downward-api-c4a1bff9-6699-410a-a739-54d3b9cce281": Phase="Pending", Reason="", readiness=false. Elapsed: 26.332401ms Dec 26 14:07:56.896: INFO: Pod "downward-api-c4a1bff9-6699-410a-a739-54d3b9cce281": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034922214s Dec 26 14:07:58.903: INFO: Pod "downward-api-c4a1bff9-6699-410a-a739-54d3b9cce281": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042233724s Dec 26 14:08:00.910: INFO: Pod "downward-api-c4a1bff9-6699-410a-a739-54d3b9cce281": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049303088s Dec 26 14:08:02.944: INFO: Pod "downward-api-c4a1bff9-6699-410a-a739-54d3b9cce281": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.083479064s STEP: Saw pod success Dec 26 14:08:02.945: INFO: Pod "downward-api-c4a1bff9-6699-410a-a739-54d3b9cce281" satisfied condition "success or failure" Dec 26 14:08:02.975: INFO: Trying to get logs from node iruya-node pod downward-api-c4a1bff9-6699-410a-a739-54d3b9cce281 container dapi-container: STEP: delete the pod Dec 26 14:08:03.103: INFO: Waiting for pod downward-api-c4a1bff9-6699-410a-a739-54d3b9cce281 to disappear Dec 26 14:08:03.117: INFO: Pod downward-api-c4a1bff9-6699-410a-a739-54d3b9cce281 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:08:03.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5711" for this suite. Dec 26 14:08:09.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:08:09.284: INFO: namespace downward-api-5711 deletion completed in 6.163922887s • [SLOW TEST:14.596 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:08:09.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-3bb600c6-16b9-4f03-b6af-7a6966a78143 in namespace container-probe-5268 Dec 26 14:08:17.437: INFO: Started pod liveness-3bb600c6-16b9-4f03-b6af-7a6966a78143 in namespace container-probe-5268 STEP: checking the pod's current state and verifying that restartCount is present Dec 26 14:08:17.442: INFO: Initial restart count of pod liveness-3bb600c6-16b9-4f03-b6af-7a6966a78143 is 0 Dec 26 14:08:35.537: INFO: Restart count of pod container-probe-5268/liveness-3bb600c6-16b9-4f03-b6af-7a6966a78143 is now 1 (18.094715994s elapsed) Dec 26 14:08:57.662: INFO: Restart count of pod container-probe-5268/liveness-3bb600c6-16b9-4f03-b6af-7a6966a78143 is now 2 (40.219609539s elapsed) Dec 26 14:09:18.190: INFO: Restart count of pod container-probe-5268/liveness-3bb600c6-16b9-4f03-b6af-7a6966a78143 is now 3 (1m0.747091005s elapsed) Dec 26 14:09:36.360: INFO: Restart count of pod container-probe-5268/liveness-3bb600c6-16b9-4f03-b6af-7a6966a78143 is now 4 (1m18.917245575s elapsed) Dec 26 14:10:50.787: INFO: Restart count of pod container-probe-5268/liveness-3bb600c6-16b9-4f03-b6af-7a6966a78143 is now 5 (2m33.344494181s elapsed) STEP: deleting the pod 
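The monotonic-restart-count spec starts a pod whose liveness probe eventually fails, then watches status.containerStatuses[0].restartCount climb (0 through 5 over roughly two and a half minutes above); with the default restartPolicy Always, every probe failure produces another restart. A sketch of such a pod; the agnhost image and its liveness subcommand are assumptions, and the embedded handler struct is named ProbeHandler in recent k8s.io/api releases (Handler in older ones):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-example"},
		Spec: corev1.PodSpec{
			// restartPolicy defaults to Always: each probe failure kills the
			// container and the kubelet restarts it, bumping restartCount.
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.39", // assumed image
				Args:  []string{"liveness"},                           // assumed subcommand
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}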
[AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:10:50.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5268" for this suite. Dec 26 14:10:58.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:10:58.962: INFO: namespace container-probe-5268 deletion completed in 8.134540687s • [SLOW TEST:169.676 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:10:58.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Dec 26 14:10:59.096: INFO: Waiting up to 5m0s for pod "pod-45434a25-a966-4e69-84b9-020d364a5b71" in namespace "emptydir-9443" to be "success or failure" Dec 26 14:10:59.111: INFO: Pod "pod-45434a25-a966-4e69-84b9-020d364a5b71": Phase="Pending", Reason="", readiness=false. Elapsed: 14.402574ms Dec 26 14:11:01.119: INFO: Pod "pod-45434a25-a966-4e69-84b9-020d364a5b71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022100423s Dec 26 14:11:03.126: INFO: Pod "pod-45434a25-a966-4e69-84b9-020d364a5b71": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029471069s Dec 26 14:11:05.141: INFO: Pod "pod-45434a25-a966-4e69-84b9-020d364a5b71": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044055912s Dec 26 14:11:07.156: INFO: Pod "pod-45434a25-a966-4e69-84b9-020d364a5b71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059700447s STEP: Saw pod success Dec 26 14:11:07.156: INFO: Pod "pod-45434a25-a966-4e69-84b9-020d364a5b71" satisfied condition "success or failure" Dec 26 14:11:07.165: INFO: Trying to get logs from node iruya-node pod pod-45434a25-a966-4e69-84b9-020d364a5b71 container test-container: STEP: delete the pod Dec 26 14:11:07.295: INFO: Waiting for pod pod-45434a25-a966-4e69-84b9-020d364a5b71 to disappear Dec 26 14:11:07.480: INFO: Pod pod-45434a25-a966-4e69-84b9-020d364a5b71 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:11:07.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9443" for this suite. 
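An emptyDir volume with medium "" (the default) is scratch space on the node's disk; the spec above checks the mount's mode on that default medium. A minimal sketch; the suite's mounttest image does the checking with dedicated flags, so the busybox command here is only a stand-in:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "" selects the node's default storage medium;
					// corev1.StorageMediumMemory would select tmpfs instead.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "ls -ld /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}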
Dec 26 14:11:13.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:11:13.691: INFO: namespace emptydir-9443 deletion completed in 6.199435028s • [SLOW TEST:14.730 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:11:13.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Dec 26 14:11:13.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2797 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Dec 26 14:11:26.047: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n" Dec 26 14:11:26.047: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:11:28.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2797" for this suite. 
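The kubectl invocation above pipes "abcd1234" into an attached --rm run and expects it echoed back before the Job is cleaned up; the stderr deprecation warning is expected, and on later kubectl releases the generators were removed entirely (a plain kubectl run creates a bare Pod, and kubectl create job replaces --generator=job/v1). A sketch of driving the same command from Go; kubectl on PATH is assumed, and without a generator flag a modern kubectl will create a Pod rather than a Job:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("kubectl",
		"--kubeconfig=/root/.kube/config",
		"--namespace=kubectl-2797",
		"run", "e2e-test-rm-busybox-job",
		"--image=docker.io/library/busybox:1.29",
		"--rm=true", "--restart=OnFailure",
		"--attach=true", "--stdin",
		"--", "sh", "-c", "cat && echo 'stdin closed'")
	// What we write to stdin should come back on stdout before 'stdin closed'.
	cmd.Stdin = strings.NewReader("abcd1234")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s (err: %v)\n", out, err)
}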
Dec 26 14:11:34.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:11:34.180: INFO: namespace kubectl-2797 deletion completed in 6.110552656s • [SLOW TEST:20.488 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:11:34.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-924b5977-c94c-41a8-bb80-eead98719bc7 STEP: Creating a pod to test consume configMaps Dec 26 14:11:34.340: INFO: Waiting up to 5m0s for pod "pod-configmaps-d691815c-b55e-45ef-a063-6458d7804b52" in namespace "configmap-5377" to be "success or failure" Dec 26 14:11:34.415: INFO: Pod "pod-configmaps-d691815c-b55e-45ef-a063-6458d7804b52": Phase="Pending", Reason="", readiness=false. Elapsed: 74.516332ms Dec 26 14:11:36.424: INFO: Pod "pod-configmaps-d691815c-b55e-45ef-a063-6458d7804b52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083378363s Dec 26 14:11:38.432: INFO: Pod "pod-configmaps-d691815c-b55e-45ef-a063-6458d7804b52": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09178729s Dec 26 14:11:41.841: INFO: Pod "pod-configmaps-d691815c-b55e-45ef-a063-6458d7804b52": Phase="Pending", Reason="", readiness=false. Elapsed: 7.500281683s Dec 26 14:11:43.867: INFO: Pod "pod-configmaps-d691815c-b55e-45ef-a063-6458d7804b52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.526328893s STEP: Saw pod success Dec 26 14:11:43.867: INFO: Pod "pod-configmaps-d691815c-b55e-45ef-a063-6458d7804b52" satisfied condition "success or failure" Dec 26 14:11:43.892: INFO: Trying to get logs from node iruya-node pod pod-configmaps-d691815c-b55e-45ef-a063-6458d7804b52 container configmap-volume-test: STEP: delete the pod Dec 26 14:11:44.046: INFO: Waiting for pod pod-configmaps-d691815c-b55e-45ef-a063-6458d7804b52 to disappear Dec 26 14:11:44.058: INFO: Pod pod-configmaps-d691815c-b55e-45ef-a063-6458d7804b52 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:11:44.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5377" for this suite. 
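The "as non-root" variants consume the same kind of mapped ConfigMap volume while running under a non-zero UID, so the projected files must still be readable by that user. A sketch of the relevant security context, reusing the ConfigMap name from the log; UID and fsGroup 1000 are assumptions, the framework chooses its own values:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	uid := int64(1000)
	fsGroup := int64(1000)
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-nonroot"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Run the whole pod as a non-root user; fsGroup keeps the
			// projected files accessible to that user.
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &fsGroup},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "configmap-test-volume-map-924b5977-c94c-41a8-bb80-eead98719bc7",
						},
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "id && cat /etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}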
Dec 26 14:11:50.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:11:50.277: INFO: namespace configmap-5377 deletion completed in 6.164206619s • [SLOW TEST:16.096 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:11:50.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-95389314-f1e2-4638-a29a-47a602eeef21 STEP: Creating a pod to test consume secrets Dec 26 14:11:53.253: INFO: Waiting up to 5m0s for pod "pod-secrets-9d268f83-c1ed-48cf-8462-c4e7cefcbb72" in namespace "secrets-6981" to be "success or failure" Dec 26 14:11:53.276: INFO: Pod "pod-secrets-9d268f83-c1ed-48cf-8462-c4e7cefcbb72": Phase="Pending", Reason="", readiness=false. Elapsed: 22.065817ms Dec 26 14:11:55.283: INFO: Pod "pod-secrets-9d268f83-c1ed-48cf-8462-c4e7cefcbb72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029277757s Dec 26 14:11:57.301: INFO: Pod "pod-secrets-9d268f83-c1ed-48cf-8462-c4e7cefcbb72": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04725577s Dec 26 14:11:59.345: INFO: Pod "pod-secrets-9d268f83-c1ed-48cf-8462-c4e7cefcbb72": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091423922s Dec 26 14:12:01.352: INFO: Pod "pod-secrets-9d268f83-c1ed-48cf-8462-c4e7cefcbb72": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098310142s Dec 26 14:12:04.852: INFO: Pod "pod-secrets-9d268f83-c1ed-48cf-8462-c4e7cefcbb72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.59863048s STEP: Saw pod success Dec 26 14:12:04.853: INFO: Pod "pod-secrets-9d268f83-c1ed-48cf-8462-c4e7cefcbb72" satisfied condition "success or failure" Dec 26 14:12:06.682: INFO: Trying to get logs from node iruya-node pod pod-secrets-9d268f83-c1ed-48cf-8462-c4e7cefcbb72 container secret-volume-test: STEP: delete the pod Dec 26 14:12:07.023: INFO: Waiting for pod pod-secrets-9d268f83-c1ed-48cf-8462-c4e7cefcbb72 to disappear Dec 26 14:12:07.041: INFO: Pod pod-secrets-9d268f83-c1ed-48cf-8462-c4e7cefcbb72 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:12:07.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6981" for this suite. 
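The Secret counterpart is symmetric: the named Secret is projected with a key-to-path mapping and an explicit item mode. A minimal sketch, reusing the Secret name from the log; key, target path, mode and image are assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	mode := int32(0400) // the "Item Mode" under test
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map-95389314-f1e2-4638-a29a-47a602eeef21",
						Items:      []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}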
Dec 26 14:12:15.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:12:15.219: INFO: namespace secrets-6981 deletion completed in 8.16411788s • [SLOW TEST:24.942 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:12:15.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-ae758111-1e01-4abc-8312-a484a3d08e43 STEP: Creating a pod to test consume configMaps Dec 26 14:12:15.388: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d4046840-12f9-4d12-a64a-cc9522a27395" in namespace "projected-8568" to be "success or failure" Dec 26 14:12:15.394: INFO: Pod "pod-projected-configmaps-d4046840-12f9-4d12-a64a-cc9522a27395": Phase="Pending", Reason="", readiness=false. Elapsed: 6.24749ms Dec 26 14:12:18.286: INFO: Pod "pod-projected-configmaps-d4046840-12f9-4d12-a64a-cc9522a27395": Phase="Pending", Reason="", readiness=false. Elapsed: 2.897731719s Dec 26 14:12:20.303: INFO: Pod "pod-projected-configmaps-d4046840-12f9-4d12-a64a-cc9522a27395": Phase="Pending", Reason="", readiness=false. Elapsed: 4.915178257s Dec 26 14:12:22.316: INFO: Pod "pod-projected-configmaps-d4046840-12f9-4d12-a64a-cc9522a27395": Phase="Pending", Reason="", readiness=false. Elapsed: 6.928474394s Dec 26 14:12:24.327: INFO: Pod "pod-projected-configmaps-d4046840-12f9-4d12-a64a-cc9522a27395": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.938681691s STEP: Saw pod success Dec 26 14:12:24.327: INFO: Pod "pod-projected-configmaps-d4046840-12f9-4d12-a64a-cc9522a27395" satisfied condition "success or failure" Dec 26 14:12:24.338: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-d4046840-12f9-4d12-a64a-cc9522a27395 container projected-configmap-volume-test: STEP: delete the pod Dec 26 14:12:24.575: INFO: Waiting for pod pod-projected-configmaps-d4046840-12f9-4d12-a64a-cc9522a27395 to disappear Dec 26 14:12:24.587: INFO: Pod pod-projected-configmaps-d4046840-12f9-4d12-a64a-cc9522a27395 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:12:24.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8568" for this suite. 
Dec 26 14:12:30.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:12:30.751: INFO: namespace projected-8568 deletion completed in 6.148147683s • [SLOW TEST:15.532 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:12:30.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-6747470a-17ad-44bf-a27c-df7d24ba7a31 in namespace container-probe-5165 Dec 26 14:12:38.900: INFO: Started pod busybox-6747470a-17ad-44bf-a27c-df7d24ba7a31 in namespace container-probe-5165 STEP: checking the pod's current state and verifying that restartCount is present Dec 26 14:12:38.903: INFO: Initial restart count of pod busybox-6747470a-17ad-44bf-a27c-df7d24ba7a31 is 0 Dec 26 14:13:38.952: INFO: Restart count of pod container-probe-5165/busybox-6747470a-17ad-44bf-a27c-df7d24ba7a31 is now 1 (1m0.049014375s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:13:39.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5165" for this suite.
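The 'cat /tmp/health' pattern: the container creates the file, deletes it after a delay, and the exec probe then starts failing, which matches the single restart observed within the first minute above. A sketch with illustrative timings; ProbeHandler is the field name in recent k8s.io/api releases:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "docker.io/library/busybox:1.29",
				// Healthy for 10s, then the probed file disappears.
				Command: []string{"sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}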
Dec 26 14:13:45.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:13:45.284: INFO: namespace container-probe-5165 deletion completed in 6.192334683s • [SLOW TEST:74.533 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:13:45.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Dec 26 14:13:45.408: INFO: Waiting up to 5m0s for pod "pod-cf7e3854-d8b5-40f9-9894-4d811c4bddb8" in namespace "emptydir-9353" to be "success or failure" Dec 26 14:13:45.429: INFO: Pod "pod-cf7e3854-d8b5-40f9-9894-4d811c4bddb8": Phase="Pending", Reason="", readiness=false. Elapsed: 20.754681ms Dec 26 14:13:47.660: INFO: Pod "pod-cf7e3854-d8b5-40f9-9894-4d811c4bddb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.251465891s Dec 26 14:13:49.673: INFO: Pod "pod-cf7e3854-d8b5-40f9-9894-4d811c4bddb8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.264757777s Dec 26 14:13:51.682: INFO: Pod "pod-cf7e3854-d8b5-40f9-9894-4d811c4bddb8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.273666231s Dec 26 14:13:53.693: INFO: Pod "pod-cf7e3854-d8b5-40f9-9894-4d811c4bddb8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.28467671s Dec 26 14:13:55.702: INFO: Pod "pod-cf7e3854-d8b5-40f9-9894-4d811c4bddb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.293940759s STEP: Saw pod success Dec 26 14:13:55.702: INFO: Pod "pod-cf7e3854-d8b5-40f9-9894-4d811c4bddb8" satisfied condition "success or failure" Dec 26 14:13:55.705: INFO: Trying to get logs from node iruya-node pod pod-cf7e3854-d8b5-40f9-9894-4d811c4bddb8 container test-container: STEP: delete the pod Dec 26 14:13:55.761: INFO: Waiting for pod pod-cf7e3854-d8b5-40f9-9894-4d811c4bddb8 to disappear Dec 26 14:13:55.931: INFO: Pod pod-cf7e3854-d8b5-40f9-9894-4d811c4bddb8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:13:55.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9353" for this suite.
Dec 26 14:14:01.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:14:02.179: INFO: namespace emptydir-9353 deletion completed in 6.22248277s • [SLOW TEST:16.895 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:14:02.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-3e36ed8c-7761-4589-99c0-fad267e70aba STEP: Creating a pod to test consume configMaps Dec 26 14:14:02.317: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4f8e41a0-5729-453c-a2c2-95c3a7842f49" in namespace "projected-8954" to be "success or failure" Dec 26 14:14:02.321: INFO: Pod "pod-projected-configmaps-4f8e41a0-5729-453c-a2c2-95c3a7842f49": Phase="Pending", Reason="", readiness=false. Elapsed: 3.991093ms Dec 26 14:14:04.331: INFO: Pod "pod-projected-configmaps-4f8e41a0-5729-453c-a2c2-95c3a7842f49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0138997s Dec 26 14:14:06.340: INFO: Pod "pod-projected-configmaps-4f8e41a0-5729-453c-a2c2-95c3a7842f49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022525429s Dec 26 14:14:08.351: INFO: Pod "pod-projected-configmaps-4f8e41a0-5729-453c-a2c2-95c3a7842f49": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034136737s Dec 26 14:14:10.367: INFO: Pod "pod-projected-configmaps-4f8e41a0-5729-453c-a2c2-95c3a7842f49": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050397378s Dec 26 14:14:12.647: INFO: Pod "pod-projected-configmaps-4f8e41a0-5729-453c-a2c2-95c3a7842f49": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.330059398s STEP: Saw pod success Dec 26 14:14:12.647: INFO: Pod "pod-projected-configmaps-4f8e41a0-5729-453c-a2c2-95c3a7842f49" satisfied condition "success or failure" Dec 26 14:14:12.678: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-4f8e41a0-5729-453c-a2c2-95c3a7842f49 container projected-configmap-volume-test: STEP: delete the pod Dec 26 14:14:12.894: INFO: Waiting for pod pod-projected-configmaps-4f8e41a0-5729-453c-a2c2-95c3a7842f49 to disappear Dec 26 14:14:12.939: INFO: Pod pod-projected-configmaps-4f8e41a0-5729-453c-a2c2-95c3a7842f49 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:14:12.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8954" for this suite. Dec 26 14:14:18.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:14:19.100: INFO: namespace projected-8954 deletion completed in 6.153835137s • [SLOW TEST:16.920 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:14:19.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Dec 26 14:14:27.865: INFO: Successfully updated pod "labelsupdatee32f4af6-b72f-42cb-bc20-818f25f703ee" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:14:29.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-325" for this suite. 
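The "update labels on modification" assertion relies on the kubelet periodically resyncing downwardAPI volumes, so a label change shows up inside the running container without a restart. A minimal sketch of the pattern (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: labels-demo   # illustrative
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox
    # keep printing the projected labels file so updates are observable
    command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels

After something like kubectl label pod labels-demo key2=value2, the new key eventually appears in /etc/podinfo/labels, which is the update the test waits for.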
Dec 26 14:14:51.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:14:52.056: INFO: namespace downward-api-325 deletion completed in 22.134834573s • [SLOW TEST:32.955 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:14:52.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-6cad56e2-8185-4d69-8c57-2bdf4392292d STEP: Creating a pod to test consume secrets Dec 26 14:14:52.255: INFO: Waiting up to 5m0s for pod "pod-secrets-c54a8223-6111-4c06-bca8-3647297533b2" in namespace "secrets-3835" to be "success or failure" Dec 26 14:14:52.270: INFO: Pod "pod-secrets-c54a8223-6111-4c06-bca8-3647297533b2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.471946ms Dec 26 14:14:54.278: INFO: Pod "pod-secrets-c54a8223-6111-4c06-bca8-3647297533b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021558508s Dec 26 14:14:56.287: INFO: Pod "pod-secrets-c54a8223-6111-4c06-bca8-3647297533b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031341304s Dec 26 14:14:58.294: INFO: Pod "pod-secrets-c54a8223-6111-4c06-bca8-3647297533b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03793357s Dec 26 14:15:00.300: INFO: Pod "pod-secrets-c54a8223-6111-4c06-bca8-3647297533b2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044086538s Dec 26 14:15:02.309: INFO: Pod "pod-secrets-c54a8223-6111-4c06-bca8-3647297533b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053419257s STEP: Saw pod success Dec 26 14:15:02.310: INFO: Pod "pod-secrets-c54a8223-6111-4c06-bca8-3647297533b2" satisfied condition "success or failure" Dec 26 14:15:02.319: INFO: Trying to get logs from node iruya-node pod pod-secrets-c54a8223-6111-4c06-bca8-3647297533b2 container secret-volume-test: STEP: delete the pod Dec 26 14:15:02.390: INFO: Waiting for pod pod-secrets-c54a8223-6111-4c06-bca8-3647297533b2 to disappear Dec 26 14:15:02.398: INFO: Pod pod-secrets-c54a8223-6111-4c06-bca8-3647297533b2 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:15:02.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3835" for this suite. 
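"With mappings" in the Secrets volume test name refers to the items list, which remaps a secret key to a custom relative path inside the mount. A minimal sketch under illustrative names:

apiVersion: v1
kind: Secret
metadata:
  name: secret-map-demo   # illustrative
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-map-demo
      items:   # the "mappings": expose key data-1 under a custom file path
      - key: data-1
        path: new-path-data-1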
Dec 26 14:15:08.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:15:08.702: INFO: namespace secrets-3835 deletion completed in 6.299713219s • [SLOW TEST:16.646 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:15:08.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-f5e2e421-4767-483a-94ee-ff8d5c55d3a0 STEP: Creating a pod to test consume secrets Dec 26 14:15:08.961: INFO: Waiting up to 5m0s for pod "pod-secrets-54506573-5b3a-4158-9121-ab559b3401b4" in namespace "secrets-9891" to be "success or failure" Dec 26 14:15:08.996: INFO: Pod "pod-secrets-54506573-5b3a-4158-9121-ab559b3401b4": Phase="Pending", Reason="", readiness=false. Elapsed: 34.469804ms Dec 26 14:15:12.619: INFO: Pod "pod-secrets-54506573-5b3a-4158-9121-ab559b3401b4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.657623408s Dec 26 14:15:14.633: INFO: Pod "pod-secrets-54506573-5b3a-4158-9121-ab559b3401b4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.671483565s Dec 26 14:15:16.647: INFO: Pod "pod-secrets-54506573-5b3a-4158-9121-ab559b3401b4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.68624452s Dec 26 14:15:19.453: INFO: Pod "pod-secrets-54506573-5b3a-4158-9121-ab559b3401b4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.491623434s Dec 26 14:15:21.462: INFO: Pod "pod-secrets-54506573-5b3a-4158-9121-ab559b3401b4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.501218189s Dec 26 14:15:23.469: INFO: Pod "pod-secrets-54506573-5b3a-4158-9121-ab559b3401b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.507884391s STEP: Saw pod success Dec 26 14:15:23.469: INFO: Pod "pod-secrets-54506573-5b3a-4158-9121-ab559b3401b4" satisfied condition "success or failure" Dec 26 14:15:23.472: INFO: Trying to get logs from node iruya-node pod pod-secrets-54506573-5b3a-4158-9121-ab559b3401b4 container secret-volume-test: STEP: delete the pod Dec 26 14:15:23.623: INFO: Waiting for pod pod-secrets-54506573-5b3a-4158-9121-ab559b3401b4 to disappear Dec 26 14:15:23.709: INFO: Pod pod-secrets-54506573-5b3a-4158-9121-ab559b3401b4 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:15:23.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9891" for this suite. 
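The defaultMode variant checks the permission bits the kubelet applies to every file in the secret volume. A sketch of the relevant spec (assumes a Secret with a data-1 key already exists; names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # print the permission bits the kubelet applied to the mounted file
    command: ["/bin/sh", "-c", "stat -c '%a' /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-mode-demo   # assumed to exist with key data-1
      defaultMode: 0400              # octal in YAML; use 256 (decimal) in JSON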
Dec 26 14:15:29.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:15:29.835: INFO: namespace secrets-9891 deletion completed in 6.115517592s • [SLOW TEST:21.133 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:15:29.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 26 14:15:29.936: INFO: Waiting up to 5m0s for pod "downwardapi-volume-da04ca40-3f5b-4d3d-b96b-eb717d13d144" in namespace "projected-8429" to be "success or failure" Dec 26 14:15:29.952: INFO: Pod "downwardapi-volume-da04ca40-3f5b-4d3d-b96b-eb717d13d144": Phase="Pending", Reason="", readiness=false. Elapsed: 15.394664ms Dec 26 14:15:31.958: INFO: Pod "downwardapi-volume-da04ca40-3f5b-4d3d-b96b-eb717d13d144": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021727426s Dec 26 14:15:33.973: INFO: Pod "downwardapi-volume-da04ca40-3f5b-4d3d-b96b-eb717d13d144": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036372628s Dec 26 14:15:35.980: INFO: Pod "downwardapi-volume-da04ca40-3f5b-4d3d-b96b-eb717d13d144": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04337339s Dec 26 14:15:37.988: INFO: Pod "downwardapi-volume-da04ca40-3f5b-4d3d-b96b-eb717d13d144": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051190877s Dec 26 14:15:39.994: INFO: Pod "downwardapi-volume-da04ca40-3f5b-4d3d-b96b-eb717d13d144": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.057100627s STEP: Saw pod success Dec 26 14:15:39.994: INFO: Pod "downwardapi-volume-da04ca40-3f5b-4d3d-b96b-eb717d13d144" satisfied condition "success or failure" Dec 26 14:15:39.997: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-da04ca40-3f5b-4d3d-b96b-eb717d13d144 container client-container: STEP: delete the pod Dec 26 14:15:40.107: INFO: Waiting for pod downwardapi-volume-da04ca40-3f5b-4d3d-b96b-eb717d13d144 to disappear Dec 26 14:15:40.113: INFO: Pod downwardapi-volume-da04ca40-3f5b-4d3d-b96b-eb717d13d144 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:15:40.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8429" for this suite. Dec 26 14:15:46.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:15:46.361: INFO: namespace projected-8429 deletion completed in 6.225698694s • [SLOW TEST:16.525 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:15:46.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Dec 26 14:15:46.919: INFO: Waiting up to 5m0s for pod "client-containers-18a56c48-905a-41f5-b23a-f6ed6e4387cd" in namespace "containers-7602" to be "success or failure" Dec 26 14:15:46.945: INFO: Pod "client-containers-18a56c48-905a-41f5-b23a-f6ed6e4387cd": Phase="Pending", Reason="", readiness=false. Elapsed: 26.248553ms Dec 26 14:15:48.953: INFO: Pod "client-containers-18a56c48-905a-41f5-b23a-f6ed6e4387cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033780594s Dec 26 14:15:50.964: INFO: Pod "client-containers-18a56c48-905a-41f5-b23a-f6ed6e4387cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045003564s Dec 26 14:15:52.972: INFO: Pod "client-containers-18a56c48-905a-41f5-b23a-f6ed6e4387cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053301962s Dec 26 14:15:54.978: INFO: Pod "client-containers-18a56c48-905a-41f5-b23a-f6ed6e4387cd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.059174465s STEP: Saw pod success Dec 26 14:15:54.978: INFO: Pod "client-containers-18a56c48-905a-41f5-b23a-f6ed6e4387cd" satisfied condition "success or failure" Dec 26 14:15:54.983: INFO: Trying to get logs from node iruya-node pod client-containers-18a56c48-905a-41f5-b23a-f6ed6e4387cd container test-container: STEP: delete the pod Dec 26 14:15:55.056: INFO: Waiting for pod client-containers-18a56c48-905a-41f5-b23a-f6ed6e4387cd to disappear Dec 26 14:15:55.149: INFO: Pod client-containers-18a56c48-905a-41f5-b23a-f6ed6e4387cd no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:15:55.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7602" for this suite. Dec 26 14:16:01.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:16:01.289: INFO: namespace containers-7602 deletion completed in 6.131916653s • [SLOW TEST:14.927 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:16:01.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Dec 26 14:16:12.031: INFO: Successfully updated pod "annotationupdate080aa6a8-324a-4f0c-82c7-4c271ac6391a" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:16:16.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-961" for this suite. 
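The "override all" case above exercises both halves of the container startup contract: command replaces the image ENTRYPOINT and args replaces the image CMD. A minimal sketch (names and arguments are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: command-override-demo   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox
    command: ["/bin/echo"]                                 # replaces ENTRYPOINT
    args: ["both", "command", "and", "args", "overridden"] # replaces CMD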
Dec 26 14:16:38.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:16:38.326: INFO: namespace downward-api-961 deletion completed in 22.128610602s • [SLOW TEST:37.037 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:16:38.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-9393/configmap-test-39e3f352-1cb6-486f-a5b8-ea62f5adf407 STEP: Creating a pod to test consume configMaps Dec 26 14:16:38.400: INFO: Waiting up to 5m0s for pod "pod-configmaps-8cd090b9-8ef2-43aa-bc75-3dcad648a46b" in namespace "configmap-9393" to be "success or failure" Dec 26 14:16:38.410: INFO: Pod "pod-configmaps-8cd090b9-8ef2-43aa-bc75-3dcad648a46b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.847866ms Dec 26 14:16:40.416: INFO: Pod "pod-configmaps-8cd090b9-8ef2-43aa-bc75-3dcad648a46b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016382434s Dec 26 14:16:42.427: INFO: Pod "pod-configmaps-8cd090b9-8ef2-43aa-bc75-3dcad648a46b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027473072s Dec 26 14:16:44.444: INFO: Pod "pod-configmaps-8cd090b9-8ef2-43aa-bc75-3dcad648a46b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044173832s Dec 26 14:16:46.451: INFO: Pod "pod-configmaps-8cd090b9-8ef2-43aa-bc75-3dcad648a46b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050910343s Dec 26 14:16:48.461: INFO: Pod "pod-configmaps-8cd090b9-8ef2-43aa-bc75-3dcad648a46b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060645288s STEP: Saw pod success Dec 26 14:16:48.461: INFO: Pod "pod-configmaps-8cd090b9-8ef2-43aa-bc75-3dcad648a46b" satisfied condition "success or failure" Dec 26 14:16:48.464: INFO: Trying to get logs from node iruya-node pod pod-configmaps-8cd090b9-8ef2-43aa-bc75-3dcad648a46b container env-test: STEP: delete the pod Dec 26 14:16:48.518: INFO: Waiting for pod pod-configmaps-8cd090b9-8ef2-43aa-bc75-3dcad648a46b to disappear Dec 26 14:16:48.604: INFO: Pod pod-configmaps-8cd090b9-8ef2-43aa-bc75-3dcad648a46b no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:16:48.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9393" for this suite. 
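Consuming a ConfigMap via an environment variable, as the configmap-9393 test does, maps a single key into the container environment with valueFrom/configMapKeyRef. A minimal sketch (names are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-env-demo   # illustrative
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-pod
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["/bin/sh", "-c", "echo $CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-env-demo
          key: data-1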
Dec 26 14:16:54.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:16:54.941: INFO: namespace configmap-9393 deletion completed in 6.303553183s • [SLOW TEST:16.615 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:16:54.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Dec 26 14:16:54.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4681' Dec 26 14:16:55.297: INFO: stderr: "" Dec 26 14:16:55.297: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Dec 26 14:16:56.308: INFO: Selector matched 1 pods for map[app:redis] Dec 26 14:16:56.308: INFO: Found 0 / 1 Dec 26 14:16:58.607: INFO: Selector matched 1 pods for map[app:redis] Dec 26 14:16:58.607: INFO: Found 0 / 1 Dec 26 14:16:59.312: INFO: Selector matched 1 pods for map[app:redis] Dec 26 14:16:59.313: INFO: Found 0 / 1 Dec 26 14:17:00.304: INFO: Selector matched 1 pods for map[app:redis] Dec 26 14:17:00.304: INFO: Found 0 / 1 Dec 26 14:17:01.590: INFO: Selector matched 1 pods for map[app:redis] Dec 26 14:17:01.590: INFO: Found 0 / 1 Dec 26 14:17:02.304: INFO: Selector matched 1 pods for map[app:redis] Dec 26 14:17:02.304: INFO: Found 0 / 1 Dec 26 14:17:03.306: INFO: Selector matched 1 pods for map[app:redis] Dec 26 14:17:03.306: INFO: Found 0 / 1 Dec 26 14:17:04.310: INFO: Selector matched 1 pods for map[app:redis] Dec 26 14:17:04.310: INFO: Found 0 / 1 Dec 26 14:17:05.304: INFO: Selector matched 1 pods for map[app:redis] Dec 26 14:17:05.304: INFO: Found 0 / 1 Dec 26 14:17:06.304: INFO: Selector matched 1 pods for map[app:redis] Dec 26 14:17:06.304: INFO: Found 1 / 1 Dec 26 14:17:06.304: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Dec 26 14:17:06.310: INFO: Selector matched 1 pods for map[app:redis] Dec 26 14:17:06.310: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
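The patch command that runs next applies a merge patch to the pod's metadata. For reference, kubectl patch -p also accepts the same body in YAML form (a sketch, not from the test source):

metadata:
  annotations:
    x: "y"

A merge patch of this shape adds or overwrites only the listed annotation keys and leaves the rest of the pod untouched, which is why the subsequent annotation check passes without the pod being recreated.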
Dec 26 14:17:06.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-wgnpv --namespace=kubectl-4681 -p {"metadata":{"annotations":{"x":"y"}}}' Dec 26 14:17:06.474: INFO: stderr: "" Dec 26 14:17:06.474: INFO: stdout: "pod/redis-master-wgnpv patched\n" STEP: checking annotations Dec 26 14:17:06.481: INFO: Selector matched 1 pods for map[app:redis] Dec 26 14:17:06.482: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:17:06.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4681" for this suite. Dec 26 14:17:28.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:17:28.745: INFO: namespace kubectl-4681 deletion completed in 22.258405124s • [SLOW TEST:33.804 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:17:28.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-2c1070cf-7c8b-47a5-854b-438e73006df3 STEP: Creating a pod to test consume secrets Dec 26 14:17:28.898: INFO: Waiting up to 5m0s for pod "pod-secrets-0fd8f2f4-63f2-4078-b6bf-1c3a487632bc" in namespace "secrets-5548" to be "success or failure" Dec 26 14:17:28.916: INFO: Pod "pod-secrets-0fd8f2f4-63f2-4078-b6bf-1c3a487632bc": Phase="Pending", Reason="", readiness=false. Elapsed: 15.582423ms Dec 26 14:17:30.922: INFO: Pod "pod-secrets-0fd8f2f4-63f2-4078-b6bf-1c3a487632bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021619557s Dec 26 14:17:33.256: INFO: Pod "pod-secrets-0fd8f2f4-63f2-4078-b6bf-1c3a487632bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.355036165s Dec 26 14:17:35.275: INFO: Pod "pod-secrets-0fd8f2f4-63f2-4078-b6bf-1c3a487632bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.374656688s Dec 26 14:17:37.291: INFO: Pod "pod-secrets-0fd8f2f4-63f2-4078-b6bf-1c3a487632bc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.390567644s Dec 26 14:17:39.299: INFO: Pod "pod-secrets-0fd8f2f4-63f2-4078-b6bf-1c3a487632bc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.398674474s STEP: Saw pod success Dec 26 14:17:39.300: INFO: Pod "pod-secrets-0fd8f2f4-63f2-4078-b6bf-1c3a487632bc" satisfied condition "success or failure" Dec 26 14:17:39.303: INFO: Trying to get logs from node iruya-node pod pod-secrets-0fd8f2f4-63f2-4078-b6bf-1c3a487632bc container secret-volume-test: STEP: delete the pod Dec 26 14:17:39.488: INFO: Waiting for pod pod-secrets-0fd8f2f4-63f2-4078-b6bf-1c3a487632bc to disappear Dec 26 14:17:39.507: INFO: Pod pod-secrets-0fd8f2f4-63f2-4078-b6bf-1c3a487632bc no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:17:39.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5548" for this suite. Dec 26 14:17:45.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:17:45.712: INFO: namespace secrets-5548 deletion completed in 6.188357205s • [SLOW TEST:16.967 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:17:45.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 26 14:18:07.913: INFO: Container started at 2019-12-26 14:17:52 +0000 UTC, pod became ready at 2019-12-26 14:18:07 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:18:07.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7510" for this suite. 
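The readiness-probe test above hinges on two properties: the pod cannot report Ready before initialDelaySeconds elapses (hence the roughly 15-second gap between "Container started" at 14:17:52 and "pod became ready" at 14:18:07), and a failing readiness probe never restarts the container; it only removes the pod from Service endpoints. A minimal sketch (names and timings are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo   # illustrative
spec:
  containers:
  - name: busybox
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/ready; sleep 600"]
    readinessProbe:
      exec:
        command: ["cat", "/tmp/ready"]
      initialDelaySeconds: 15   # pod cannot report Ready before this elapses
      periodSeconds: 5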
Dec 26 14:18:29.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:18:30.028: INFO: namespace container-probe-7510 deletion completed in 22.108397753s • [SLOW TEST:44.315 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:18:30.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 26 14:18:30.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-1283' Dec 26 14:18:30.284: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 26 14:18:30.284: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Dec 26 14:18:34.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1283' Dec 26 14:18:35.102: INFO: stderr: "" Dec 26 14:18:35.103: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:18:35.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1283" for this suite. 
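The deprecation warning above is expected on this kubectl version: the --generator=deployment/apps.v1 flag expands to an apps/v1 Deployment roughly like the sketch below (the run: label convention is what the generators historically set; treat the exact labels as an assumption rather than a guarantee). On newer clients, kubectl create deployment produces the equivalent object.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  labels:
    run: e2e-test-nginx-deployment   # assumed generator label
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine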
Dec 26 14:18:59.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:18:59.230: INFO: namespace kubectl-1283 deletion completed in 24.120521152s • [SLOW TEST:29.202 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:18:59.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Dec 26 14:19:08.145: INFO: Successfully updated pod "labelsupdate18242824-b219-47b4-80de-16afde453688" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:19:10.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1687" for this suite. 
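The projected downwardAPI variant behaves like the plain downwardAPI volume shown earlier, but a projected volume can combine downwardAPI, configMap, secret, and serviceAccountToken sources under one mount point, and label updates propagate into it the same way. A minimal sketch (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: projected-labels-demo   # illustrative
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels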
Dec 26 14:19:32.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:19:33.064: INFO: namespace projected-1687 deletion completed in 22.210176375s • [SLOW TEST:33.833 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:19:33.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 26 14:19:33.134: INFO: Creating deployment "nginx-deployment" Dec 26 14:19:33.149: INFO: Waiting for observed generation 1 Dec 26 14:19:37.905: INFO: Waiting for all required pods to come up Dec 26 14:19:39.216: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Dec 26 14:20:11.240: INFO: Waiting for deployment "nginx-deployment" to complete Dec 26 14:20:11.247: INFO: Updating deployment "nginx-deployment" with a non-existent image Dec 26 14:20:11.257: INFO: Updating deployment nginx-deployment Dec 26 14:20:11.257: INFO: Waiting for observed generation 2 Dec 26 14:20:15.486: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Dec 26 14:20:16.638: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Dec 26 14:20:16.650: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Dec 26 14:20:17.231: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Dec 26 14:20:17.231: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Dec 26 14:20:17.233: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Dec 26 14:20:17.237: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Dec 26 14:20:17.237: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Dec 26 14:20:17.246: INFO: Updating deployment nginx-deployment Dec 26 14:20:17.246: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Dec 26 14:20:17.785: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Dec 26 14:20:17.804: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Dec 26 14:20:29.444: INFO: Deployment "nginx-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-8606,SelfLink:/apis/apps/v1/namespaces/deployment-8606/deployments/nginx-deployment,UID:431aeff0-8fdd-46dd-b746-bebd16c6a5c3,ResourceVersion:18151286,Generation:3,CreationTimestamp:2019-12-26 14:19:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2019-12-26 14:20:12 +0000 UTC 2019-12-26 14:19:33 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2019-12-26 14:20:17 +0000 UTC 2019-12-26 14:20:17 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Dec 26 14:20:33.626: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-8606,SelfLink:/apis/apps/v1/namespaces/deployment-8606/replicasets/nginx-deployment-55fb7cb77f,UID:8f445b02-2692-4cd7-b53f-996acc54a0ee,ResourceVersion:18151279,Generation:3,CreationTimestamp:2019-12-26 14:20:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 431aeff0-8fdd-46dd-b746-bebd16c6a5c3 0xc0028b1c57 0xc0028b1c58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 26 14:20:33.626: INFO: All old ReplicaSets of Deployment "nginx-deployment": Dec 26 14:20:33.626: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-8606,SelfLink:/apis/apps/v1/namespaces/deployment-8606/replicasets/nginx-deployment-7b8c6f4498,UID:838977c2-933f-4661-a7b7-74c2ad196907,ResourceVersion:18151284,Generation:3,CreationTimestamp:2019-12-26 14:19:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 431aeff0-8fdd-46dd-b746-bebd16c6a5c3 0xc0028b1d27 0xc0028b1d28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Dec 26 14:20:35.060: INFO: Pod "nginx-deployment-55fb7cb77f-297x5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-297x5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-55fb7cb77f-297x5,UID:33b5aa6e-9fa1-4e23-8e24-bd77cc522cae,ResourceVersion:18151275,Generation:0,CreationTimestamp:2019-12-26 14:20:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f445b02-2692-4cd7-b53f-996acc54a0ee 0xc002ca06c7 0xc002ca06c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ca0740} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ca0760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:21 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.060: INFO: Pod "nginx-deployment-55fb7cb77f-488vw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-488vw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-55fb7cb77f-488vw,UID:94378ca3-845a-4f34-990a-fc881949f578,ResourceVersion:18151217,Generation:0,CreationTimestamp:2019-12-26 14:20:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f445b02-2692-4cd7-b53f-996acc54a0ee 0xc002ca07e7 0xc002ca07e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ca0850} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002ca0870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:11 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-26 14:20:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.061: INFO: Pod "nginx-deployment-55fb7cb77f-5npkb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5npkb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-55fb7cb77f-5npkb,UID:17864eb3-a1c8-4880-b6c3-1a138d4bb431,ResourceVersion:18151216,Generation:0,CreationTimestamp:2019-12-26 14:20:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f445b02-2692-4cd7-b53f-996acc54a0ee 0xc002ca0947 0xc002ca0948}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ca09c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ca09e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:11 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-26 14:20:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 26 14:20:35.061: INFO: Pod "nginx-deployment-55fb7cb77f-6fdp4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6fdp4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-55fb7cb77f-6fdp4,UID:7d74a9a0-dca8-44c6-8feb-3db35b0eb092,ResourceVersion:18151274,Generation:0,CreationTimestamp:2019-12-26 14:20:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f445b02-2692-4cd7-b53f-996acc54a0ee 0xc002ca0ab7 0xc002ca0ab8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ca0b20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ca0b40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:21 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 26 14:20:35.061: INFO: Pod "nginx-deployment-55fb7cb77f-6vtsn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6vtsn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-55fb7cb77f-6vtsn,UID:1fc0398f-0170-4d24-bca3-7e67f292abf9,ResourceVersion:18151280,Generation:0,CreationTimestamp:2019-12-26 14:20:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f445b02-2692-4cd7-b53f-996acc54a0ee 0xc002ca0bc7 0xc002ca0bc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ca0c40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ca0c60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.061: INFO: Pod "nginx-deployment-55fb7cb77f-8hnxb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8hnxb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-55fb7cb77f-8hnxb,UID:9fc9e3ee-ccff-434b-a05c-eb088f17184e,ResourceVersion:18151261,Generation:0,CreationTimestamp:2019-12-26 14:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f445b02-2692-4cd7-b53f-996acc54a0ee 0xc002ca0ce7 0xc002ca0ce8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ca0d50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ca0d70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:20 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.061: INFO: Pod "nginx-deployment-55fb7cb77f-fbdp4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fbdp4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-55fb7cb77f-fbdp4,UID:3a377ad8-e48e-4f1d-a0f1-57ac8c87567f,ResourceVersion:18151273,Generation:0,CreationTimestamp:2019-12-26 14:20:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f445b02-2692-4cd7-b53f-996acc54a0ee 0xc002ca0df7 0xc002ca0df8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ca0e60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ca0e80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:21 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.061: INFO: Pod "nginx-deployment-55fb7cb77f-mbplk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mbplk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-55fb7cb77f-mbplk,UID:56bace93-bee8-4110-96dc-9e5618ac7b7e,ResourceVersion:18151190,Generation:0,CreationTimestamp:2019-12-26 14:20:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f445b02-2692-4cd7-b53f-996acc54a0ee 0xc002ca0f07 0xc002ca0f08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ca0f80} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002ca0fa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:11 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-26 14:20:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.062: INFO: Pod "nginx-deployment-55fb7cb77f-rpgqc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rpgqc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-55fb7cb77f-rpgqc,UID:31599be4-8a56-4f5e-83fe-2705611321a8,ResourceVersion:18151194,Generation:0,CreationTimestamp:2019-12-26 14:20:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f445b02-2692-4cd7-b53f-996acc54a0ee 0xc002ca1077 0xc002ca1078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ca10e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ca1100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:11 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:11 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-26 14:20:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.062: INFO: Pod "nginx-deployment-55fb7cb77f-ts2v6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ts2v6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-55fb7cb77f-ts2v6,UID:3765e152-a54f-4119-a686-ed00e114ad88,ResourceVersion:18151296,Generation:0,CreationTimestamp:2019-12-26 14:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f445b02-2692-4cd7-b53f-996acc54a0ee 0xc002ca11d7 0xc002ca11d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ca1240} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ca1260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:17 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-26 14:20:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 
nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 26 14:20:35.062: INFO: Pod "nginx-deployment-55fb7cb77f-w9xt8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-w9xt8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-55fb7cb77f-w9xt8,UID:cd081dca-4c91-4273-b5c3-61c595cbcfaf,ResourceVersion:18151203,Generation:0,CreationTimestamp:2019-12-26 14:20:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f445b02-2692-4cd7-b53f-996acc54a0ee 0xc002ca1337 0xc002ca1338}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ca13b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ca13d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:11 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-26 14:20:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 26 14:20:35.062: INFO: Pod "nginx-deployment-55fb7cb77f-wcnmf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wcnmf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-55fb7cb77f-wcnmf,UID:e4f534d8-687e-47ee-aa97-107de7e448ad,ResourceVersion:18151259,Generation:0,CreationTimestamp:2019-12-26 14:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f445b02-2692-4cd7-b53f-996acc54a0ee 0xc002ca14a7 0xc002ca14a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ca1520} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ca1540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:20 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.062: INFO: Pod "nginx-deployment-55fb7cb77f-xc7h5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xc7h5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-55fb7cb77f-xc7h5,UID:2d520e03-a738-4a84-8be1-3b2a003175e6,ResourceVersion:18151272,Generation:0,CreationTimestamp:2019-12-26 14:20:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f445b02-2692-4cd7-b53f-996acc54a0ee 0xc002ca15c7 0xc002ca15c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ca1660} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ca1680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:21 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.062: INFO: Pod "nginx-deployment-7b8c6f4498-286tk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-286tk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-7b8c6f4498-286tk,UID:4a09de54-8e5a-4215-bb6d-d7c602f6af3b,ResourceVersion:18151154,Generation:0,CreationTimestamp:2019-12-26 14:19:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 838977c2-933f-4661-a7b7-74c2ad196907 0xc002ca1707 0xc002ca1708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ca1770} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ca1790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:19:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:19:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2019-12-26 14:19:36 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-26 14:20:08 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0ba92dc68b89ced1ed67167c2519b4f97441d43c618ff095b579340cdbc57578}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.063: INFO: Pod "nginx-deployment-7b8c6f4498-45h24" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-45h24,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-7b8c6f4498-45h24,UID:e868c3e0-0dfd-4358-bd97-164f2fc2e621,ResourceVersion:18151313,Generation:0,CreationTimestamp:2019-12-26 14:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 838977c2-933f-4661-a7b7-74c2ad196907 0xc002ca1867 0xc002ca1868}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ca18e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ca1900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:20 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-26 14:20:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.063: INFO: Pod "nginx-deployment-7b8c6f4498-4zj6s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4zj6s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-7b8c6f4498-4zj6s,UID:84f932ce-056b-4813-a2b9-fd0c2d4193e8,ResourceVersion:18151297,Generation:0,CreationTimestamp:2019-12-26 14:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 838977c2-933f-4661-a7b7-74c2ad196907 0xc002ca19c7 0xc002ca19c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ca1a40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ca1a60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:20 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-26 14:20:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.063: INFO: Pod "nginx-deployment-7b8c6f4498-5zxh6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5zxh6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-7b8c6f4498-5zxh6,UID:2e71aff4-ef76-4de3-8a95-b945659369c2,ResourceVersion:18151132,Generation:0,CreationTimestamp:2019-12-26 14:19:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 838977c2-933f-4661-a7b7-74c2ad196907 0xc002ca1b27 0xc002ca1b28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ca1ba0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ca1bc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:19:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:19:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2019-12-26 14:19:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-26 14:20:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0912602051d780dbb9acaa7c406ce1f2b1a5a443b58e920686785680af442fd4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.064: INFO: Pod "nginx-deployment-7b8c6f4498-7n2c2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7n2c2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-7b8c6f4498-7n2c2,UID:beb37190-79e6-42c9-9a93-f0bba0b2aeb6,ResourceVersion:18151139,Generation:0,CreationTimestamp:2019-12-26 14:19:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 838977c2-933f-4661-a7b7-74c2ad196907 0xc002ca1c97 0xc002ca1c98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ca1d10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ca1d30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:19:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:19:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-26 14:19:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-26 14:20:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://bbd4032af493ee097fa6a0e884917a218522c541fffb36b07d76185418aa6962}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.064: INFO: Pod "nginx-deployment-7b8c6f4498-7qxfr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7qxfr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-7b8c6f4498-7qxfr,UID:86fcf194-1b14-4fe9-ad4d-927a7ce39de9,ResourceVersion:18151262,Generation:0,CreationTimestamp:2019-12-26 14:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 838977c2-933f-4661-a7b7-74c2ad196907 0xc002ca1e07 0xc002ca1e08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ca1e80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ca1ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:20 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.064: INFO: Pod "nginx-deployment-7b8c6f4498-8zrls" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8zrls,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-7b8c6f4498-8zrls,UID:54451b57-4b2d-45e4-be34-c3cff73dca51,ResourceVersion:18151277,Generation:0,CreationTimestamp:2019-12-26 14:20:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 838977c2-933f-4661-a7b7-74c2ad196907 0xc002ca1f27 0xc002ca1f28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ca1f90} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002ca1fb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:21 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 26 14:20:35.064: INFO: Pod "nginx-deployment-7b8c6f4498-98njh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-98njh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-7b8c6f4498-98njh,UID:33d6985f-d750-4de7-be26-0b0a1a39be1d,ResourceVersion:18151266,Generation:0,CreationTimestamp:2019-12-26 14:20:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 838977c2-933f-4661-a7b7-74c2ad196907 0xc002ce8037 0xc002ce8038}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ce80b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ce80d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:21 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 26 14:20:35.064: INFO: Pod "nginx-deployment-7b8c6f4498-fw8ht" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fw8ht,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-7b8c6f4498-fw8ht,UID:e1131b76-c230-4bb1-abdb-bf5ef3046421,ResourceVersion:18151148,Generation:0,CreationTimestamp:2019-12-26 14:19:33 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 838977c2-933f-4661-a7b7-74c2ad196907 0xc002ce8157 0xc002ce8158}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ce81c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ce81e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:19:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:19:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2019-12-26 14:19:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-26 14:20:07 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://20ca800694457f627bf006f6c5a8adf017fcf5bec569cbe0b54cfeed8675d264}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.064: INFO: Pod "nginx-deployment-7b8c6f4498-jh95b" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jh95b,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-7b8c6f4498-jh95b,UID:a2c34855-86ed-4967-9094-a62e4b125619,ResourceVersion:18151281,Generation:0,CreationTimestamp:2019-12-26 14:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 838977c2-933f-4661-a7b7-74c2ad196907 0xc002ce82b7 
0xc002ce82b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ce8330} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ce8350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:17 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-26 14:20:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.064: INFO: Pod "nginx-deployment-7b8c6f4498-lhqmg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lhqmg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-7b8c6f4498-lhqmg,UID:1e78263e-273b-47e1-9d1c-02f820c1fd0c,ResourceVersion:18151124,Generation:0,CreationTimestamp:2019-12-26 14:19:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 838977c2-933f-4661-a7b7-74c2ad196907 0xc002ce8417 0xc002ce8418}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] 
map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ce84a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ce84c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:19:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:19:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2019-12-26 14:19:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-26 14:20:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6ec65eb727c3750fc94080ff284de09ca90e091b21cd4240c640e3471487a749}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.065: INFO: Pod "nginx-deployment-7b8c6f4498-lnwg7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lnwg7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-7b8c6f4498-lnwg7,UID:d77ad677-7dc8-44e7-b70c-7163f4c59142,ResourceVersion:18151282,Generation:0,CreationTimestamp:2019-12-26 14:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 838977c2-933f-4661-a7b7-74c2ad196907 0xc002ce8597 0xc002ce8598}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false 
false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ce8600} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ce8620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:17 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-26 14:20:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.065: INFO: Pod "nginx-deployment-7b8c6f4498-mvtzh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mvtzh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-7b8c6f4498-mvtzh,UID:d1efd79d-7d70-45a4-8fbf-d42db09ba7b5,ResourceVersion:18151260,Generation:0,CreationTimestamp:2019-12-26 14:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 838977c2-933f-4661-a7b7-74c2ad196907 0xc002ce86e7 0xc002ce86e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ce8760} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ce8780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:20 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.065: INFO: Pod "nginx-deployment-7b8c6f4498-pc58w" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pc58w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-7b8c6f4498-pc58w,UID:923c7054-8fdb-416a-957f-824aa0b26de9,ResourceVersion:18151151,Generation:0,CreationTimestamp:2019-12-26 14:19:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 838977c2-933f-4661-a7b7-74c2ad196907 0xc002ce8807 0xc002ce8808}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ce8870} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002ce8890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:19:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:19:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2019-12-26 14:19:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-26 14:20:07 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9bd0bd6b1bb5eb4fa51474f6fb3be83a9ea9b8abd0ed8617c7a77f017f8b55fe}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.065: INFO: Pod "nginx-deployment-7b8c6f4498-q6x5k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q6x5k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-7b8c6f4498-q6x5k,UID:bfa6eeb6-5d86-4da0-a3d3-c3c0c7d93c72,ResourceVersion:18151269,Generation:0,CreationTimestamp:2019-12-26 14:20:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 838977c2-933f-4661-a7b7-74c2ad196907 0xc002ce8977 0xc002ce8978}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ce89f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ce8a20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:21 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.065: INFO: Pod "nginx-deployment-7b8c6f4498-rqwsf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rqwsf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-7b8c6f4498-rqwsf,UID:5d777504-4bfd-45cb-9507-3ddb506210d2,ResourceVersion:18151267,Generation:0,CreationTimestamp:2019-12-26 14:20:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 838977c2-933f-4661-a7b7-74c2ad196907 0xc002ce8aa7 0xc002ce8aa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ce8b20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ce8b40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:21 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.066: INFO: Pod "nginx-deployment-7b8c6f4498-sv9pj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sv9pj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-7b8c6f4498-sv9pj,UID:b25b9a50-20b7-415d-8850-942cb70e1b83,ResourceVersion:18151128,Generation:0,CreationTimestamp:2019-12-26 14:19:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 838977c2-933f-4661-a7b7-74c2ad196907 0xc002ce8bc7 
0xc002ce8bc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ce8c40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ce8c60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:19:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:19:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2019-12-26 14:19:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-26 14:20:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://02b8a7ad6ef141fc80963ccc46b10fe1863c42e901b2eb76d1fa426698eff08a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.066: INFO: Pod "nginx-deployment-7b8c6f4498-tf4c2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tf4c2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-7b8c6f4498-tf4c2,UID:eca1ba15-3240-4904-a39c-f76582bd8a2d,ResourceVersion:18151268,Generation:0,CreationTimestamp:2019-12-26 14:20:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 838977c2-933f-4661-a7b7-74c2ad196907 0xc002ce8d47 0xc002ce8d48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx 
docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ce8db0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ce8dd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:21 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.066: INFO: Pod "nginx-deployment-7b8c6f4498-wh8nk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wh8nk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-7b8c6f4498-wh8nk,UID:46ba0203-273b-470b-8fd0-5bccaf22b1aa,ResourceVersion:18151135,Generation:0,CreationTimestamp:2019-12-26 14:19:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 838977c2-933f-4661-a7b7-74c2ad196907 0xc002ce8e67 0xc002ce8e68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ce8ee0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ce8f00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:19:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:19:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-26 14:19:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-26 14:20:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://bd073198248e78a0ea704cfc8954d0f8570f5a90b741ca0132197b69ac96adfb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 26 14:20:35.066: INFO: Pod "nginx-deployment-7b8c6f4498-wphg2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wphg2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8606,SelfLink:/api/v1/namespaces/deployment-8606/pods/nginx-deployment-7b8c6f4498-wphg2,UID:0b9d48e2-ced0-41ce-a718-a3ede1c5f985,ResourceVersion:18151314,Generation:0,CreationTimestamp:2019-12-26 14:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 838977c2-933f-4661-a7b7-74c2ad196907 0xc002ce8fd7 0xc002ce8fd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q2grk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2grk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-q2grk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ce9050} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ce9070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:20:17 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-26 14:20:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:20:35.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8606" for this suite. 
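For anyone replaying the proportional-scaling test above by hand, the framework's "is available" / "is not available" verdicts correspond to the pods' Ready conditions, which ordinary kubectl queries expose. A minimal sketch, valid only while the deployment-8606 namespace still exists; the names and labels are taken from the pod dumps above:

# List the deployment's pods with phase and node placement.
kubectl -n deployment-8606 get pods -l name=nginx -o wide

# Compare available vs. desired replicas on the Deployment itself.
kubectl -n deployment-8606 get deployment nginx-deployment \
    -o jsonpath='{.status.availableReplicas}/{.status.replicas}'

# Watch the rollout settle; proportional scaling distributes new replicas
# across the old and new ReplicaSets in proportion to their current sizes.
kubectl -n deployment-8606 rollout status deployment/nginx-deployment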
Dec 26 14:21:39.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:21:39.443: INFO: namespace deployment-8606 deletion completed in 1m0.672096741s • [SLOW TEST:126.379 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:21:39.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 26 14:21:39.501: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:21:49.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2754" for this suite. 
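The websocket test above drives the same pod subresource that kubectl exec talks to; only the transport differs. A minimal sketch of exercising it manually; the pod name is hypothetical, since the suite generates its own:

# Run a command inside the pod; kubectl negotiates the streaming
# protocol against the exec subresource on the API server.
kubectl -n pods-2754 exec some-test-pod -- /bin/sh -c 'echo remote exec works'

Under the hood, both kubectl and the websocket client hit the pod's exec subresource, /api/v1/namespaces/{namespace}/pods/{name}/exec, passing the command and stream options as query parameters.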
Dec 26 14:22:41.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:22:42.272: INFO: namespace pods-2754 deletion completed in 52.302776834s • [SLOW TEST:62.828 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:22:42.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Dec 26 14:22:42.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1159' Dec 26 14:22:45.250: INFO: stderr: "" Dec 26 14:22:45.250: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 26 14:22:45.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1159' Dec 26 14:22:45.563: INFO: stderr: "" Dec 26 14:22:45.563: INFO: stdout: "update-demo-nautilus-5jjmx update-demo-nautilus-m58cc " Dec 26 14:22:45.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jjmx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1159' Dec 26 14:22:45.741: INFO: stderr: "" Dec 26 14:22:45.741: INFO: stdout: "" Dec 26 14:22:45.741: INFO: update-demo-nautilus-5jjmx is created but not running Dec 26 14:22:50.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1159' Dec 26 14:22:50.844: INFO: stderr: "" Dec 26 14:22:50.845: INFO: stdout: "update-demo-nautilus-5jjmx update-demo-nautilus-m58cc " Dec 26 14:22:50.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jjmx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1159' Dec 26 14:22:50.976: INFO: stderr: "" Dec 26 14:22:50.976: INFO: stdout: "" Dec 26 14:22:50.976: INFO: update-demo-nautilus-5jjmx is created but not running Dec 26 14:22:55.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1159' Dec 26 14:22:56.075: INFO: stderr: "" Dec 26 14:22:56.075: INFO: stdout: "update-demo-nautilus-5jjmx update-demo-nautilus-m58cc " Dec 26 14:22:56.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jjmx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1159' Dec 26 14:22:56.176: INFO: stderr: "" Dec 26 14:22:56.176: INFO: stdout: "" Dec 26 14:22:56.176: INFO: update-demo-nautilus-5jjmx is created but not running Dec 26 14:23:01.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1159' Dec 26 14:23:01.365: INFO: stderr: "" Dec 26 14:23:01.365: INFO: stdout: "update-demo-nautilus-5jjmx update-demo-nautilus-m58cc " Dec 26 14:23:01.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jjmx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1159' Dec 26 14:23:01.505: INFO: stderr: "" Dec 26 14:23:01.505: INFO: stdout: "true" Dec 26 14:23:01.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jjmx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1159' Dec 26 14:23:01.606: INFO: stderr: "" Dec 26 14:23:01.606: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 26 14:23:01.606: INFO: validating pod update-demo-nautilus-5jjmx Dec 26 14:23:01.624: INFO: got data: { "image": "nautilus.jpg" } Dec 26 14:23:01.625: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 26 14:23:01.625: INFO: update-demo-nautilus-5jjmx is verified up and running Dec 26 14:23:01.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m58cc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1159' Dec 26 14:23:01.713: INFO: stderr: "" Dec 26 14:23:01.713: INFO: stdout: "true" Dec 26 14:23:01.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m58cc -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1159' Dec 26 14:23:01.808: INFO: stderr: "" Dec 26 14:23:01.808: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 26 14:23:01.808: INFO: validating pod update-demo-nautilus-m58cc Dec 26 14:23:01.847: INFO: got data: { "image": "nautilus.jpg" } Dec 26 14:23:01.847: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 26 14:23:01.847: INFO: update-demo-nautilus-m58cc is verified up and running STEP: scaling down the replication controller Dec 26 14:23:01.849: INFO: scanned /root for discovery docs: Dec 26 14:23:01.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1159' Dec 26 14:23:02.983: INFO: stderr: "" Dec 26 14:23:02.983: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 26 14:23:02.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1159' Dec 26 14:23:03.073: INFO: stderr: "" Dec 26 14:23:03.073: INFO: stdout: "update-demo-nautilus-5jjmx update-demo-nautilus-m58cc " STEP: Replicas for name=update-demo: expected=1 actual=2 Dec 26 14:23:08.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1159' Dec 26 14:23:08.220: INFO: stderr: "" Dec 26 14:23:08.220: INFO: stdout: "update-demo-nautilus-5jjmx update-demo-nautilus-m58cc " STEP: Replicas for name=update-demo: expected=1 actual=2 Dec 26 14:23:13.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1159' Dec 26 14:23:13.318: INFO: stderr: "" Dec 26 14:23:13.318: INFO: stdout: "update-demo-nautilus-5jjmx update-demo-nautilus-m58cc " STEP: Replicas for name=update-demo: expected=1 actual=2 Dec 26 14:23:18.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1159' Dec 26 14:23:18.419: INFO: stderr: "" Dec 26 14:23:18.419: INFO: stdout: "update-demo-nautilus-m58cc " Dec 26 14:23:18.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m58cc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1159' Dec 26 14:23:18.540: INFO: stderr: "" Dec 26 14:23:18.541: INFO: stdout: "true" Dec 26 14:23:18.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m58cc -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1159' Dec 26 14:23:18.714: INFO: stderr: "" Dec 26 14:23:18.714: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 26 14:23:18.714: INFO: validating pod update-demo-nautilus-m58cc Dec 26 14:23:18.731: INFO: got data: { "image": "nautilus.jpg" } Dec 26 14:23:18.731: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 26 14:23:18.731: INFO: update-demo-nautilus-m58cc is verified up and running STEP: scaling up the replication controller Dec 26 14:23:18.733: INFO: scanned /root for discovery docs: Dec 26 14:23:18.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1159' Dec 26 14:23:21.810: INFO: stderr: "" Dec 26 14:23:21.810: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 26 14:23:21.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1159' Dec 26 14:23:22.097: INFO: stderr: "" Dec 26 14:23:22.097: INFO: stdout: "update-demo-nautilus-8qh7b update-demo-nautilus-m58cc " Dec 26 14:23:22.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8qh7b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1159' Dec 26 14:23:23.198: INFO: stderr: "" Dec 26 14:23:23.198: INFO: stdout: "" Dec 26 14:23:23.198: INFO: update-demo-nautilus-8qh7b is created but not running Dec 26 14:23:28.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1159' Dec 26 14:23:28.297: INFO: stderr: "" Dec 26 14:23:28.297: INFO: stdout: "update-demo-nautilus-8qh7b update-demo-nautilus-m58cc " Dec 26 14:23:28.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8qh7b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1159' Dec 26 14:23:28.386: INFO: stderr: "" Dec 26 14:23:28.386: INFO: stdout: "" Dec 26 14:23:28.386: INFO: update-demo-nautilus-8qh7b is created but not running Dec 26 14:23:33.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1159' Dec 26 14:23:33.569: INFO: stderr: "" Dec 26 14:23:33.569: INFO: stdout: "update-demo-nautilus-8qh7b update-demo-nautilus-m58cc " Dec 26 14:23:33.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8qh7b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1159' Dec 26 14:23:33.684: INFO: stderr: "" Dec 26 14:23:33.684: INFO: stdout: "true" Dec 26 14:23:33.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8qh7b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1159' Dec 26 14:23:33.816: INFO: stderr: "" Dec 26 14:23:33.816: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 26 14:23:33.816: INFO: validating pod update-demo-nautilus-8qh7b Dec 26 14:23:33.827: INFO: got data: { "image": "nautilus.jpg" } Dec 26 14:23:33.827: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 26 14:23:33.827: INFO: update-demo-nautilus-8qh7b is verified up and running Dec 26 14:23:33.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m58cc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1159' Dec 26 14:23:33.942: INFO: stderr: "" Dec 26 14:23:33.942: INFO: stdout: "true" Dec 26 14:23:33.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m58cc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1159' Dec 26 14:23:34.039: INFO: stderr: "" Dec 26 14:23:34.039: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 26 14:23:34.039: INFO: validating pod update-demo-nautilus-m58cc Dec 26 14:23:34.052: INFO: got data: { "image": "nautilus.jpg" } Dec 26 14:23:34.052: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 26 14:23:34.052: INFO: update-demo-nautilus-m58cc is verified up and running STEP: using delete to clean up resources Dec 26 14:23:34.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1159' Dec 26 14:23:34.197: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 26 14:23:34.197: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Dec 26 14:23:34.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1159' Dec 26 14:23:34.317: INFO: stderr: "No resources found.\n" Dec 26 14:23:34.317: INFO: stdout: "" Dec 26 14:23:34.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1159 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 26 14:23:34.481: INFO: stderr: "" Dec 26 14:23:34.482: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:23:34.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1159" for this suite. 
Dec 26 14:23:58.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:23:58.688: INFO: namespace kubectl-1159 deletion completed in 24.175131048s • [SLOW TEST:76.416 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:23:58.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-80ec002f-4953-4247-8df7-4106021c4794 Dec 26 14:23:58.852: INFO: Pod name my-hostname-basic-80ec002f-4953-4247-8df7-4106021c4794: Found 0 pods out of 1 Dec 26 14:24:03.880: INFO: Pod name my-hostname-basic-80ec002f-4953-4247-8df7-4106021c4794: Found 1 pods out of 1 Dec 26 14:24:03.880: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-80ec002f-4953-4247-8df7-4106021c4794" are running Dec 26 14:24:07.911: INFO: Pod "my-hostname-basic-80ec002f-4953-4247-8df7-4106021c4794-pftcz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-26 14:23:58 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-26 14:23:58 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-80ec002f-4953-4247-8df7-4106021c4794]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-26 14:23:58 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-80ec002f-4953-4247-8df7-4106021c4794]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-26 14:23:58 +0000 UTC Reason: Message:}]) Dec 26 14:24:07.912: INFO: Trying to dial the pod Dec 26 14:24:12.952: INFO: Controller my-hostname-basic-80ec002f-4953-4247-8df7-4106021c4794: Got expected result from replica 1 [my-hostname-basic-80ec002f-4953-4247-8df7-4106021c4794-pftcz]: "my-hostname-basic-80ec002f-4953-4247-8df7-4106021c4794-pftcz", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:24:12.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5825" for this suite. 
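The ReplicationController test above dials the lone replica and expects it to answer with its own pod name. A minimal sketch of the same probe done by hand; the container port (9376) is an assumption based on the usual serve-hostname convention, since the log does not record it:

# Forward a local port to the replica and ask it for its hostname.
kubectl -n replication-controller-5825 port-forward \
    my-hostname-basic-80ec002f-4953-4247-8df7-4106021c4794-pftcz 9376:9376 &
curl -s http://127.0.0.1:9376/   # expected: the pod's own name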
Dec 26 14:24:19.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:24:19.069: INFO: namespace replication-controller-5825 deletion completed in 6.107698132s • [SLOW TEST:20.379 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:24:19.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-89 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-89 STEP: Deleting pre-stop pod Dec 26 14:24:44.264: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:24:44.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-89" for this suite. 
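The "prestop": 1 entry the server reports above comes from a preStop lifecycle hook on the tester pod: the kubelet runs the hook before sending SIGTERM to the container, giving it a chance to phone home. A minimal sketch of the shape of such a pod; the image, command, and wget target are illustrative stand-ins, not the suite's actual manifest:

kubectl -n prestop-89 create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: tester
spec:
  containers:
  - name: tester
    image: docker.io/library/busybox:1.29
    command: ["sleep", "600"]
    lifecycle:
      preStop:
        exec:
          # Hypothetical endpoint standing in for the server pod's real one.
          command: ["wget", "-qO-", "http://server:8080/prestop"]
EOF

Deleting the pod (kubectl -n prestop-89 delete pod tester) then runs the hook before the container receives SIGTERM, which is exactly the ordering the test asserts.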
Dec 26 14:25:24.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:25:24.442: INFO: namespace prestop-89 deletion completed in 40.144352533s • [SLOW TEST:65.373 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:25:24.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Dec 26 14:25:33.619: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:25:33.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4998" for this suite. 
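Adoption and release in the ReplicaSet test above are both visible in the pod's ownerReferences: the controller adds a controller reference when it adopts the orphan and drops it once the pod's 'name' label no longer matches the selector. A minimal sketch using the pod name from the log; the replacement label value is arbitrary:

# After adoption, the pod is owned by the ReplicaSet.
kubectl -n replicaset-4998 get pod pod-adoption-release \
    -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'

# Overwriting the matched label triggers the release: the ReplicaSet
# removes the ownerReference and spins up a replacement pod.
kubectl -n replicaset-4998 label pod pod-adoption-release name=released --overwrite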
Dec 26 14:26:11.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 26 14:26:11.868: INFO: namespace replicaset-4998 deletion completed in 38.198200564s • [SLOW TEST:47.426 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 26 14:26:11.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 26 14:26:11.953: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 26 14:26:24.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5626" for this suite. 
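Like the exec case earlier, the websocket log test above reads an ordinary pod subresource, /api/v1/namespaces/{namespace}/pods/{name}/log; the websocket transport only changes how the byte stream is carried. A minimal sketch with a hypothetical pod name:

# Same data the websocket client receives, fetched through kubectl.
kubectl -n pods-5626 logs some-test-pod

# Streaming variant (the follow=true query parameter on the subresource).
kubectl -n pods-5626 logs -f some-test-pod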
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:26:11.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 26 14:26:11.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:26:24.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5626" for this suite.
Dec 26 14:27:12.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:27:12.254: INFO: namespace pods-5626 deletion completed in 48.134804612s

• [SLOW TEST:60.385 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
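The log subresource read here is the same one kubectl logs uses; this spec simply fetches it over a websocket-upgraded connection to the API server rather than plain HTTP streaming. A pod that produces something to read, with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: logs-ws-demo                # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "echo hello; sleep 600"]

The endpoint in question is /api/v1/namespaces/<ns>/pods/logs-ws-demo/log, which can also be fetched non-interactively with kubectl get --raw.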
PTR)" && test -n "$$check" && echo OK > /results/10.96.132.130_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9033.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9033.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9033.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9033.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9033.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9033.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9033.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9033.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9033.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9033.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9033.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9033.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9033.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 130.132.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.132.130_udp@PTR;check="$$(dig +tcp +noall +answer +search 130.132.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.132.130_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 26 14:27:26.562: INFO: Unable to read wheezy_udp@dns-test-service.dns-9033.svc.cluster.local from pod dns-9033/dns-test-58362a26-1000-4c41-bfab-aacd62938e57: the server could not find the requested resource (get pods dns-test-58362a26-1000-4c41-bfab-aacd62938e57) Dec 26 14:27:26.567: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9033.svc.cluster.local from pod dns-9033/dns-test-58362a26-1000-4c41-bfab-aacd62938e57: the server could not find the requested resource (get pods dns-test-58362a26-1000-4c41-bfab-aacd62938e57) Dec 26 14:27:26.571: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9033.svc.cluster.local from pod dns-9033/dns-test-58362a26-1000-4c41-bfab-aacd62938e57: the server could not find the requested resource (get pods dns-test-58362a26-1000-4c41-bfab-aacd62938e57) Dec 26 14:27:26.575: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9033.svc.cluster.local from pod dns-9033/dns-test-58362a26-1000-4c41-bfab-aacd62938e57: the server could not find the requested resource (get pods dns-test-58362a26-1000-4c41-bfab-aacd62938e57) Dec 26 14:27:26.578: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-9033.svc.cluster.local from pod dns-9033/dns-test-58362a26-1000-4c41-bfab-aacd62938e57: the server could not find the requested resource (get pods dns-test-58362a26-1000-4c41-bfab-aacd62938e57) Dec 26 14:27:26.583: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-9033.svc.cluster.local from pod dns-9033/dns-test-58362a26-1000-4c41-bfab-aacd62938e57: the server could not find the requested resource (get pods dns-test-58362a26-1000-4c41-bfab-aacd62938e57) Dec 26 14:27:26.586: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9033/dns-test-58362a26-1000-4c41-bfab-aacd62938e57: the server could not find the requested resource (get pods dns-test-58362a26-1000-4c41-bfab-aacd62938e57) Dec 26 14:27:26.590: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9033/dns-test-58362a26-1000-4c41-bfab-aacd62938e57: the server could not find the requested resource (get pods dns-test-58362a26-1000-4c41-bfab-aacd62938e57) Dec 26 14:27:26.593: INFO: Unable to read 10.96.132.130_udp@PTR from pod dns-9033/dns-test-58362a26-1000-4c41-bfab-aacd62938e57: the server could not find the requested resource (get pods dns-test-58362a26-1000-4c41-bfab-aacd62938e57) Dec 26 14:27:26.596: INFO: Unable to read 10.96.132.130_tcp@PTR from pod dns-9033/dns-test-58362a26-1000-4c41-bfab-aacd62938e57: the server could not find the requested resource (get pods dns-test-58362a26-1000-4c41-bfab-aacd62938e57) Dec 26 14:27:26.601: INFO: Unable to read jessie_udp@dns-test-service.dns-9033.svc.cluster.local from pod dns-9033/dns-test-58362a26-1000-4c41-bfab-aacd62938e57: the server could not find the requested resource (get pods dns-test-58362a26-1000-4c41-bfab-aacd62938e57) Dec 26 14:27:26.605: INFO: Unable to read jessie_tcp@dns-test-service.dns-9033.svc.cluster.local from pod dns-9033/dns-test-58362a26-1000-4c41-bfab-aacd62938e57: the server could not find the requested resource (get pods dns-test-58362a26-1000-4c41-bfab-aacd62938e57) Dec 26 14:27:26.610: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9033.svc.cluster.local from pod dns-9033/dns-test-58362a26-1000-4c41-bfab-aacd62938e57: the 
Dec 26 14:27:26.617: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9033.svc.cluster.local from pod dns-9033/dns-test-58362a26-1000-4c41-bfab-aacd62938e57: the server could not find the requested resource (get pods dns-test-58362a26-1000-4c41-bfab-aacd62938e57)
Dec 26 14:27:26.622: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-9033.svc.cluster.local from pod dns-9033/dns-test-58362a26-1000-4c41-bfab-aacd62938e57: the server could not find the requested resource (get pods dns-test-58362a26-1000-4c41-bfab-aacd62938e57)
Dec 26 14:27:26.624: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-9033.svc.cluster.local from pod dns-9033/dns-test-58362a26-1000-4c41-bfab-aacd62938e57: the server could not find the requested resource (get pods dns-test-58362a26-1000-4c41-bfab-aacd62938e57)
Dec 26 14:27:26.627: INFO: Unable to read jessie_udp@PodARecord from pod dns-9033/dns-test-58362a26-1000-4c41-bfab-aacd62938e57: the server could not find the requested resource (get pods dns-test-58362a26-1000-4c41-bfab-aacd62938e57)
Dec 26 14:27:26.629: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9033/dns-test-58362a26-1000-4c41-bfab-aacd62938e57: the server could not find the requested resource (get pods dns-test-58362a26-1000-4c41-bfab-aacd62938e57)
Dec 26 14:27:26.632: INFO: Unable to read 10.96.132.130_udp@PTR from pod dns-9033/dns-test-58362a26-1000-4c41-bfab-aacd62938e57: the server could not find the requested resource (get pods dns-test-58362a26-1000-4c41-bfab-aacd62938e57)
Dec 26 14:27:26.635: INFO: Unable to read 10.96.132.130_tcp@PTR from pod dns-9033/dns-test-58362a26-1000-4c41-bfab-aacd62938e57: the server could not find the requested resource (get pods dns-test-58362a26-1000-4c41-bfab-aacd62938e57)
Dec 26 14:27:26.635: INFO: Lookups using dns-9033/dns-test-58362a26-1000-4c41-bfab-aacd62938e57 failed for: [wheezy_udp@dns-test-service.dns-9033.svc.cluster.local wheezy_tcp@dns-test-service.dns-9033.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9033.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9033.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-9033.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-9033.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.96.132.130_udp@PTR 10.96.132.130_tcp@PTR jessie_udp@dns-test-service.dns-9033.svc.cluster.local jessie_tcp@dns-test-service.dns-9033.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9033.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9033.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-9033.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-9033.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.96.132.130_udp@PTR 10.96.132.130_tcp@PTR]
Dec 26 14:27:31.806: INFO: DNS probes using dns-9033/dns-test-58362a26-1000-4c41-bfab-aacd62938e57 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:27:33.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9033" for this suite.
Dec 26 14:27:39.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:27:39.973: INFO: namespace dns-9033 deletion completed in 6.182694305s

• [SLOW TEST:27.719 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
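The wheezy/jessie probe loops above assert A, SRV, PTR and pod A-record resolution for a headless service. A sketch of the kind of service the test creates; the service name and SRV port name match the queries in the log, the selector is illustrative.

apiVersion: v1
kind: Service
metadata:
  name: dns-test-service
  namespace: dns-9033
spec:
  clusterIP: None            # headless: DNS answers with pod IPs directly
  selector:
    app: dns-demo            # illustrative selector
  ports:
  - name: http               # yields the _http._tcp SRV records queried above
    protocol: TCP
    port: 80

From any pod in the cluster, dig +search dns-test-service.dns-9033.svc.cluster.local A and dig _http._tcp.dns-test-service.dns-9033.svc.cluster.local SRV should then answer, which is exactly what the probe loops check once per second; the early "Unable to read" lines are expected while the records propagate.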
SSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:27:39.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:28:40.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6492" for this suite.
Dec 26 14:29:02.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:29:02.246: INFO: namespace container-probe-6492 deletion completed in 22.112531263s

• [SLOW TEST:82.272 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
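The invariant this spec relies on: a failing readiness probe keeps the pod out of Ready (and out of Service endpoints) but, unlike a liveness probe, never restarts the container. A minimal always-failing example with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: never-ready-demo            # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    command: ["/bin/sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]     # always fails, so Ready stays false
      initialDelaySeconds: 5
      periodSeconds: 5

The pod keeps running with READY 0/1 and restart count 0, which is exactly what the test watches for over its observation window.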
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:29:02.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-0bdc7bcd-a3bb-4189-95be-b015f6acbd98
STEP: Creating a pod to test consume secrets
Dec 26 14:29:02.370: INFO: Waiting up to 5m0s for pod "pod-secrets-9f733bb9-2098-4184-afb8-c76cc4989a9c" in namespace "secrets-5979" to be "success or failure"
Dec 26 14:29:02.375: INFO: Pod "pod-secrets-9f733bb9-2098-4184-afb8-c76cc4989a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.798676ms
Dec 26 14:29:05.262: INFO: Pod "pod-secrets-9f733bb9-2098-4184-afb8-c76cc4989a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.891990625s
Dec 26 14:29:07.268: INFO: Pod "pod-secrets-9f733bb9-2098-4184-afb8-c76cc4989a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.897598809s
Dec 26 14:29:09.275: INFO: Pod "pod-secrets-9f733bb9-2098-4184-afb8-c76cc4989a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.904451337s
Dec 26 14:29:11.281: INFO: Pod "pod-secrets-9f733bb9-2098-4184-afb8-c76cc4989a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.911021324s
Dec 26 14:29:13.293: INFO: Pod "pod-secrets-9f733bb9-2098-4184-afb8-c76cc4989a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.922788967s
Dec 26 14:29:15.297: INFO: Pod "pod-secrets-9f733bb9-2098-4184-afb8-c76cc4989a9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.926743082s
STEP: Saw pod success
Dec 26 14:29:15.297: INFO: Pod "pod-secrets-9f733bb9-2098-4184-afb8-c76cc4989a9c" satisfied condition "success or failure"
Dec 26 14:29:15.299: INFO: Trying to get logs from node iruya-node pod pod-secrets-9f733bb9-2098-4184-afb8-c76cc4989a9c container secret-env-test:
STEP: delete the pod
Dec 26 14:29:15.531: INFO: Waiting for pod pod-secrets-9f733bb9-2098-4184-afb8-c76cc4989a9c to disappear
Dec 26 14:29:15.579: INFO: Pod pod-secrets-9f733bb9-2098-4184-afb8-c76cc4989a9c no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:29:15.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5979" for this suite.
Dec 26 14:29:21.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:29:21.710: INFO: namespace secrets-5979 deletion completed in 6.12464583s

• [SLOW TEST:19.463 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
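The consumption pattern tested above is env-var injection via secretKeyRef. A self-contained sketch; secret name, key, and variable name are illustrative:

apiVersion: v1
kind: Secret
metadata:
  name: secret-env-demo             # illustrative name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["/bin/sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo     # the Secret above
          key: data-1

The pod's log should then contain value-1; the e2e test asserts the equivalent before deleting the pod.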
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:29:21.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 26 14:29:38.610: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 26 14:29:38.656: INFO: Pod pod-with-prestop-http-hook still exists
Dec 26 14:29:40.657: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 26 14:29:40.666: INFO: Pod pod-with-prestop-http-hook still exists
Dec 26 14:29:42.657: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 26 14:29:42.665: INFO: Pod pod-with-prestop-http-hook still exists
Dec 26 14:29:44.657: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 26 14:29:44.664: INFO: Pod pod-with-prestop-http-hook still exists
Dec 26 14:29:46.657: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 26 14:29:46.663: INFO: Pod pod-with-prestop-http-hook still exists
Dec 26 14:29:48.657: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 26 14:29:48.665: INFO: Pod pod-with-prestop-http-hook still exists
Dec 26 14:29:50.657: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 26 14:29:50.664: INFO: Pod pod-with-prestop-http-hook still exists
Dec 26 14:29:52.657: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 26 14:29:52.676: INFO: Pod pod-with-prestop-http-hook still exists
Dec 26 14:29:54.657: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 26 14:29:54.669: INFO: Pod pod-with-prestop-http-hook still exists
Dec 26 14:29:56.657: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 26 14:29:56.689: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:29:56.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4028" for this suite.
Dec 26 14:30:18.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:30:18.845: INFO: namespace container-lifecycle-hook-4028 deletion completed in 22.11849496s

• [SLOW TEST:57.135 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
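This variant uses an httpGet preStop hook: on deletion, the kubelet issues an HTTP GET to the address in the hook spec, and the handler pod created in the BeforeEach above records the call ("check prestop hook"). A sketch; host, port, and path are assumptions, since the log does not show the manifest:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook  # name as in the log
spec:
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      preStop:
        httpGet:
          host: 10.32.0.4           # assumed: IP of the handler pod
          port: 8080                # assumed: handler port
          path: /echo?msg=prestop   # assumed: handler endpoint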
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:30:18.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5999
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 26 14:30:18.993: INFO: Found 0 stateful pods, waiting for 3
Dec 26 14:30:29.000: INFO: Found 2 stateful pods, waiting for 3
Dec 26 14:30:39.009: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 14:30:39.009: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 14:30:39.009: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 26 14:30:49.004: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 14:30:49.004: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 14:30:49.004: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 14:30:49.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5999 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 26 14:30:49.554: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 26 14:30:49.554: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 26 14:30:49.554: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 26 14:30:59.616: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 26 14:31:09.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5999 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 14:31:10.141: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 26 14:31:10.141: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 26 14:31:10.141: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Dec 26 14:31:20.181: INFO: Waiting for StatefulSet statefulset-5999/ss2 to complete update
Dec 26 14:31:20.182: INFO: Waiting for Pod statefulset-5999/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 26 14:31:20.182: INFO: Waiting for Pod statefulset-5999/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 26 14:31:20.182: INFO: Waiting for Pod statefulset-5999/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 26 14:31:30.194: INFO: Waiting for StatefulSet statefulset-5999/ss2 to complete update
Dec 26 14:31:30.195: INFO: Waiting for Pod statefulset-5999/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 26 14:31:30.195: INFO: Waiting for Pod statefulset-5999/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 26 14:31:40.199: INFO: Waiting for StatefulSet statefulset-5999/ss2 to complete update
Dec 26 14:31:40.199: INFO: Waiting for Pod statefulset-5999/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 26 14:31:40.199: INFO: Waiting for Pod statefulset-5999/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 26 14:31:50.208: INFO: Waiting for StatefulSet statefulset-5999/ss2 to complete update
Dec 26 14:31:50.208: INFO: Waiting for Pod statefulset-5999/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 26 14:32:00.249: INFO: Waiting for StatefulSet statefulset-5999/ss2 to complete update
Dec 26 14:32:00.249: INFO: Waiting for Pod statefulset-5999/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 26 14:32:10.248: INFO: Waiting for StatefulSet statefulset-5999/ss2 to complete update
STEP: Rolling back to a previous revision
Dec 26 14:32:20.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5999 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 26 14:32:20.716: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 26 14:32:20.716: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 26 14:32:20.716: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Dec 26 14:32:30.788: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 26 14:32:41.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5999 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 14:32:41.946: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 26 14:32:41.946: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 26 14:32:41.946: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Dec 26 14:32:42.057: INFO: Waiting for StatefulSet statefulset-5999/ss2 to complete update
Dec 26 14:32:42.057: INFO: Waiting for Pod statefulset-5999/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 26 14:32:42.057: INFO: Waiting for Pod statefulset-5999/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 26 14:32:42.057: INFO: Waiting for Pod statefulset-5999/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 26 14:32:52.068: INFO: Waiting for StatefulSet statefulset-5999/ss2 to complete update
Dec 26 14:32:52.068: INFO: Waiting for Pod statefulset-5999/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 26 14:32:52.068: INFO: Waiting for Pod statefulset-5999/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 26 14:32:52.068: INFO: Waiting for Pod statefulset-5999/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 26 14:33:02.085: INFO: Waiting for StatefulSet statefulset-5999/ss2 to complete update
Dec 26 14:33:02.085: INFO: Waiting for Pod statefulset-5999/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 26 14:33:02.085: INFO: Waiting for Pod statefulset-5999/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 26 14:33:12.112: INFO: Waiting for StatefulSet statefulset-5999/ss2 to complete update
Dec 26 14:33:12.112: INFO: Waiting for Pod statefulset-5999/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 26 14:33:22.075: INFO: Waiting for StatefulSet statefulset-5999/ss2 to complete update
Dec 26 14:33:22.075: INFO: Waiting for Pod statefulset-5999/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 26 14:33:32.078: INFO: Waiting for StatefulSet statefulset-5999/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 26 14:33:42.068: INFO: Deleting all statefulset in ns statefulset-5999
Dec 26 14:33:42.073: INFO: Scaling statefulset ss2 to 0
Dec 26 14:34:22.112: INFO: Waiting for statefulset status.replicas updated to 0
Dec 26 14:34:22.115: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:34:22.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5999" for this suite.
Dec 26 14:34:30.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:34:30.316: INFO: namespace statefulset-5999 deletion completed in 8.181469738s

• [SLOW TEST:251.470 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
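The rolling update and rollback above are driven entirely by changes to .spec.template under the RollingUpdate strategy: each template change produces a new controller revision (ss2-6c5cd755cd and ss2-7c9b54fd4c in the log), and pods are replaced in reverse ordinal order (ss2-2, then ss2-1, then ss2-0). A sketch of the StatefulSet shape; name, serviceName, and image follow the log, the labels are illustrative:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test                 # the log shows "Creating service test"
  replicas: 3
  selector:
    matchLabels:
      app: ss2-demo                 # illustrative label
  updateStrategy:
    type: RollingUpdate             # replace pods in reverse ordinal order
  template:
    metadata:
      labels:
        app: ss2-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine

Patching the image to nginx:1.15-alpine rolls forward; restoring the previous template, which is what "Rolling back to a previous revision" does here, converges the pods back onto the earlier revision the same way.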
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:34:30.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 26 14:34:30.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-446'
Dec 26 14:34:32.863: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 26 14:34:32.863: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Dec 26 14:34:32.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-446'
Dec 26 14:34:33.204: INFO: stderr: ""
Dec 26 14:34:33.204: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:34:33.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-446" for this suite.
Dec 26 14:34:39.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:34:39.415: INFO: namespace kubectl-446 deletion completed in 6.186076972s

• [SLOW TEST:9.099 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
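The deprecated --generator=job/v1 invocation above is equivalent to creating this Job directly; name and image are taken from the log, the container name is an assumption:

apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      restartPolicy: OnFailure      # what --restart=OnFailure selects
      containers:
      - name: e2e-test-nginx-job    # assumed container name
        image: docker.io/library/nginx:1.14-alpine

kubectl create -f job.yaml (or kubectl create job) is the non-deprecated route the warning in the log points to.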
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:34:39.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-bdd8ab5b-aa4f-41a7-89e5-bbd020351e5d
STEP: Creating a pod to test consume configMaps
Dec 26 14:34:39.612: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-90138cf3-e80b-44ba-b946-aea99d250551" in namespace "projected-7733" to be "success or failure"
Dec 26 14:34:39.629: INFO: Pod "pod-projected-configmaps-90138cf3-e80b-44ba-b946-aea99d250551": Phase="Pending", Reason="", readiness=false. Elapsed: 16.941475ms
Dec 26 14:34:41.642: INFO: Pod "pod-projected-configmaps-90138cf3-e80b-44ba-b946-aea99d250551": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030189561s
Dec 26 14:34:43.655: INFO: Pod "pod-projected-configmaps-90138cf3-e80b-44ba-b946-aea99d250551": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042547279s
Dec 26 14:34:45.692: INFO: Pod "pod-projected-configmaps-90138cf3-e80b-44ba-b946-aea99d250551": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07963238s
Dec 26 14:34:47.792: INFO: Pod "pod-projected-configmaps-90138cf3-e80b-44ba-b946-aea99d250551": Phase="Pending", Reason="", readiness=false. Elapsed: 8.17943683s
Dec 26 14:34:49.961: INFO: Pod "pod-projected-configmaps-90138cf3-e80b-44ba-b946-aea99d250551": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.348710881s
STEP: Saw pod success
Dec 26 14:34:49.961: INFO: Pod "pod-projected-configmaps-90138cf3-e80b-44ba-b946-aea99d250551" satisfied condition "success or failure"
Dec 26 14:34:49.966: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-90138cf3-e80b-44ba-b946-aea99d250551 container projected-configmap-volume-test:
STEP: delete the pod
Dec 26 14:34:50.057: INFO: Waiting for pod pod-projected-configmaps-90138cf3-e80b-44ba-b946-aea99d250551 to disappear
Dec 26 14:34:50.143: INFO: Pod pod-projected-configmaps-90138cf3-e80b-44ba-b946-aea99d250551 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:34:50.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7733" for this suite.
Dec 26 14:34:56.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:34:56.379: INFO: namespace projected-7733 deletion completed in 6.203755187s

• [SLOW TEST:16.964 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
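"Consumable in multiple volumes" here means the same ConfigMap projected at two mount points of one pod. A sketch with illustrative names and keys:

apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
    volumeMounts:
    - name: vol-1
      mountPath: /etc/projected-1
    - name: vol-2
      mountPath: /etc/projected-2
  volumes:
  - name: vol-1
    projected:
      sources:
      - configMap:
          name: projected-cm-data   # illustrative ConfigMap holding key data-1
  - name: vol-2
    projected:
      sources:
      - configMap:
          name: projected-cm-data   # same ConfigMap, second mount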
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:34:56.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 26 14:35:05.213: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:35:05.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7020" for this suite.
Dec 26 14:35:11.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:35:11.540: INFO: namespace container-runtime-7020 deletion completed in 6.240786294s

• [SLOW TEST:15.160 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
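The "Expected: &{DONE} to match ... DONE" line above is the kubelet reading the container's termination message from a custom path while the container runs as a non-root UID. A sketch; the name, UID, and path are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom"]
    terminationMessagePath: /dev/termination-custom   # non-default path
    securityContext:
      runAsUser: 1000               # non-root, as the spec title requires

After the container exits, the message surfaces in .status.containerStatuses[0].state.terminated.message, which is what the test compares against DONE.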
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:35:11.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:35:19.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6924" for this suite.
Dec 26 14:36:01.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:36:01.911: INFO: namespace kubelet-test-6924 deletion completed in 42.201277315s

• [SLOW TEST:50.371 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
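hostAliases entries are written by the kubelet into the pod's /etc/hosts, which is what this spec verifies. A sketch with illustrative names and addresses:

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo            # illustrative name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "123.45.67.89"              # illustrative entry
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/hosts"]

The pod's log then shows a "123.45.67.89  foo.local  bar.local" line alongside the entries the kubelet manages itself.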
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:36:01.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-2396/configmap-test-261575b2-4c96-4f00-8402-4028b4e19b88
STEP: Creating a pod to test consume configMaps
Dec 26 14:36:02.022: INFO: Waiting up to 5m0s for pod "pod-configmaps-7367319a-ad52-4c0e-a467-b458736e8f21" in namespace "configmap-2396" to be "success or failure"
Dec 26 14:36:02.032: INFO: Pod "pod-configmaps-7367319a-ad52-4c0e-a467-b458736e8f21": Phase="Pending", Reason="", readiness=false. Elapsed: 9.696847ms
Dec 26 14:36:04.044: INFO: Pod "pod-configmaps-7367319a-ad52-4c0e-a467-b458736e8f21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022054584s
Dec 26 14:36:06.052: INFO: Pod "pod-configmaps-7367319a-ad52-4c0e-a467-b458736e8f21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029356562s
Dec 26 14:36:08.059: INFO: Pod "pod-configmaps-7367319a-ad52-4c0e-a467-b458736e8f21": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037198361s
Dec 26 14:36:10.067: INFO: Pod "pod-configmaps-7367319a-ad52-4c0e-a467-b458736e8f21": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044813059s
Dec 26 14:36:12.075: INFO: Pod "pod-configmaps-7367319a-ad52-4c0e-a467-b458736e8f21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.05322582s
STEP: Saw pod success
Dec 26 14:36:12.076: INFO: Pod "pod-configmaps-7367319a-ad52-4c0e-a467-b458736e8f21" satisfied condition "success or failure"
Dec 26 14:36:12.079: INFO: Trying to get logs from node iruya-node pod pod-configmaps-7367319a-ad52-4c0e-a467-b458736e8f21 container env-test:
STEP: delete the pod
Dec 26 14:36:12.132: INFO: Waiting for pod pod-configmaps-7367319a-ad52-4c0e-a467-b458736e8f21 to disappear
Dec 26 14:36:12.137: INFO: Pod pod-configmaps-7367319a-ad52-4c0e-a467-b458736e8f21 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:36:12.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2396" for this suite.
Dec 26 14:36:18.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:36:18.282: INFO: namespace configmap-2396 deletion completed in 6.139112468s

• [SLOW TEST:16.370 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
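Mirror image of the Secrets-in-env spec earlier in this run: here the value comes from a ConfigMap key. A sketch, names and keys illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-env-demo          # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["/bin/sh", "-c", "echo $DATA_1"]
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-env-demo  # the ConfigMap above
          key: data-1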
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:36:18.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 26 14:36:18.475: INFO: Create a RollingUpdate DaemonSet
Dec 26 14:36:18.487: INFO: Check that daemon pods launch on every node of the cluster
Dec 26 14:36:18.559: INFO: Number of nodes with available pods: 0
Dec 26 14:36:18.559: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:36:20.389: INFO: Number of nodes with available pods: 0
Dec 26 14:36:20.389: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:36:20.787: INFO: Number of nodes with available pods: 0
Dec 26 14:36:20.787: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:36:21.706: INFO: Number of nodes with available pods: 0
Dec 26 14:36:21.706: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:36:22.577: INFO: Number of nodes with available pods: 0
Dec 26 14:36:22.577: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:36:23.577: INFO: Number of nodes with available pods: 0
Dec 26 14:36:23.578: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:36:26.962: INFO: Number of nodes with available pods: 0
Dec 26 14:36:26.963: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:36:27.612: INFO: Number of nodes with available pods: 0
Dec 26 14:36:27.613: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:36:28.615: INFO: Number of nodes with available pods: 0
Dec 26 14:36:28.615: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:36:29.573: INFO: Number of nodes with available pods: 2
Dec 26 14:36:29.573: INFO: Number of running nodes: 2, number of available pods: 2
Dec 26 14:36:29.573: INFO: Update the DaemonSet to trigger a rollout
Dec 26 14:36:29.586: INFO: Updating DaemonSet daemon-set
Dec 26 14:36:36.656: INFO: Roll back the DaemonSet before rollout is complete
Dec 26 14:36:36.688: INFO: Updating DaemonSet daemon-set
Dec 26 14:36:36.689: INFO: Make sure DaemonSet rollback is complete
Dec 26 14:36:37.045: INFO: Wrong image for pod: daemon-set-mxjlx. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 26 14:36:37.045: INFO: Pod daemon-set-mxjlx is not available
Dec 26 14:36:38.069: INFO: Wrong image for pod: daemon-set-mxjlx. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 26 14:36:38.070: INFO: Pod daemon-set-mxjlx is not available
Dec 26 14:36:39.074: INFO: Wrong image for pod: daemon-set-mxjlx. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 26 14:36:39.074: INFO: Pod daemon-set-mxjlx is not available
Dec 26 14:36:40.066: INFO: Wrong image for pod: daemon-set-mxjlx. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 26 14:36:40.066: INFO: Pod daemon-set-mxjlx is not available
Dec 26 14:36:41.582: INFO: Wrong image for pod: daemon-set-mxjlx. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 26 14:36:41.582: INFO: Pod daemon-set-mxjlx is not available
Dec 26 14:36:42.072: INFO: Wrong image for pod: daemon-set-mxjlx. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 26 14:36:42.072: INFO: Pod daemon-set-mxjlx is not available
Dec 26 14:36:43.302: INFO: Pod daemon-set-lcvrc is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9997, will wait for the garbage collector to delete the pods
Dec 26 14:36:43.382: INFO: Deleting DaemonSet.extensions daemon-set took: 12.093384ms
Dec 26 14:36:43.783: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.592092ms
Dec 26 14:36:56.594: INFO: Number of nodes with available pods: 0
Dec 26 14:36:56.594: INFO: Number of running nodes: 0, number of available pods: 0
Dec 26 14:36:56.620: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9997/daemonsets","resourceVersion":"18153742"},"items":null}
Dec 26 14:36:56.626: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9997/pods","resourceVersion":"18153742"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:36:56.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9997" for this suite.
Dec 26 14:37:04.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:37:04.748: INFO: namespace daemonsets-9997 deletion completed in 8.100742049s

• [SLOW TEST:46.466 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
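The rollback scenario above: push a RollingUpdate DaemonSet to an unresolvable image (foo:non-existent in the log), then roll back before the rollout completes. Only the pods that never became available are replaced, hence "without unnecessary restarts". A sketch of the DaemonSet shape; the name follows the log, the labels are illustrative:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                  # name as in the log
spec:
  selector:
    matchLabels:
      app: daemon-set-demo          # illustrative label
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine

kubectl rollout undo daemonset/daemon-set is the CLI equivalent of the rollback the test performs through the API.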
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:37:04.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Dec 26 14:37:04.842: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 26 14:37:09.849: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:37:10.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6704" for this suite.
Dec 26 14:37:16.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:37:17.071: INFO: namespace replication-controller-6704 deletion completed in 6.121483882s

• [SLOW TEST:12.323 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:37:17.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-05f2cfe1-2dee-4a6c-bad9-904308287e2e
STEP: Creating a pod to test consume secrets
Dec 26 14:37:17.306: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-901f080e-7f44-4fda-a8d3-68dbc612cdfe" in namespace "projected-2923" to be "success or failure"
Dec 26 14:37:17.319: INFO: Pod "pod-projected-secrets-901f080e-7f44-4fda-a8d3-68dbc612cdfe": Phase="Pending", Reason="", readiness=false. Elapsed: 12.36331ms
Dec 26 14:37:19.327: INFO: Pod "pod-projected-secrets-901f080e-7f44-4fda-a8d3-68dbc612cdfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020460305s
Dec 26 14:37:21.337: INFO: Pod "pod-projected-secrets-901f080e-7f44-4fda-a8d3-68dbc612cdfe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030097307s
Dec 26 14:37:23.350: INFO: Pod "pod-projected-secrets-901f080e-7f44-4fda-a8d3-68dbc612cdfe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043187799s
Dec 26 14:37:25.360: INFO: Pod "pod-projected-secrets-901f080e-7f44-4fda-a8d3-68dbc612cdfe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053515457s
Dec 26 14:37:27.370: INFO: Pod "pod-projected-secrets-901f080e-7f44-4fda-a8d3-68dbc612cdfe": Phase="Pending", Reason="", readiness=false. Elapsed: 10.063495262s
Dec 26 14:37:29.377: INFO: Pod "pod-projected-secrets-901f080e-7f44-4fda-a8d3-68dbc612cdfe": Phase="Pending", Reason="", readiness=false. Elapsed: 12.070686934s
Dec 26 14:37:31.386: INFO: Pod "pod-projected-secrets-901f080e-7f44-4fda-a8d3-68dbc612cdfe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.079652268s
STEP: Saw pod success
Dec 26 14:37:31.386: INFO: Pod "pod-projected-secrets-901f080e-7f44-4fda-a8d3-68dbc612cdfe" satisfied condition "success or failure"
Dec 26 14:37:31.391: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-901f080e-7f44-4fda-a8d3-68dbc612cdfe container projected-secret-volume-test:
STEP: delete the pod
Dec 26 14:37:31.490: INFO: Waiting for pod pod-projected-secrets-901f080e-7f44-4fda-a8d3-68dbc612cdfe to disappear
Dec 26 14:37:31.506: INFO: Pod pod-projected-secrets-901f080e-7f44-4fda-a8d3-68dbc612cdfe no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:37:31.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2923" for this suite.
Dec 26 14:37:37.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:37:37.671: INFO: namespace projected-2923 deletion completed in 6.154467645s

• [SLOW TEST:20.599 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
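defaultMode and fsGroup together control the permissions and group ownership of the projected files, which this spec inspects as a non-root user. A sketch; names, UID/GID, and mode are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo       # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                 # non-root
    fsGroup: 2000                   # volume files get this group
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "ls -ln /etc/projected-secret"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected-secret
  volumes:
  - name: secret-vol
    projected:
      defaultMode: 0440             # mode applied to the projected files
      sources:
      - secret:
          name: projected-secret-data   # illustrative Secret name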
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:37:37.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 26 14:37:37.795: INFO: Waiting up to 5m0s for pod "pod-5fa24e92-d46e-458d-92de-3ee42b713508" in namespace "emptydir-7460" to be "success or failure"
Dec 26 14:37:37.828: INFO: Pod "pod-5fa24e92-d46e-458d-92de-3ee42b713508": Phase="Pending", Reason="", readiness=false. Elapsed: 32.254231ms
Dec 26 14:37:39.837: INFO: Pod "pod-5fa24e92-d46e-458d-92de-3ee42b713508": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041280803s
Dec 26 14:37:41.854: INFO: Pod "pod-5fa24e92-d46e-458d-92de-3ee42b713508": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05892928s
Dec 26 14:37:43.874: INFO: Pod "pod-5fa24e92-d46e-458d-92de-3ee42b713508": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078500381s
Dec 26 14:37:45.895: INFO: Pod "pod-5fa24e92-d46e-458d-92de-3ee42b713508": Phase="Pending", Reason="", readiness=false. Elapsed: 8.099790548s
Dec 26 14:37:47.906: INFO: Pod "pod-5fa24e92-d46e-458d-92de-3ee42b713508": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.110389771s
STEP: Saw pod success
Dec 26 14:37:47.906: INFO: Pod "pod-5fa24e92-d46e-458d-92de-3ee42b713508" satisfied condition "success or failure"
Dec 26 14:37:47.912: INFO: Trying to get logs from node iruya-node pod pod-5fa24e92-d46e-458d-92de-3ee42b713508 container test-container: 
STEP: delete the pod
Dec 26 14:37:48.026: INFO: Waiting for pod pod-5fa24e92-d46e-458d-92de-3ee42b713508 to disappear
Dec 26 14:37:48.039: INFO: Pod pod-5fa24e92-d46e-458d-92de-3ee42b713508 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:37:48.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7460" for this suite.
Dec 26 14:37:54.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:37:54.234: INFO: namespace emptydir-7460 deletion completed in 6.17837917s

• [SLOW TEST:16.562 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
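The (root,0644,default) triple in the test name pins down what the pod checks: running as root, a file created with mode 0644, on an emptyDir backed by the default medium (node disk rather than tmpfs). A rough illustrative equivalent, with assumed names and a stand-in for the test's mounttest image:

```python
# Illustrative sketch, not the e2e framework's code.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

v1.create_namespaced_pod("default", {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "emptydir-0644-demo"},  # hypothetical name
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "test-container",
            "image": "busybox",
            # write a file as root with mode 0644, then print its mode/owner
            "command": ["sh", "-c",
                        "echo hello > /test/file && chmod 0644 /test/file"
                        " && stat -c '%a %U' /test/file"],
            "volumeMounts": [{"name": "test-volume", "mountPath": "/test"}],
        }],
        # {} means the default medium, i.e. backed by node storage
        "volumes": [{"name": "test-volume", "emptyDir": {}}],
    },
})
```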
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:37:54.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:37:59.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9751" for this suite.
Dec 26 14:38:05.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:38:06.041: INFO: namespace watch-9751 deletion completed in 6.243476949s

• [SLOW TEST:11.806 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
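The property being verified is that watches opened from the same resourceVersion observe the same event sequence. A loose sketch of that idea with the Python client, using ConfigMaps as the watched objects and assuming some churn is happening in the namespace while it runs:

```python
# Loose sketch of the ordering guarantee, not the e2e test's Go code.
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

def observed_versions(start_rv, count=10):
    # Collect the resourceVersions of the first `count` events seen by a
    # watch opened at start_rv.
    seen = []
    w = watch.Watch()
    for event in w.stream(v1.list_namespaced_config_map, namespace="default",
                          resource_version=start_rv, timeout_seconds=30):
        seen.append(event["object"].metadata.resource_version)
        if len(seen) >= count:
            w.stop()
    return seen

# Two watches replayed from the same starting resourceVersion should agree
# on the order of events (assuming the history is still within etcd's window).
rv = v1.list_namespaced_config_map("default").metadata.resource_version
assert observed_versions(rv) == observed_versions(rv)
```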
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:38:06.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Dec 26 14:38:06.090: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:38:06.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-157" for this suite.
Dec 26 14:38:12.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:38:12.347: INFO: namespace kubectl-157 deletion completed in 6.145861387s

• [SLOW TEST:6.305 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0 [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
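`kubectl proxy -p 0` asks the OS for a free port and prints the address it actually bound, which is how the test discovers where to curl /api/. A hypothetical re-creation of those two steps (the kubeconfig path is taken from this run; everything else is illustrative):

```python
# Sketch of the test's two steps: start the proxy on a random port,
# parse the bound port from stdout, then fetch /api/.
import re
import subprocess
import urllib.request

# kubectl prints e.g. "Starting to serve on 127.0.0.1:37041" on stdout.
proc = subprocess.Popen(
    ["kubectl", "--kubeconfig", "/root/.kube/config",
     "proxy", "-p", "0", "--disable-filter"],
    stdout=subprocess.PIPE, text=True)
port = re.search(r":(\d+)\s*$", proc.stdout.readline().strip()).group(1)

# The equivalent of the test's "curling proxy /api/ output" step.
print(urllib.request.urlopen(f"http://127.0.0.1:{port}/api/").read().decode())
proc.terminate()
```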
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:38:12.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3329
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 26 14:38:12.603: INFO: Found 0 stateful pods, waiting for 3
Dec 26 14:38:22.617: INFO: Found 2 stateful pods, waiting for 3
Dec 26 14:38:32.620: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 14:38:32.620: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 14:38:32.620: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 26 14:38:42.634: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 14:38:42.634: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 14:38:42.634: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 26 14:38:42.733: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 26 14:38:52.832: INFO: Updating stateful set ss2
Dec 26 14:38:52.911: INFO: Waiting for Pod statefulset-3329/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 26 14:39:02.935: INFO: Waiting for Pod statefulset-3329/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 26 14:39:13.399: INFO: Found 2 stateful pods, waiting for 3
Dec 26 14:39:23.413: INFO: Found 2 stateful pods, waiting for 3
Dec 26 14:39:33.407: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 14:39:33.408: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 14:39:33.408: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 26 14:39:43.411: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 14:39:43.411: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 14:39:43.411: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 26 14:39:43.447: INFO: Updating stateful set ss2
Dec 26 14:39:43.459: INFO: Waiting for Pod statefulset-3329/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 26 14:39:53.473: INFO: Waiting for Pod statefulset-3329/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 26 14:40:04.731: INFO: Updating stateful set ss2
Dec 26 14:40:05.113: INFO: Waiting for StatefulSet statefulset-3329/ss2 to complete update
Dec 26 14:40:05.113: INFO: Waiting for Pod statefulset-3329/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 26 14:40:15.123: INFO: Waiting for StatefulSet statefulset-3329/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 26 14:40:25.125: INFO: Deleting all statefulset in ns statefulset-3329
Dec 26 14:40:25.130: INFO: Scaling statefulset ss2 to 0
Dec 26 14:41:05.160: INFO: Waiting for statefulset status.replicas updated to 0
Dec 26 14:41:05.165: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:41:06.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3329" for this suite.
Dec 26 14:41:14.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:41:14.508: INFO: namespace statefulset-3329 deletion completed in 8.331173492s

• [SLOW TEST:182.161 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
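Both behaviours exercised above hinge on `spec.updateStrategy.rollingUpdate.partition`: pods with ordinal >= partition are recreated with the new template revision, lower ordinals keep the old one. A sketch of the two patches (namespace, set name, and image taken from this run; the client call itself is illustrative, not the framework's code):

```python
# Sketch of a canary then phased roll-out via the partition field.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Canary: with partition=2 only ss2-2 picks up the new image;
# ss2-0 and ss2-1 stay on the old revision.
apps.patch_namespaced_stateful_set(
    name="ss2", namespace="statefulset-3329",
    body={"spec": {
        "updateStrategy": {"type": "RollingUpdate",
                           "rollingUpdate": {"partition": 2}},
        "template": {"spec": {"containers": [
            {"name": "nginx", "image": "docker.io/library/nginx:1.15-alpine"}]}},
    }})

# Phased roll-out: lowering the partition moves the boundary; at 0 the
# controller updates every remaining pod in ordinal order.
apps.patch_namespaced_stateful_set(
    name="ss2", namespace="statefulset-3329",
    body={"spec": {"updateStrategy": {"rollingUpdate": {"partition": 0}}}})
```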
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:41:14.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Dec 26 14:41:14.634: INFO: Waiting up to 5m0s for pod "var-expansion-352ff60d-8847-41ad-95ae-7ca73aea8dc0" in namespace "var-expansion-7046" to be "success or failure"
Dec 26 14:41:14.663: INFO: Pod "var-expansion-352ff60d-8847-41ad-95ae-7ca73aea8dc0": Phase="Pending", Reason="", readiness=false. Elapsed: 28.814767ms
Dec 26 14:41:16.672: INFO: Pod "var-expansion-352ff60d-8847-41ad-95ae-7ca73aea8dc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038096881s
Dec 26 14:41:18.693: INFO: Pod "var-expansion-352ff60d-8847-41ad-95ae-7ca73aea8dc0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058821626s
Dec 26 14:41:20.699: INFO: Pod "var-expansion-352ff60d-8847-41ad-95ae-7ca73aea8dc0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065446937s
Dec 26 14:41:22.710: INFO: Pod "var-expansion-352ff60d-8847-41ad-95ae-7ca73aea8dc0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075779424s
Dec 26 14:41:24.724: INFO: Pod "var-expansion-352ff60d-8847-41ad-95ae-7ca73aea8dc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.090165553s
STEP: Saw pod success
Dec 26 14:41:24.724: INFO: Pod "var-expansion-352ff60d-8847-41ad-95ae-7ca73aea8dc0" satisfied condition "success or failure"
Dec 26 14:41:24.734: INFO: Trying to get logs from node iruya-node pod var-expansion-352ff60d-8847-41ad-95ae-7ca73aea8dc0 container dapi-container: 
STEP: delete the pod
Dec 26 14:41:24.783: INFO: Waiting for pod var-expansion-352ff60d-8847-41ad-95ae-7ca73aea8dc0 to disappear
Dec 26 14:41:24.789: INFO: Pod var-expansion-352ff60d-8847-41ad-95ae-7ca73aea8dc0 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:41:24.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7046" for this suite.
Dec 26 14:41:30.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:41:30.946: INFO: namespace var-expansion-7046 deletion completed in 6.150269209s

• [SLOW TEST:16.437 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
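The substitution under test is kubelet-level, not shell-level: `$(TEST_VAR)` in `command` is expanded from the container's env before the process ever starts. A minimal illustrative pod (names and values assumed):

```python
# Sketch only; names are illustrative.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

v1.create_namespaced_pod("default", {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "var-expansion-demo"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "dapi-container",
            "image": "busybox",
            "env": [{"name": "TEST_VAR", "value": "test-value"}],
            # $(TEST_VAR) is substituted by the kubelet from env, so the
            # container logs "test-value" even though sh never expands it.
            "command": ["sh", "-c", "echo $(TEST_VAR)"],
        }],
    },
})
```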
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:41:30.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 26 14:41:31.114: INFO: Waiting up to 5m0s for pod "downwardapi-volume-69f74715-283d-489c-bf18-152726c537e5" in namespace "downward-api-9348" to be "success or failure"
Dec 26 14:41:31.140: INFO: Pod "downwardapi-volume-69f74715-283d-489c-bf18-152726c537e5": Phase="Pending", Reason="", readiness=false. Elapsed: 25.807721ms
Dec 26 14:41:33.153: INFO: Pod "downwardapi-volume-69f74715-283d-489c-bf18-152726c537e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038802951s
Dec 26 14:41:35.184: INFO: Pod "downwardapi-volume-69f74715-283d-489c-bf18-152726c537e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070079364s
Dec 26 14:41:37.433: INFO: Pod "downwardapi-volume-69f74715-283d-489c-bf18-152726c537e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.319278879s
Dec 26 14:41:39.440: INFO: Pod "downwardapi-volume-69f74715-283d-489c-bf18-152726c537e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.326352892s
STEP: Saw pod success
Dec 26 14:41:39.441: INFO: Pod "downwardapi-volume-69f74715-283d-489c-bf18-152726c537e5" satisfied condition "success or failure"
Dec 26 14:41:39.444: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-69f74715-283d-489c-bf18-152726c537e5 container client-container: 
STEP: delete the pod
Dec 26 14:41:39.492: INFO: Waiting for pod downwardapi-volume-69f74715-283d-489c-bf18-152726c537e5 to disappear
Dec 26 14:41:39.499: INFO: Pod downwardapi-volume-69f74715-283d-489c-bf18-152726c537e5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:41:39.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9348" for this suite.
Dec 26 14:41:45.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:41:45.839: INFO: namespace downward-api-9348 deletion completed in 6.29581394s

• [SLOW TEST:14.893 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
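This test, and the memory-request variant that follows it, reads the container's own resource requests back through a downwardAPI volume via resourceFieldRef. A combined sketch covering both fields, with assumed request values and names:

```python
# Illustrative sketch; request/limit values and names are assumed.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

v1.create_namespaced_pod("default", {
    "apiVersion": "v1", "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-demo"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "client-container",
            "image": "busybox",
            "command": ["sh", "-c",
                        "cat /etc/podinfo/cpu_request /etc/podinfo/mem_request"],
            "resources": {"requests": {"cpu": "250m", "memory": "64Mi"},
                          "limits": {"cpu": "500m", "memory": "128Mi"}},
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{"name": "podinfo", "downwardAPI": {"items": [
            # resourceFieldRef exposes the container's own requests as files
            {"path": "cpu_request",
             "resourceFieldRef": {"containerName": "client-container",
                                  "resource": "requests.cpu"}},
            {"path": "mem_request",
             "resourceFieldRef": {"containerName": "client-container",
                                  "resource": "requests.memory"}},
        ]}}],
    },
})
```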
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:41:45.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 26 14:41:45.939: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a7d27ca9-d331-40a0-9b51-2ca28e97cad5" in namespace "downward-api-2437" to be "success or failure"
Dec 26 14:41:45.960: INFO: Pod "downwardapi-volume-a7d27ca9-d331-40a0-9b51-2ca28e97cad5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.815662ms
Dec 26 14:41:48.016: INFO: Pod "downwardapi-volume-a7d27ca9-d331-40a0-9b51-2ca28e97cad5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076722073s
Dec 26 14:41:50.027: INFO: Pod "downwardapi-volume-a7d27ca9-d331-40a0-9b51-2ca28e97cad5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087520857s
Dec 26 14:41:52.037: INFO: Pod "downwardapi-volume-a7d27ca9-d331-40a0-9b51-2ca28e97cad5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097205086s
Dec 26 14:41:54.046: INFO: Pod "downwardapi-volume-a7d27ca9-d331-40a0-9b51-2ca28e97cad5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.106110194s
Dec 26 14:41:56.054: INFO: Pod "downwardapi-volume-a7d27ca9-d331-40a0-9b51-2ca28e97cad5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.114456998s
STEP: Saw pod success
Dec 26 14:41:56.054: INFO: Pod "downwardapi-volume-a7d27ca9-d331-40a0-9b51-2ca28e97cad5" satisfied condition "success or failure"
Dec 26 14:41:56.058: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a7d27ca9-d331-40a0-9b51-2ca28e97cad5 container client-container: 
STEP: delete the pod
Dec 26 14:41:56.140: INFO: Waiting for pod downwardapi-volume-a7d27ca9-d331-40a0-9b51-2ca28e97cad5 to disappear
Dec 26 14:41:56.154: INFO: Pod downwardapi-volume-a7d27ca9-d331-40a0-9b51-2ca28e97cad5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:41:56.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2437" for this suite.
Dec 26 14:42:02.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:42:02.414: INFO: namespace downward-api-2437 deletion completed in 6.185373951s

• [SLOW TEST:16.575 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:42:02.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 26 14:42:02.502: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 14.85842ms)
Dec 26 14:42:02.507: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.271655ms)
Dec 26 14:42:02.512: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.079214ms)
Dec 26 14:42:02.531: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 18.407988ms)
Dec 26 14:42:02.536: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.904416ms)
Dec 26 14:42:02.542: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.380427ms)
Dec 26 14:42:02.547: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.43441ms)
Dec 26 14:42:02.551: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.113183ms)
Dec 26 14:42:02.558: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.922231ms)
Dec 26 14:42:02.563: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.337846ms)
Dec 26 14:42:02.567: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.482159ms)
Dec 26 14:42:02.570: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.75991ms)
Dec 26 14:42:02.575: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.83832ms)
Dec 26 14:42:02.580: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.36527ms)
Dec 26 14:42:02.583: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.503775ms)
Dec 26 14:42:02.587: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.298299ms)
Dec 26 14:42:02.592: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.824681ms)
Dec 26 14:42:02.596: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.861668ms)
Dec 26 14:42:02.600: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.966036ms)
Dec 26 14:42:02.605: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.916801ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:42:02.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-324" for this suite.
Dec 26 14:42:08.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:42:09.095: INFO: namespace proxy-324 deletion completed in 6.48737793s

• [SLOW TEST:6.681 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
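Each of the twenty log lines above is one GET proxied through the apiserver's node proxy subresource, with the kubelet port spelled out in the node name segment. A sketch of that request, assuming the Python client's generated helper for the node-proxy-with-path route (the node name is taken from this run):

```python
# Sketch: fetch /api/v1/nodes/iruya-node:10250/proxy/logs/ via the client.
# connect_get_node_proxy_with_path is the generated helper for this route;
# if it is unavailable in your client version, a raw GET on the same URL
# against the apiserver is equivalent.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

index = v1.connect_get_node_proxy_with_path("iruya-node:10250", "logs/")
print(index)  # a directory listing that includes alternatives.log, etc.
```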
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:42:09.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5758
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-5758
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5758
Dec 26 14:42:09.237: INFO: Found 0 stateful pods, waiting for 1
Dec 26 14:42:19.250: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 26 14:42:19.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 26 14:42:20.097: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 26 14:42:20.097: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 26 14:42:20.097: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 26 14:42:20.107: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 26 14:42:30.119: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 26 14:42:30.119: INFO: Waiting for statefulset status.replicas updated to 0
Dec 26 14:42:30.151: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 26 14:42:30.151: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:09 +0000 UTC  }]
Dec 26 14:42:30.151: INFO: 
Dec 26 14:42:30.151: INFO: StatefulSet ss has not reached scale 3, at 1
Dec 26 14:42:31.784: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989361752s
Dec 26 14:42:33.526: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.3565969s
Dec 26 14:42:34.548: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.614366919s
Dec 26 14:42:35.558: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.592409289s
Dec 26 14:42:38.319: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.582114304s
Dec 26 14:42:39.329: INFO: Verifying statefulset ss doesn't scale past 3 for another 821.212563ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5758
Dec 26 14:42:40.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 14:42:41.371: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 26 14:42:41.371: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 26 14:42:41.371: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 26 14:42:41.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 14:42:41.815: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 26 14:42:41.815: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 26 14:42:41.815: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 26 14:42:41.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 14:42:42.426: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 26 14:42:42.426: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 26 14:42:42.426: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 26 14:42:42.433: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 14:42:42.433: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=false
Dec 26 14:42:52.453: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 14:42:52.453: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 14:42:52.453: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Dec 26 14:42:52.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 26 14:42:53.080: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 26 14:42:53.080: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 26 14:42:53.080: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 26 14:42:53.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 26 14:42:53.512: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 26 14:42:53.513: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 26 14:42:53.513: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 26 14:42:53.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 26 14:42:54.421: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 26 14:42:54.421: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 26 14:42:54.421: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 26 14:42:54.421: INFO: Waiting for statefulset status.replicas updated to 0
Dec 26 14:42:54.434: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 26 14:43:04.459: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 26 14:43:04.459: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 26 14:43:04.459: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 26 14:43:04.485: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 26 14:43:04.485: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:09 +0000 UTC  }]
Dec 26 14:43:04.485: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  }]
Dec 26 14:43:04.485: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  }]
Dec 26 14:43:04.485: INFO: 
Dec 26 14:43:04.485: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 26 14:43:07.255: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 26 14:43:07.255: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:09 +0000 UTC  }]
Dec 26 14:43:07.255: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  }]
Dec 26 14:43:07.255: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  }]
Dec 26 14:43:07.255: INFO: 
Dec 26 14:43:07.255: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 26 14:43:08.281: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 26 14:43:08.281: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:09 +0000 UTC  }]
Dec 26 14:43:08.281: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  }]
Dec 26 14:43:08.281: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  }]
Dec 26 14:43:08.281: INFO: 
Dec 26 14:43:08.281: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 26 14:43:09.290: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 26 14:43:09.290: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:09 +0000 UTC  }]
Dec 26 14:43:09.290: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  }]
Dec 26 14:43:09.290: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  }]
Dec 26 14:43:09.290: INFO: 
Dec 26 14:43:09.290: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 26 14:43:10.387: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 26 14:43:10.387: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:09 +0000 UTC  }]
Dec 26 14:43:10.387: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  }]
Dec 26 14:43:10.387: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  }]
Dec 26 14:43:10.387: INFO: 
Dec 26 14:43:10.387: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 26 14:43:11.395: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 26 14:43:11.395: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:09 +0000 UTC  }]
Dec 26 14:43:11.395: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  }]
Dec 26 14:43:11.396: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  }]
Dec 26 14:43:11.396: INFO: 
Dec 26 14:43:11.396: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 26 14:43:12.406: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 26 14:43:12.406: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:09 +0000 UTC  }]
Dec 26 14:43:12.406: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  }]
Dec 26 14:43:12.406: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  }]
Dec 26 14:43:12.406: INFO: 
Dec 26 14:43:12.406: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 26 14:43:13.415: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 26 14:43:13.415: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  }]
Dec 26 14:43:13.415: INFO: 
Dec 26 14:43:13.415: INFO: StatefulSet ss has not reached scale 0, at 1
Dec 26 14:43:14.426: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 26 14:43:14.426: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 14:42:30 +0000 UTC  }]
Dec 26 14:43:14.427: INFO: 
Dec 26 14:43:14.427: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5758
Dec 26 14:43:15.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 14:43:15.666: INFO: rc: 1
Dec 26 14:43:15.666: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc002bb5ef0 exit status 1   true [0xc00176c1f8 0xc00176c210 0xc00176c228] [0xc00176c1f8 0xc00176c210 0xc00176c228] [0xc00176c208 0xc00176c220] [0xba6c50 0xba6c50] 0xc00256b860 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Dec 26 14:43:25.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 14:43:25.818: INFO: rc: 1
Dec 26 14:43:25.818: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002bac000 exit status 1   true [0xc00176c230 0xc00176c248 0xc00176c260] [0xc00176c230 0xc00176c248 0xc00176c260] [0xc00176c240 0xc00176c258] [0xba6c50 0xba6c50] 0xc00256bb60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Dec 26 14:43:35.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 14:43:36.057: INFO: rc: 1
Dec 26 14:43:36.057: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002535500 exit status 1   true [0xc00090f758 0xc00090f7a0 0xc00090f7f0] [0xc00090f758 0xc00090f7a0 0xc00090f7f0] [0xc00090f788 0xc00090f7d0] [0xba6c50 0xba6c50] 0xc0023e9200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Dec 26 14:43:46.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 14:43:46.288: INFO: rc: 1
Dec 26 14:43:46.288: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002bac0f0 exit status 1   true [0xc00176c268 0xc00176c280 0xc00176c298] [0xc00176c268 0xc00176c280 0xc00176c298] [0xc00176c278 0xc00176c290] [0xba6c50 0xba6c50] 0xc00256bec0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Dec 26 14:43:56.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 14:43:56.517: INFO: rc: 1
Dec 26 14:43:56.518: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001f547b0 exit status 1   true [0xc001aec568 0xc001aec580 0xc001aec598] [0xc001aec568 0xc001aec580 0xc001aec598] [0xc001aec578 0xc001aec590] [0xba6c50 0xba6c50] 0xc002c63860 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Dec 26 14:44:06.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 14:44:06.694: INFO: rc: 1
Dec 26 14:44:06.694: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001f54870 exit status 1   true [0xc001aec5a0 0xc001aec5b8 0xc001aec5d0] [0xc001aec5a0 0xc001aec5b8 0xc001aec5d0] [0xc001aec5b0 0xc001aec5c8] [0xba6c50 0xba6c50] 0xc001f9e1e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Dec 26 14:44:16.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 14:44:16.886: INFO: rc: 1
Dec 26 14:44:16.886: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002bb4090 exit status 1   true [0xc000358ef8 0xc000359010 0xc000359088] [0xc000358ef8 0xc000359010 0xc000359088] [0xc000358ff0 0xc000359080] [0xba6c50 0xba6c50] 0xc001fd88a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Dec 26 14:44:26.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 14:44:27.033: INFO: rc: 1
Dec 26 14:44:27.033: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002bb4150 exit status 1   true [0xc000359108 0xc000359188 0xc000359270] [0xc000359108 0xc000359188 0xc000359270] [0xc000359178 0xc000359260] [0xba6c50 0xba6c50] 0xc0020866c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
[... 22 further identical retry attempts, one every ~10s from 14:44:37 through 14:48:13, each exiting rc 1 with stderr: Error from server (NotFound): pods "ss-2" not found ...]
Dec 26 14:48:23.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 14:48:23.498: INFO: rc: 1
Dec 26 14:48:23.499: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
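The repeated NotFound above is the burst scale-down at work: pod ss-2 was deleted out from under the RunHostCmd helper, which keeps retrying every 10s until its budget expires and then records the empty stdout above. A rough manual sketch of that retry cadence, using the exact command from the log; the 24-attempt budget is inferred from the ~4-minute span, not taken from the source:

    # sketch only: retry the exec every 10s until it reaches the pod
    for attempt in $(seq 1 24); do            # 24 x 10s is an assumption
      kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5758 ss-2 -- \
        /bin/sh -x -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true' && break
      sleep 10
    done

Note that because the remote command ends in '|| true', the exec exits 0 whenever the pod exists; rc 1 here only ever means the exec itself could not reach ss-2.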
Dec 26 14:48:23.499: INFO: Scaling statefulset ss to 0
Dec 26 14:48:23.509: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 26 14:48:23.516: INFO: Deleting all statefulset in ns statefulset-5758
Dec 26 14:48:23.519: INFO: Scaling statefulset ss to 0
Dec 26 14:48:23.526: INFO: Waiting for statefulset status.replicas updated to 0
Dec 26 14:48:23.528: INFO: Deleting statefulset ss
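The teardown above scales the set to zero, polls status.replicas, then deletes it. The test drives this through the API; a hedged CLI equivalent would be roughly:

    kubectl -n statefulset-5758 scale statefulset ss --replicas=0
    kubectl -n statefulset-5758 get statefulset ss -o jsonpath='{.status.replicas}'   # poll until 0
    kubectl -n statefulset-5758 delete statefulset ss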
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:48:23.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5758" for this suite.
Dec 26 14:48:29.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:48:29.787: INFO: namespace statefulset-5758 deletion completed in 6.152334143s

• [SLOW TEST:380.690 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:48:29.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Dec 26 14:48:29.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4222'
Dec 26 14:48:30.184: INFO: stderr: ""
Dec 26 14:48:30.184: INFO: stdout: "replicationcontroller/redis-master created\n"
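The manifest piped to 'kubectl create -f -' is not echoed in the log; only the resource name (redis-master) and the app=redis selector used below are visible. A sketch of what such a ReplicationController could look like; the image and port are assumptions:

    # sketch of the kind of manifest the test pipes in; image/port assumed
    cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4222
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: redis-master
    spec:
      replicas: 1
      selector:
        app: redis
      template:
        metadata:
          labels:
            app: redis
        spec:
          containers:
          - name: redis-master
            image: redis          # assumed image
            ports:
            - containerPort: 6379
    EOF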
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Dec 26 14:48:31.193: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 14:48:31.193: INFO: Found 0 / 1
[... same pair of lines repeated once per second (Selector matched 1 pods, Found 0 / 1) through 14:48:38 ...]
Dec 26 14:48:39.210: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 14:48:39.210: INFO: Found 1 / 1
Dec 26 14:48:39.210: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 26 14:48:39.214: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 14:48:39.214: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
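The Found 0 / 1 progression above is a once-per-second poll until the selector's pod is running and ready; a hypothetical manual equivalent:

    kubectl -n kubectl-4222 get pods -l app=redis -w   # watch until READY shows 1/1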
STEP: checking for matching strings
Dec 26 14:48:39.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-g75cs redis-master --namespace=kubectl-4222'
Dec 26 14:48:39.380: INFO: stderr: ""
Dec 26 14:48:39.380: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 26 Dec 14:48:37.214 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 26 Dec 14:48:37.214 # Server started, Redis version 3.2.12\n1:M 26 Dec 14:48:37.215 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 26 Dec 14:48:37.215 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Dec 26 14:48:39.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-g75cs redis-master --namespace=kubectl-4222 --tail=1'
Dec 26 14:48:39.563: INFO: stderr: ""
Dec 26 14:48:39.563: INFO: stdout: "1:M 26 Dec 14:48:37.215 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Dec 26 14:48:39.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-g75cs redis-master --namespace=kubectl-4222 --limit-bytes=1'
Dec 26 14:48:39.678: INFO: stderr: ""
Dec 26 14:48:39.678: INFO: stdout: " "
STEP: exposing timestamps
Dec 26 14:48:39.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-g75cs redis-master --namespace=kubectl-4222 --tail=1 --timestamps'
Dec 26 14:48:39.836: INFO: stderr: ""
Dec 26 14:48:39.836: INFO: stdout: "2019-12-26T14:48:37.216113813Z 1:M 26 Dec 14:48:37.215 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Dec 26 14:48:42.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-g75cs redis-master --namespace=kubectl-4222 --since=1s'
Dec 26 14:48:42.620: INFO: stderr: ""
Dec 26 14:48:42.621: INFO: stdout: ""
Dec 26 14:48:42.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-g75cs redis-master --namespace=kubectl-4222 --since=24h'
Dec 26 14:48:42.781: INFO: stderr: ""
Dec 26 14:48:42.781: INFO: stdout: [full Redis startup banner and log lines, byte-identical to the unfiltered kubectl logs output above]
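A few readings on the filter results above: --tail=1 returns only the final log line; --limit-bytes=1 returns the log's first byte, which is a space from the banner art; --since=1s is empty because Redis logged nothing in the preceding second, while --since=24h replays everything. kubectl also accepts an absolute cutoff via --since-time; the timestamp below is illustrative:

    kubectl -n kubectl-4222 logs redis-master-g75cs redis-master --since-time=2019-12-26T14:48:37Z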
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Dec 26 14:48:42.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4222'
Dec 26 14:48:42.911: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 26 14:48:42.911: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Dec 26 14:48:42.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-4222'
Dec 26 14:48:43.017: INFO: stderr: "No resources found.\n"
Dec 26 14:48:43.017: INFO: stdout: ""
Dec 26 14:48:43.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-4222 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 26 14:48:43.167: INFO: stderr: ""
Dec 26 14:48:43.167: INFO: stdout: ""
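The cleanup above force-deletes the rc (hence the warning about skipping graceful termination), then verifies nothing is left behind. The go-template in the last check prints only pods whose deletionTimestamp is unset, so empty stdout means every matched pod is gone or already terminating. The same template, reformatted for readability:

    kubectl -n kubectl-4222 get pods -l name=nginx -o go-template='
    {{- range .items -}}
      {{- if not .metadata.deletionTimestamp -}}
        {{ .metadata.name }}{{ "\n" }}
      {{- end -}}
    {{- end -}}'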
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:48:43.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4222" for this suite.
Dec 26 14:49:05.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:49:05.353: INFO: namespace kubectl-4222 deletion completed in 22.159502272s

• [SLOW TEST:35.566 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:49:05.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3612.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3612.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

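Both probe pods run the same loop: up to 600 one-second iterations, each doing a UDP and a TCP dig for the cluster service name and for the pod's own A record, touching an OK file in /results on success. The $$ doubling is escaping inside the test's command string; one iteration written as plain shell (names and paths as in the log) would look like:

    check="$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" \
      && test -n "$check" \
      && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local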
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 26 14:49:17.552: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-3612/dns-test-54cee4cc-a943-4b7e-89f0-5b95b834bc65: the server could not find the requested resource (get pods dns-test-54cee4cc-a943-4b7e-89f0-5b95b834bc65)
Dec 26 14:49:17.557: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-3612/dns-test-54cee4cc-a943-4b7e-89f0-5b95b834bc65: the server could not find the requested resource (get pods dns-test-54cee4cc-a943-4b7e-89f0-5b95b834bc65)
Dec 26 14:49:17.561: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3612/dns-test-54cee4cc-a943-4b7e-89f0-5b95b834bc65: the server could not find the requested resource (get pods dns-test-54cee4cc-a943-4b7e-89f0-5b95b834bc65)
Dec 26 14:49:17.566: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3612/dns-test-54cee4cc-a943-4b7e-89f0-5b95b834bc65: the server could not find the requested resource (get pods dns-test-54cee4cc-a943-4b7e-89f0-5b95b834bc65)
Dec 26 14:49:17.573: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-3612/dns-test-54cee4cc-a943-4b7e-89f0-5b95b834bc65: the server could not find the requested resource (get pods dns-test-54cee4cc-a943-4b7e-89f0-5b95b834bc65)
Dec 26 14:49:17.651: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-3612/dns-test-54cee4cc-a943-4b7e-89f0-5b95b834bc65: the server could not find the requested resource (get pods dns-test-54cee4cc-a943-4b7e-89f0-5b95b834bc65)
Dec 26 14:49:17.658: INFO: Unable to read jessie_udp@PodARecord from pod dns-3612/dns-test-54cee4cc-a943-4b7e-89f0-5b95b834bc65: the server could not find the requested resource (get pods dns-test-54cee4cc-a943-4b7e-89f0-5b95b834bc65)
Dec 26 14:49:17.663: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3612/dns-test-54cee4cc-a943-4b7e-89f0-5b95b834bc65: the server could not find the requested resource (get pods dns-test-54cee4cc-a943-4b7e-89f0-5b95b834bc65)
Dec 26 14:49:17.663: INFO: Lookups using dns-3612/dns-test-54cee4cc-a943-4b7e-89f0-5b95b834bc65 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 26 14:49:22.972: INFO: DNS probes using dns-3612/dns-test-54cee4cc-a943-4b7e-89f0-5b95b834bc65 succeeded

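The fail-then-succeed pattern above is expected: the framework starts reading /results before the probe containers have finished their first loop iterations, and retries until every expected name reports OK. A hypothetical manual spot-check of one result file (the container name is an assumption):

    kubectl -n dns-3612 exec dns-test-54cee4cc-a943-4b7e-89f0-5b95b834bc65 -c webserver -- \
      cat '/results/wheezy_udp@kubernetes.default.svc.cluster.local'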
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:49:23.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3612" for this suite.
Dec 26 14:49:29.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:49:29.465: INFO: namespace dns-3612 deletion completed in 6.261672584s

• [SLOW TEST:24.110 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:49:29.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Dec 26 14:49:29.559: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9064" to be "success or failure"
Dec 26 14:49:29.622: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 62.449133ms
Dec 26 14:49:31.630: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070135406s
Dec 26 14:49:33.648: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088136474s
Dec 26 14:49:35.662: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102330876s
Dec 26 14:49:37.669: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109640712s
Dec 26 14:49:39.676: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.116387373s
Dec 26 14:49:41.683: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.123963036s
STEP: Saw pod success
Dec 26 14:49:41.683: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 26 14:49:41.687: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Dec 26 14:49:41.836: INFO: Waiting for pod pod-host-path-test to disappear
Dec 26 14:49:41.857: INFO: Pod pod-host-path-test no longer exists
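The pod spec under test is not printed in the log. A minimal sketch of a pod that mounts a hostPath volume and reports its mode, in the spirit of this test; the image, command, and host path are illustrative, not the conformance fixture:

    # sketch only: image, command, and path are assumptions
    cat <<'EOF' | kubectl create -f - --namespace=hostpath-9064
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-host-path-test
    spec:
      restartPolicy: Never
      containers:
      - name: test-container-1
        image: busybox
        command: ["sh", "-c", "ls -ld /test-volume"]   # print the volume's mode
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        hostPath:
          path: /tmp
    EOF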
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:49:41.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-9064" for this suite.
Dec 26 14:49:47.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:49:48.089: INFO: namespace hostpath-9064 deletion completed in 6.223007334s

• [SLOW TEST:18.624 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:49:48.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-4188
I1226 14:49:48.218021       9 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4188, replica count: 1
I1226 14:49:49.268712       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
[... identical status line repeated once per second through 14:49:56 ...]
I1226 14:49:57.271704       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
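What follows measures endpoint programming latency: the test repeatedly creates a Service selecting the svc-latency-rc pod, timestamps the creation ("Created"), and records how long until an address appears in the service's endpoints ("Got endpoints"). A rough single-shot manual equivalent; the service name and port are hypothetical:

    kubectl -n svc-latency-4188 expose rc svc-latency-rc --name=latency-svc-demo --port=80   # port assumed
    kubectl -n svc-latency-4188 get endpoints latency-svc-demo -w   # watch until an address appears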
Dec 26 14:49:57.423: INFO: Created: latency-svc-k5b59
Dec 26 14:49:57.442: INFO: Got endpoints: latency-svc-k5b59 [70.449984ms]
Dec 26 14:49:57.580: INFO: Created: latency-svc-6xcdr
Dec 26 14:49:57.635: INFO: Created: latency-svc-49f5c
Dec 26 14:49:57.636: INFO: Got endpoints: latency-svc-6xcdr [193.136732ms]
Dec 26 14:49:57.757: INFO: Got endpoints: latency-svc-49f5c [314.481537ms]
Dec 26 14:49:57.810: INFO: Created: latency-svc-pt499
Dec 26 14:49:57.829: INFO: Got endpoints: latency-svc-pt499 [385.326793ms]
Dec 26 14:49:57.958: INFO: Created: latency-svc-nt9pg
Dec 26 14:49:57.973: INFO: Got endpoints: latency-svc-nt9pg [529.46335ms]
Dec 26 14:49:58.037: INFO: Created: latency-svc-znhk6
Dec 26 14:49:58.042: INFO: Got endpoints: latency-svc-znhk6 [599.118224ms]
Dec 26 14:49:58.126: INFO: Created: latency-svc-vqb5j
Dec 26 14:49:58.131: INFO: Got endpoints: latency-svc-vqb5j [687.458964ms]
Dec 26 14:49:58.190: INFO: Created: latency-svc-224tq
Dec 26 14:49:58.194: INFO: Got endpoints: latency-svc-224tq [750.823136ms]
Dec 26 14:49:58.279: INFO: Created: latency-svc-pbsfh
Dec 26 14:49:58.289: INFO: Got endpoints: latency-svc-pbsfh [845.62138ms]
Dec 26 14:49:58.325: INFO: Created: latency-svc-m5tcc
Dec 26 14:49:58.343: INFO: Got endpoints: latency-svc-m5tcc [899.596472ms]
Dec 26 14:49:58.385: INFO: Created: latency-svc-8zblp
Dec 26 14:49:58.450: INFO: Got endpoints: latency-svc-8zblp [1.006349601s]
Dec 26 14:49:58.493: INFO: Created: latency-svc-gs6ws
Dec 26 14:49:58.506: INFO: Got endpoints: latency-svc-gs6ws [1.062129192s]
Dec 26 14:49:58.549: INFO: Created: latency-svc-zshzn
Dec 26 14:49:58.649: INFO: Created: latency-svc-fz4ch
Dec 26 14:49:58.668: INFO: Got endpoints: latency-svc-zshzn [1.224416186s]
Dec 26 14:49:58.705: INFO: Got endpoints: latency-svc-fz4ch [1.26087846s]
Dec 26 14:49:58.711: INFO: Created: latency-svc-f6rgx
Dec 26 14:49:58.775: INFO: Got endpoints: latency-svc-f6rgx [1.33119115s]
Dec 26 14:49:58.812: INFO: Created: latency-svc-tp4ws
Dec 26 14:49:58.819: INFO: Got endpoints: latency-svc-tp4ws [1.375650269s]
Dec 26 14:49:58.860: INFO: Created: latency-svc-llzmq
Dec 26 14:49:58.873: INFO: Got endpoints: latency-svc-llzmq [1.23675026s]
Dec 26 14:49:59.005: INFO: Created: latency-svc-vjxs6
Dec 26 14:49:59.024: INFO: Got endpoints: latency-svc-vjxs6 [1.266434904s]
Dec 26 14:49:59.208: INFO: Created: latency-svc-kbspb
Dec 26 14:49:59.225: INFO: Got endpoints: latency-svc-kbspb [1.395759225s]
Dec 26 14:49:59.288: INFO: Created: latency-svc-fc5n4
Dec 26 14:49:59.290: INFO: Got endpoints: latency-svc-fc5n4 [1.316408417s]
Dec 26 14:49:59.502: INFO: Created: latency-svc-hbdkt
Dec 26 14:49:59.506: INFO: Got endpoints: latency-svc-hbdkt [1.464531212s]
Dec 26 14:49:59.795: INFO: Created: latency-svc-cjjs4
Dec 26 14:49:59.827: INFO: Got endpoints: latency-svc-cjjs4 [1.696174759s]
Dec 26 14:49:59.876: INFO: Created: latency-svc-vn8h4
Dec 26 14:49:59.953: INFO: Got endpoints: latency-svc-vn8h4 [1.758842001s]
Dec 26 14:50:00.033: INFO: Created: latency-svc-rhhk2
Dec 26 14:50:00.034: INFO: Got endpoints: latency-svc-rhhk2 [1.744939937s]
Dec 26 14:50:00.114: INFO: Created: latency-svc-s4mzx
Dec 26 14:50:00.122: INFO: Got endpoints: latency-svc-s4mzx [1.778633002s]
Dec 26 14:50:00.164: INFO: Created: latency-svc-lfn6d
Dec 26 14:50:00.175: INFO: Got endpoints: latency-svc-lfn6d [1.724063819s]
Dec 26 14:50:00.295: INFO: Created: latency-svc-dmsh8
Dec 26 14:50:00.299: INFO: Got endpoints: latency-svc-dmsh8 [1.792495252s]
Dec 26 14:50:00.350: INFO: Created: latency-svc-vvrzb
Dec 26 14:50:00.354: INFO: Got endpoints: latency-svc-vvrzb [1.6856181s]
Dec 26 14:50:00.433: INFO: Created: latency-svc-zblts
Dec 26 14:50:00.444: INFO: Got endpoints: latency-svc-zblts [1.739217118s]
Dec 26 14:50:00.502: INFO: Created: latency-svc-zmxqp
Dec 26 14:50:00.520: INFO: Got endpoints: latency-svc-zmxqp [1.744402036s]
Dec 26 14:50:00.605: INFO: Created: latency-svc-q7n64
Dec 26 14:50:00.615: INFO: Got endpoints: latency-svc-q7n64 [1.795539791s]
Dec 26 14:50:00.645: INFO: Created: latency-svc-6wdtb
Dec 26 14:50:00.647: INFO: Got endpoints: latency-svc-6wdtb [1.774238054s]
Dec 26 14:50:00.684: INFO: Created: latency-svc-vr8gz
Dec 26 14:50:00.777: INFO: Got endpoints: latency-svc-vr8gz [1.752806043s]
Dec 26 14:50:00.791: INFO: Created: latency-svc-s7c7r
Dec 26 14:50:00.803: INFO: Got endpoints: latency-svc-s7c7r [1.57779481s]
Dec 26 14:50:00.833: INFO: Created: latency-svc-rqb4f
Dec 26 14:50:00.841: INFO: Got endpoints: latency-svc-rqb4f [1.550491114s]
Dec 26 14:50:00.954: INFO: Created: latency-svc-29624
Dec 26 14:50:00.965: INFO: Got endpoints: latency-svc-29624 [1.458367459s]
Dec 26 14:50:01.021: INFO: Created: latency-svc-q8kt5
Dec 26 14:50:01.214: INFO: Got endpoints: latency-svc-q8kt5 [1.386544739s]
Dec 26 14:50:01.225: INFO: Created: latency-svc-dv96t
Dec 26 14:50:01.233: INFO: Got endpoints: latency-svc-dv96t [1.279316941s]
Dec 26 14:50:01.282: INFO: Created: latency-svc-cgbdk
Dec 26 14:50:01.292: INFO: Got endpoints: latency-svc-cgbdk [1.257782469s]
Dec 26 14:50:01.386: INFO: Created: latency-svc-w64gs
Dec 26 14:50:01.398: INFO: Got endpoints: latency-svc-w64gs [1.275172304s]
Dec 26 14:50:01.437: INFO: Created: latency-svc-ffbzt
Dec 26 14:50:01.457: INFO: Got endpoints: latency-svc-ffbzt [1.282648886s]
Dec 26 14:50:01.582: INFO: Created: latency-svc-s6g2t
Dec 26 14:50:01.654: INFO: Got endpoints: latency-svc-s6g2t [1.355268474s]
Dec 26 14:50:01.675: INFO: Created: latency-svc-crs67
Dec 26 14:50:01.680: INFO: Got endpoints: latency-svc-crs67 [1.325529135s]
Dec 26 14:50:01.816: INFO: Created: latency-svc-k6gl8
Dec 26 14:50:01.836: INFO: Got endpoints: latency-svc-k6gl8 [1.39164542s]
Dec 26 14:50:01.844: INFO: Created: latency-svc-2wbfv
Dec 26 14:50:01.856: INFO: Got endpoints: latency-svc-2wbfv [1.335469745s]
Dec 26 14:50:01.958: INFO: Created: latency-svc-wkrx5
Dec 26 14:50:01.984: INFO: Got endpoints: latency-svc-wkrx5 [1.368621476s]
Dec 26 14:50:02.009: INFO: Created: latency-svc-6prwg
Dec 26 14:50:02.022: INFO: Got endpoints: latency-svc-6prwg [1.374590098s]
Dec 26 14:50:02.151: INFO: Created: latency-svc-56nvj
Dec 26 14:50:02.190: INFO: Got endpoints: latency-svc-56nvj [1.412026549s]
Dec 26 14:50:02.239: INFO: Created: latency-svc-2m74q
Dec 26 14:50:02.342: INFO: Got endpoints: latency-svc-2m74q [1.538640522s]
Dec 26 14:50:02.366: INFO: Created: latency-svc-m4zv4
Dec 26 14:50:02.378: INFO: Got endpoints: latency-svc-m4zv4 [187.506755ms]
Dec 26 14:50:02.433: INFO: Created: latency-svc-qxj7z
Dec 26 14:50:02.535: INFO: Got endpoints: latency-svc-qxj7z [1.694429937s]
Dec 26 14:50:02.582: INFO: Created: latency-svc-2f99p
Dec 26 14:50:02.620: INFO: Got endpoints: latency-svc-2f99p [1.65511262s]
Dec 26 14:50:02.728: INFO: Created: latency-svc-xwhtf
Dec 26 14:50:02.752: INFO: Got endpoints: latency-svc-xwhtf [1.537896789s]
Dec 26 14:50:02.810: INFO: Created: latency-svc-wtxgl
Dec 26 14:50:02.929: INFO: Got endpoints: latency-svc-wtxgl [1.695323613s]
Dec 26 14:50:02.931: INFO: Created: latency-svc-xgz9l
Dec 26 14:50:03.002: INFO: Got endpoints: latency-svc-xgz9l [1.709309159s]
Dec 26 14:50:03.106: INFO: Created: latency-svc-chllm
Dec 26 14:50:03.118: INFO: Got endpoints: latency-svc-chllm [1.720581227s]
Dec 26 14:50:03.250: INFO: Created: latency-svc-v9c7j
Dec 26 14:50:03.258: INFO: Got endpoints: latency-svc-v9c7j [1.800538704s]
Dec 26 14:50:03.300: INFO: Created: latency-svc-9kmgz
Dec 26 14:50:03.308: INFO: Got endpoints: latency-svc-9kmgz [1.654219603s]
Dec 26 14:50:03.413: INFO: Created: latency-svc-vn7l8
Dec 26 14:50:03.420: INFO: Got endpoints: latency-svc-vn7l8 [1.739905543s]
Dec 26 14:50:03.474: INFO: Created: latency-svc-jsqg8
Dec 26 14:50:03.483: INFO: Got endpoints: latency-svc-jsqg8 [1.647120329s]
Dec 26 14:50:03.565: INFO: Created: latency-svc-b2sth
Dec 26 14:50:03.575: INFO: Got endpoints: latency-svc-b2sth [1.718443774s]
Dec 26 14:50:03.667: INFO: Created: latency-svc-2c7bx
Dec 26 14:50:03.714: INFO: Got endpoints: latency-svc-2c7bx [1.72968557s]
Dec 26 14:50:03.796: INFO: Created: latency-svc-7c2qg
Dec 26 14:50:03.813: INFO: Got endpoints: latency-svc-7c2qg [1.791492118s]
Dec 26 14:50:03.896: INFO: Created: latency-svc-pnzf9
Dec 26 14:50:03.953: INFO: Got endpoints: latency-svc-pnzf9 [1.611170679s]
Dec 26 14:50:04.025: INFO: Created: latency-svc-4kksx
Dec 26 14:50:04.038: INFO: Got endpoints: latency-svc-4kksx [1.659809445s]
Dec 26 14:50:04.101: INFO: Created: latency-svc-zdvfn
Dec 26 14:50:04.104: INFO: Got endpoints: latency-svc-zdvfn [1.568197487s]
Dec 26 14:50:04.266: INFO: Created: latency-svc-nfdr2
Dec 26 14:50:04.275: INFO: Got endpoints: latency-svc-nfdr2 [1.654301149s]
Dec 26 14:50:04.332: INFO: Created: latency-svc-qb64s
Dec 26 14:50:04.457: INFO: Got endpoints: latency-svc-qb64s [1.704129061s]
Dec 26 14:50:04.513: INFO: Created: latency-svc-xt74r
Dec 26 14:50:04.548: INFO: Got endpoints: latency-svc-xt74r [1.619268134s]
Dec 26 14:50:04.713: INFO: Created: latency-svc-ptgfp
Dec 26 14:50:04.721: INFO: Got endpoints: latency-svc-ptgfp [1.7187379s]
Dec 26 14:50:04.782: INFO: Created: latency-svc-gf9lj
Dec 26 14:50:04.798: INFO: Got endpoints: latency-svc-gf9lj [1.679603493s]
Dec 26 14:50:04.998: INFO: Created: latency-svc-zpstk
Dec 26 14:50:04.998: INFO: Got endpoints: latency-svc-zpstk [1.739920395s]
Dec 26 14:50:05.160: INFO: Created: latency-svc-x9x2j
Dec 26 14:50:05.161: INFO: Got endpoints: latency-svc-x9x2j [1.852289927s]
Dec 26 14:50:05.227: INFO: Created: latency-svc-hlm6z
Dec 26 14:50:05.336: INFO: Created: latency-svc-2dkdj
Dec 26 14:50:05.338: INFO: Got endpoints: latency-svc-hlm6z [1.91785289s]
Dec 26 14:50:05.363: INFO: Got endpoints: latency-svc-2dkdj [1.878830904s]
Dec 26 14:50:05.397: INFO: Created: latency-svc-5bnc4
Dec 26 14:50:05.410: INFO: Got endpoints: latency-svc-5bnc4 [1.835819198s]
Dec 26 14:50:05.476: INFO: Created: latency-svc-p9njz
Dec 26 14:50:05.480: INFO: Got endpoints: latency-svc-p9njz [1.765285666s]
Dec 26 14:50:05.537: INFO: Created: latency-svc-kph9v
Dec 26 14:50:05.539: INFO: Got endpoints: latency-svc-kph9v [1.725000197s]
Dec 26 14:50:06.215: INFO: Created: latency-svc-52ns2
Dec 26 14:50:06.219: INFO: Got endpoints: latency-svc-52ns2 [2.265079119s]
Dec 26 14:50:06.422: INFO: Created: latency-svc-dkh6k
Dec 26 14:50:06.431: INFO: Got endpoints: latency-svc-dkh6k [2.393387753s]
Dec 26 14:50:06.516: INFO: Created: latency-svc-znnwq
Dec 26 14:50:06.640: INFO: Created: latency-svc-rdmhl
Dec 26 14:50:06.643: INFO: Got endpoints: latency-svc-znnwq [2.538584815s]
Dec 26 14:50:06.654: INFO: Got endpoints: latency-svc-rdmhl [2.378539994s]
Dec 26 14:50:06.717: INFO: Created: latency-svc-pk7gd
Dec 26 14:50:06.771: INFO: Got endpoints: latency-svc-pk7gd [2.313334699s]
Dec 26 14:50:06.818: INFO: Created: latency-svc-mwck7
Dec 26 14:50:06.824: INFO: Got endpoints: latency-svc-mwck7 [2.275360513s]
Dec 26 14:50:06.920: INFO: Created: latency-svc-lsp8n
Dec 26 14:50:06.933: INFO: Got endpoints: latency-svc-lsp8n [2.212422337s]
Dec 26 14:50:06.985: INFO: Created: latency-svc-87lrn
Dec 26 14:50:07.000: INFO: Got endpoints: latency-svc-87lrn [2.202146294s]
Dec 26 14:50:07.136: INFO: Created: latency-svc-grmnx
Dec 26 14:50:07.192: INFO: Got endpoints: latency-svc-grmnx [2.193230096s]
Dec 26 14:50:07.198: INFO: Created: latency-svc-vtmmj
Dec 26 14:50:07.344: INFO: Got endpoints: latency-svc-vtmmj [2.182981895s]
Dec 26 14:50:07.378: INFO: Created: latency-svc-2ctmr
Dec 26 14:50:07.388: INFO: Got endpoints: latency-svc-2ctmr [2.049739549s]
Dec 26 14:50:07.435: INFO: Created: latency-svc-rslsr
Dec 26 14:50:07.507: INFO: Got endpoints: latency-svc-rslsr [2.144202226s]
Dec 26 14:50:07.529: INFO: Created: latency-svc-vqnqs
Dec 26 14:50:07.529: INFO: Got endpoints: latency-svc-vqnqs [2.118832683s]
Dec 26 14:50:07.730: INFO: Created: latency-svc-95jdb
Dec 26 14:50:07.742: INFO: Got endpoints: latency-svc-95jdb [2.262407109s]
Dec 26 14:50:07.801: INFO: Created: latency-svc-xtvkn
Dec 26 14:50:07.816: INFO: Got endpoints: latency-svc-xtvkn [2.27709564s]
Dec 26 14:50:07.942: INFO: Created: latency-svc-qcbht
Dec 26 14:50:07.956: INFO: Got endpoints: latency-svc-qcbht [1.737071459s]
Dec 26 14:50:08.009: INFO: Created: latency-svc-b6bxb
Dec 26 14:50:08.023: INFO: Got endpoints: latency-svc-b6bxb [1.591522503s]
Dec 26 14:50:08.146: INFO: Created: latency-svc-9xpxw
Dec 26 14:50:08.163: INFO: Got endpoints: latency-svc-9xpxw [1.519959485s]
Dec 26 14:50:08.229: INFO: Created: latency-svc-6929d
Dec 26 14:50:08.229: INFO: Got endpoints: latency-svc-6929d [1.575294247s]
Dec 26 14:50:08.315: INFO: Created: latency-svc-fgwwc
Dec 26 14:50:08.332: INFO: Got endpoints: latency-svc-fgwwc [1.560956813s]
Dec 26 14:50:08.418: INFO: Created: latency-svc-hmdtr
Dec 26 14:50:08.530: INFO: Got endpoints: latency-svc-hmdtr [1.706295395s]
Dec 26 14:50:08.598: INFO: Created: latency-svc-l7xv6
Dec 26 14:50:08.608: INFO: Got endpoints: latency-svc-l7xv6 [1.674553843s]
Dec 26 14:50:08.716: INFO: Created: latency-svc-h2vhc
Dec 26 14:50:08.730: INFO: Got endpoints: latency-svc-h2vhc [1.729430035s]
Dec 26 14:50:08.797: INFO: Created: latency-svc-wqp4r
Dec 26 14:50:08.811: INFO: Got endpoints: latency-svc-wqp4r [1.618761687s]
Dec 26 14:50:08.913: INFO: Created: latency-svc-sz8p7
Dec 26 14:50:08.914: INFO: Got endpoints: latency-svc-sz8p7 [1.569385644s]
Dec 26 14:50:09.124: INFO: Created: latency-svc-cdwcp
Dec 26 14:50:09.136: INFO: Got endpoints: latency-svc-cdwcp [1.747459217s]
Dec 26 14:50:09.352: INFO: Created: latency-svc-4wl4b
Dec 26 14:50:09.354: INFO: Got endpoints: latency-svc-4wl4b [1.846712429s]
Dec 26 14:50:09.437: INFO: Created: latency-svc-86cw4
Dec 26 14:50:09.602: INFO: Got endpoints: latency-svc-86cw4 [2.072100532s]
Dec 26 14:50:09.650: INFO: Created: latency-svc-9lbm4
Dec 26 14:50:09.660: INFO: Got endpoints: latency-svc-9lbm4 [1.917965726s]
Dec 26 14:50:09.841: INFO: Created: latency-svc-x8s25
Dec 26 14:50:09.851: INFO: Got endpoints: latency-svc-x8s25 [2.034641409s]
Dec 26 14:50:10.043: INFO: Created: latency-svc-xrfl4
Dec 26 14:50:10.047: INFO: Got endpoints: latency-svc-xrfl4 [2.091073104s]
Dec 26 14:50:10.110: INFO: Created: latency-svc-bjzsx
Dec 26 14:50:10.119: INFO: Got endpoints: latency-svc-bjzsx [2.09603016s]
Dec 26 14:50:10.222: INFO: Created: latency-svc-2f8pz
Dec 26 14:50:10.227: INFO: Got endpoints: latency-svc-2f8pz [2.063555338s]
Dec 26 14:50:10.278: INFO: Created: latency-svc-mhgzf
Dec 26 14:50:10.415: INFO: Created: latency-svc-f79fd
Dec 26 14:50:10.422: INFO: Got endpoints: latency-svc-mhgzf [2.192620356s]
Dec 26 14:50:10.433: INFO: Got endpoints: latency-svc-f79fd [2.101216888s]
Dec 26 14:50:10.494: INFO: Created: latency-svc-2wwlf
Dec 26 14:50:10.626: INFO: Got endpoints: latency-svc-2wwlf [2.095133138s]
Dec 26 14:50:10.635: INFO: Created: latency-svc-pl4sv
Dec 26 14:50:10.652: INFO: Got endpoints: latency-svc-pl4sv [2.043780261s]
Dec 26 14:50:10.690: INFO: Created: latency-svc-kwz55
Dec 26 14:50:10.698: INFO: Got endpoints: latency-svc-kwz55 [1.968015429s]
Dec 26 14:50:10.812: INFO: Created: latency-svc-b74xt
Dec 26 14:50:10.841: INFO: Got endpoints: latency-svc-b74xt [2.030295877s]
Dec 26 14:50:10.876: INFO: Created: latency-svc-6q6mn
Dec 26 14:50:10.894: INFO: Got endpoints: latency-svc-6q6mn [1.980307051s]
Dec 26 14:50:11.022: INFO: Created: latency-svc-wclzk
Dec 26 14:50:11.030: INFO: Got endpoints: latency-svc-wclzk [1.89335202s]
Dec 26 14:50:11.077: INFO: Created: latency-svc-2sn8w
Dec 26 14:50:11.078: INFO: Got endpoints: latency-svc-2sn8w [1.723605498s]
Dec 26 14:50:11.231: INFO: Created: latency-svc-nps25
Dec 26 14:50:11.271: INFO: Created: latency-svc-5xlvq
Dec 26 14:50:11.275: INFO: Got endpoints: latency-svc-nps25 [1.672833722s]
Dec 26 14:50:11.312: INFO: Got endpoints: latency-svc-5xlvq [1.651868068s]
Dec 26 14:50:11.523: INFO: Created: latency-svc-4fkw8
Dec 26 14:50:11.549: INFO: Got endpoints: latency-svc-4fkw8 [1.697969904s]
Dec 26 14:50:11.582: INFO: Created: latency-svc-lwrnb
Dec 26 14:50:11.815: INFO: Got endpoints: latency-svc-lwrnb [1.767692922s]
Dec 26 14:50:11.832: INFO: Created: latency-svc-zfhll
Dec 26 14:50:11.868: INFO: Got endpoints: latency-svc-zfhll [1.748968452s]
Dec 26 14:50:11.889: INFO: Created: latency-svc-kxcqr
Dec 26 14:50:11.997: INFO: Got endpoints: latency-svc-kxcqr [1.770562102s]
Dec 26 14:50:12.007: INFO: Created: latency-svc-6kf27
Dec 26 14:50:12.013: INFO: Got endpoints: latency-svc-6kf27 [1.590453757s]
Dec 26 14:50:12.063: INFO: Created: latency-svc-kklzc
Dec 26 14:50:12.070: INFO: Got endpoints: latency-svc-kklzc [1.636848572s]
Dec 26 14:50:12.163: INFO: Created: latency-svc-sv6m4
Dec 26 14:50:12.170: INFO: Got endpoints: latency-svc-sv6m4 [1.54359228s]
Dec 26 14:50:12.209: INFO: Created: latency-svc-6l789
Dec 26 14:50:12.249: INFO: Got endpoints: latency-svc-6l789 [1.596230715s]
Dec 26 14:50:12.256: INFO: Created: latency-svc-9dnk2
Dec 26 14:50:12.350: INFO: Created: latency-svc-txr4l
Dec 26 14:50:12.356: INFO: Got endpoints: latency-svc-9dnk2 [1.657193942s]
Dec 26 14:50:12.366: INFO: Got endpoints: latency-svc-txr4l [1.524404801s]
Dec 26 14:50:12.439: INFO: Created: latency-svc-fb58x
Dec 26 14:50:12.548: INFO: Got endpoints: latency-svc-fb58x [1.654127259s]
Dec 26 14:50:12.562: INFO: Created: latency-svc-4xjnt
Dec 26 14:50:12.599: INFO: Created: latency-svc-vwdq5
Dec 26 14:50:12.599: INFO: Got endpoints: latency-svc-4xjnt [1.569372783s]
Dec 26 14:50:12.614: INFO: Got endpoints: latency-svc-vwdq5 [1.536103161s]
Dec 26 14:50:12.749: INFO: Created: latency-svc-dw6f6
Dec 26 14:50:12.789: INFO: Got endpoints: latency-svc-dw6f6 [1.514547392s]
Dec 26 14:50:12.792: INFO: Created: latency-svc-hcdm7
Dec 26 14:50:12.797: INFO: Got endpoints: latency-svc-hcdm7 [1.484722569s]
Dec 26 14:50:12.959: INFO: Created: latency-svc-qtr97
Dec 26 14:50:12.961: INFO: Got endpoints: latency-svc-qtr97 [1.411146532s]
Dec 26 14:50:13.002: INFO: Created: latency-svc-cw9fv
Dec 26 14:50:13.010: INFO: Got endpoints: latency-svc-cw9fv [1.195048081s]
Dec 26 14:50:13.051: INFO: Created: latency-svc-6tl48
Dec 26 14:50:13.126: INFO: Got endpoints: latency-svc-6tl48 [1.257542017s]
Dec 26 14:50:13.160: INFO: Created: latency-svc-6sgj2
Dec 26 14:50:13.169: INFO: Got endpoints: latency-svc-6sgj2 [1.171419874s]
Dec 26 14:50:13.228: INFO: Created: latency-svc-s6q6g
Dec 26 14:50:13.316: INFO: Got endpoints: latency-svc-s6q6g [1.302713721s]
Dec 26 14:50:13.376: INFO: Created: latency-svc-xs68n
Dec 26 14:50:13.383: INFO: Got endpoints: latency-svc-xs68n [1.312611856s]
Dec 26 14:50:13.471: INFO: Created: latency-svc-4cgc2
Dec 26 14:50:13.479: INFO: Got endpoints: latency-svc-4cgc2 [1.308992542s]
Dec 26 14:50:13.527: INFO: Created: latency-svc-8mvgt
Dec 26 14:50:13.540: INFO: Got endpoints: latency-svc-8mvgt [1.291426131s]
Dec 26 14:50:13.720: INFO: Created: latency-svc-7m5hv
Dec 26 14:50:13.724: INFO: Got endpoints: latency-svc-7m5hv [1.367957546s]
Dec 26 14:50:13.807: INFO: Created: latency-svc-vc8xl
Dec 26 14:50:13.812: INFO: Got endpoints: latency-svc-vc8xl [1.445403223s]
Dec 26 14:50:13.934: INFO: Created: latency-svc-v45qd
Dec 26 14:50:13.996: INFO: Got endpoints: latency-svc-v45qd [1.447779383s]
Dec 26 14:50:14.072: INFO: Created: latency-svc-4dx7h
Dec 26 14:50:14.093: INFO: Got endpoints: latency-svc-4dx7h [1.493601345s]
Dec 26 14:50:14.160: INFO: Created: latency-svc-k756p
Dec 26 14:50:14.286: INFO: Got endpoints: latency-svc-k756p [1.670879223s]
Dec 26 14:50:14.286: INFO: Created: latency-svc-5vst9
Dec 26 14:50:14.318: INFO: Got endpoints: latency-svc-5vst9 [1.528172195s]
Dec 26 14:50:14.326: INFO: Created: latency-svc-d4kmr
Dec 26 14:50:14.356: INFO: Got endpoints: latency-svc-d4kmr [1.558552258s]
Dec 26 14:50:14.433: INFO: Created: latency-svc-29m2x
Dec 26 14:50:14.459: INFO: Got endpoints: latency-svc-29m2x [1.498015501s]
Dec 26 14:50:14.470: INFO: Created: latency-svc-bbdk5
Dec 26 14:50:14.482: INFO: Got endpoints: latency-svc-bbdk5 [1.471822515s]
Dec 26 14:50:14.634: INFO: Created: latency-svc-7w9h8
Dec 26 14:50:14.640: INFO: Got endpoints: latency-svc-7w9h8 [1.512955035s]
Dec 26 14:50:14.705: INFO: Created: latency-svc-ql9wg
Dec 26 14:50:14.721: INFO: Got endpoints: latency-svc-ql9wg [1.551926141s]
Dec 26 14:50:14.822: INFO: Created: latency-svc-vrlt7
Dec 26 14:50:14.827: INFO: Got endpoints: latency-svc-vrlt7 [1.510840277s]
Dec 26 14:50:14.889: INFO: Created: latency-svc-4fx65
Dec 26 14:50:14.899: INFO: Got endpoints: latency-svc-4fx65 [1.515829914s]
Dec 26 14:50:15.097: INFO: Created: latency-svc-jm9kx
Dec 26 14:50:15.103: INFO: Got endpoints: latency-svc-jm9kx [1.623544017s]
Dec 26 14:50:15.232: INFO: Created: latency-svc-vfjqj
Dec 26 14:50:15.240: INFO: Got endpoints: latency-svc-vfjqj [1.699717061s]
Dec 26 14:50:15.305: INFO: Created: latency-svc-t4c44
Dec 26 14:50:15.318: INFO: Got endpoints: latency-svc-t4c44 [1.593821271s]
Dec 26 14:50:15.434: INFO: Created: latency-svc-6qjsp
Dec 26 14:50:15.440: INFO: Got endpoints: latency-svc-6qjsp [1.627719181s]
Dec 26 14:50:15.488: INFO: Created: latency-svc-w68fd
Dec 26 14:50:15.488: INFO: Got endpoints: latency-svc-w68fd [1.491395568s]
Dec 26 14:50:15.592: INFO: Created: latency-svc-wctrx
Dec 26 14:50:15.592: INFO: Got endpoints: latency-svc-wctrx [1.49851249s]
Dec 26 14:50:15.650: INFO: Created: latency-svc-vgvqp
Dec 26 14:50:15.662: INFO: Got endpoints: latency-svc-vgvqp [1.376230669s]
Dec 26 14:50:15.790: INFO: Created: latency-svc-lrmbx
Dec 26 14:50:15.832: INFO: Got endpoints: latency-svc-lrmbx [1.513485459s]
Dec 26 14:50:15.851: INFO: Created: latency-svc-x2nmd
Dec 26 14:50:15.854: INFO: Got endpoints: latency-svc-x2nmd [1.497696548s]
Dec 26 14:50:16.001: INFO: Created: latency-svc-djcmm
Dec 26 14:50:16.016: INFO: Got endpoints: latency-svc-djcmm [1.556611456s]
Dec 26 14:50:16.417: INFO: Created: latency-svc-n6dw8
Dec 26 14:50:16.476: INFO: Got endpoints: latency-svc-n6dw8 [1.993653002s]
Dec 26 14:50:16.490: INFO: Created: latency-svc-qd2qd
Dec 26 14:50:16.503: INFO: Got endpoints: latency-svc-qd2qd [1.863044364s]
Dec 26 14:50:16.695: INFO: Created: latency-svc-5knnw
Dec 26 14:50:16.731: INFO: Got endpoints: latency-svc-5knnw [2.00917549s]
Dec 26 14:50:16.895: INFO: Created: latency-svc-gjcv9
Dec 26 14:50:16.949: INFO: Created: latency-svc-w2vv2
Dec 26 14:50:16.952: INFO: Got endpoints: latency-svc-gjcv9 [2.125427334s]
Dec 26 14:50:16.960: INFO: Got endpoints: latency-svc-w2vv2 [2.060790816s]
Dec 26 14:50:17.055: INFO: Created: latency-svc-xblns
Dec 26 14:50:17.075: INFO: Got endpoints: latency-svc-xblns [1.972627515s]
Dec 26 14:50:17.126: INFO: Created: latency-svc-j5sqs
Dec 26 14:50:17.193: INFO: Created: latency-svc-sr6gs
Dec 26 14:50:17.196: INFO: Got endpoints: latency-svc-j5sqs [1.955991847s]
Dec 26 14:50:17.203: INFO: Got endpoints: latency-svc-sr6gs [1.884781865s]
Dec 26 14:50:17.247: INFO: Created: latency-svc-4brtg
Dec 26 14:50:17.271: INFO: Got endpoints: latency-svc-4brtg [1.830854327s]
Dec 26 14:50:17.282: INFO: Created: latency-svc-mlzrf
Dec 26 14:50:17.350: INFO: Got endpoints: latency-svc-mlzrf [1.861460022s]
Dec 26 14:50:17.406: INFO: Created: latency-svc-r9gmk
Dec 26 14:50:17.414: INFO: Got endpoints: latency-svc-r9gmk [1.821851208s]
Dec 26 14:50:17.502: INFO: Created: latency-svc-f9n8j
Dec 26 14:50:17.506: INFO: Got endpoints: latency-svc-f9n8j [1.84308145s]
Dec 26 14:50:17.556: INFO: Created: latency-svc-gpl9v
Dec 26 14:50:17.574: INFO: Got endpoints: latency-svc-gpl9v [1.742475508s]
Dec 26 14:50:17.659: INFO: Created: latency-svc-nnz2f
Dec 26 14:50:17.666: INFO: Got endpoints: latency-svc-nnz2f [1.812632743s]
Dec 26 14:50:17.709: INFO: Created: latency-svc-p5hlx
Dec 26 14:50:17.719: INFO: Got endpoints: latency-svc-p5hlx [1.703126041s]
Dec 26 14:50:17.862: INFO: Created: latency-svc-5pjdl
Dec 26 14:50:17.882: INFO: Got endpoints: latency-svc-5pjdl [1.405109895s]
Dec 26 14:50:17.958: INFO: Created: latency-svc-bvm8n
Dec 26 14:50:18.075: INFO: Got endpoints: latency-svc-bvm8n [1.571680016s]
Dec 26 14:50:18.090: INFO: Created: latency-svc-m2dlv
Dec 26 14:50:18.099: INFO: Got endpoints: latency-svc-m2dlv [1.368119995s]
Dec 26 14:50:18.165: INFO: Created: latency-svc-7r4kq
Dec 26 14:50:18.232: INFO: Got endpoints: latency-svc-7r4kq [1.280220448s]
Dec 26 14:50:18.248: INFO: Created: latency-svc-c4g9f
Dec 26 14:50:18.277: INFO: Got endpoints: latency-svc-c4g9f [1.317347594s]
Dec 26 14:50:18.314: INFO: Created: latency-svc-jntvk
Dec 26 14:50:18.380: INFO: Got endpoints: latency-svc-jntvk [1.304169797s]
Dec 26 14:50:18.394: INFO: Created: latency-svc-lk77w
Dec 26 14:50:18.399: INFO: Got endpoints: latency-svc-lk77w [1.202476363s]
Dec 26 14:50:18.446: INFO: Created: latency-svc-4p2wh
Dec 26 14:50:18.458: INFO: Got endpoints: latency-svc-4p2wh [1.254413778s]
Dec 26 14:50:18.548: INFO: Created: latency-svc-q4zz6
Dec 26 14:50:18.591: INFO: Got endpoints: latency-svc-q4zz6 [1.320331644s]
Dec 26 14:50:18.607: INFO: Created: latency-svc-7grvq
Dec 26 14:50:18.634: INFO: Got endpoints: latency-svc-7grvq [1.283997396s]
Dec 26 14:50:18.756: INFO: Created: latency-svc-j66xz
Dec 26 14:50:18.803: INFO: Created: latency-svc-ff2zc
Dec 26 14:50:18.804: INFO: Got endpoints: latency-svc-j66xz [1.390006573s]
Dec 26 14:50:18.880: INFO: Got endpoints: latency-svc-ff2zc [1.374703781s]
Dec 26 14:50:18.924: INFO: Created: latency-svc-76hrx
Dec 26 14:50:18.924: INFO: Got endpoints: latency-svc-76hrx [1.34965483s]
Dec 26 14:50:19.026: INFO: Created: latency-svc-d44r9
Dec 26 14:50:19.090: INFO: Got endpoints: latency-svc-d44r9 [1.423305013s]
Dec 26 14:50:19.095: INFO: Created: latency-svc-9c5pr
Dec 26 14:50:19.183: INFO: Got endpoints: latency-svc-9c5pr [1.46304945s]
Dec 26 14:50:19.204: INFO: Created: latency-svc-fgzzn
Dec 26 14:50:19.205: INFO: Got endpoints: latency-svc-fgzzn [1.323434505s]
Dec 26 14:50:19.255: INFO: Created: latency-svc-dkk55
Dec 26 14:50:19.260: INFO: Got endpoints: latency-svc-dkk55 [1.185103572s]
Dec 26 14:50:19.500: INFO: Created: latency-svc-527v5
Dec 26 14:50:19.530: INFO: Got endpoints: latency-svc-527v5 [1.43093089s]
Dec 26 14:50:19.531: INFO: Latencies: [187.506755ms 193.136732ms 314.481537ms 385.326793ms 529.46335ms 599.118224ms 687.458964ms 750.823136ms 845.62138ms 899.596472ms 1.006349601s 1.062129192s 1.171419874s 1.185103572s 1.195048081s 1.202476363s 1.224416186s 1.23675026s 1.254413778s 1.257542017s 1.257782469s 1.26087846s 1.266434904s 1.275172304s 1.279316941s 1.280220448s 1.282648886s 1.283997396s 1.291426131s 1.302713721s 1.304169797s 1.308992542s 1.312611856s 1.316408417s 1.317347594s 1.320331644s 1.323434505s 1.325529135s 1.33119115s 1.335469745s 1.34965483s 1.355268474s 1.367957546s 1.368119995s 1.368621476s 1.374590098s 1.374703781s 1.375650269s 1.376230669s 1.386544739s 1.390006573s 1.39164542s 1.395759225s 1.405109895s 1.411146532s 1.412026549s 1.423305013s 1.43093089s 1.445403223s 1.447779383s 1.458367459s 1.46304945s 1.464531212s 1.471822515s 1.484722569s 1.491395568s 1.493601345s 1.497696548s 1.498015501s 1.49851249s 1.510840277s 1.512955035s 1.513485459s 1.514547392s 1.515829914s 1.519959485s 1.524404801s 1.528172195s 1.536103161s 1.537896789s 1.538640522s 1.54359228s 1.550491114s 1.551926141s 1.556611456s 1.558552258s 1.560956813s 1.568197487s 1.569372783s 1.569385644s 1.571680016s 1.575294247s 1.57779481s 1.590453757s 1.591522503s 1.593821271s 1.596230715s 1.611170679s 1.618761687s 1.619268134s 1.623544017s 1.627719181s 1.636848572s 1.647120329s 1.651868068s 1.654127259s 1.654219603s 1.654301149s 1.65511262s 1.657193942s 1.659809445s 1.670879223s 1.672833722s 1.674553843s 1.679603493s 1.6856181s 1.694429937s 1.695323613s 1.696174759s 1.697969904s 1.699717061s 1.703126041s 1.704129061s 1.706295395s 1.709309159s 1.718443774s 1.7187379s 1.720581227s 1.723605498s 1.724063819s 1.725000197s 1.729430035s 1.72968557s 1.737071459s 1.739217118s 1.739905543s 1.739920395s 1.742475508s 1.744402036s 1.744939937s 1.747459217s 1.748968452s 1.752806043s 1.758842001s 1.765285666s 1.767692922s 1.770562102s 1.774238054s 1.778633002s 1.791492118s 1.792495252s 1.795539791s 1.800538704s 1.812632743s 1.821851208s 1.830854327s 1.835819198s 1.84308145s 1.846712429s 1.852289927s 1.861460022s 1.863044364s 1.878830904s 1.884781865s 1.89335202s 1.91785289s 1.917965726s 1.955991847s 1.968015429s 1.972627515s 1.980307051s 1.993653002s 2.00917549s 2.030295877s 2.034641409s 2.043780261s 2.049739549s 2.060790816s 2.063555338s 2.072100532s 2.091073104s 2.095133138s 2.09603016s 2.101216888s 2.118832683s 2.125427334s 2.144202226s 2.182981895s 2.192620356s 2.193230096s 2.202146294s 2.212422337s 2.262407109s 2.265079119s 2.275360513s 2.27709564s 2.313334699s 2.378539994s 2.393387753s 2.538584815s]
Dec 26 14:50:19.532: INFO: 50 %ile: 1.623544017s
Dec 26 14:50:19.532: INFO: 90 %ile: 2.091073104s
Dec 26 14:50:19.532: INFO: 99 %ile: 2.393387753s
Dec 26 14:50:19.532: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:50:19.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-4188" for this suite.
Dec 26 14:51:01.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:51:01.708: INFO: namespace svc-latency-4188 deletion completed in 42.166765752s

• [SLOW TEST:73.619 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
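
Note: this test creates a burst of Services against a shared backend and, for each one, measures the time from Service creation to the first populated Endpoints object (the "Created:" / "Got endpoints:" pairs above). The 50/90/99 %ile figures are read off the sorted list of the 200 samples printed in the "Latencies:" line. A minimal sketch of the per-iteration Service, with hypothetical names and labels:

apiVersion: v1
kind: Service
metadata:
  name: latency-svc-example      # hypothetical; the suite generates random suffixes
spec:
  selector:
    name: svc-latency-rc         # assumed label on the test's backend pods
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80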
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:51:01.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 26 14:51:01.883: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1073e8bb-90a0-42c1-bb62-b5f42a1bda9f" in namespace "downward-api-3001" to be "success or failure"
Dec 26 14:51:01.931: INFO: Pod "downwardapi-volume-1073e8bb-90a0-42c1-bb62-b5f42a1bda9f": Phase="Pending", Reason="", readiness=false. Elapsed: 47.750478ms
Dec 26 14:51:03.942: INFO: Pod "downwardapi-volume-1073e8bb-90a0-42c1-bb62-b5f42a1bda9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058232346s
Dec 26 14:51:06.070: INFO: Pod "downwardapi-volume-1073e8bb-90a0-42c1-bb62-b5f42a1bda9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187037447s
Dec 26 14:51:08.078: INFO: Pod "downwardapi-volume-1073e8bb-90a0-42c1-bb62-b5f42a1bda9f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.194958347s
Dec 26 14:51:10.095: INFO: Pod "downwardapi-volume-1073e8bb-90a0-42c1-bb62-b5f42a1bda9f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.212035089s
Dec 26 14:51:12.105: INFO: Pod "downwardapi-volume-1073e8bb-90a0-42c1-bb62-b5f42a1bda9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.221314719s
STEP: Saw pod success
Dec 26 14:51:12.105: INFO: Pod "downwardapi-volume-1073e8bb-90a0-42c1-bb62-b5f42a1bda9f" satisfied condition "success or failure"
Dec 26 14:51:12.109: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1073e8bb-90a0-42c1-bb62-b5f42a1bda9f container client-container: 
STEP: delete the pod
Dec 26 14:51:12.161: INFO: Waiting for pod downwardapi-volume-1073e8bb-90a0-42c1-bb62-b5f42a1bda9f to disappear
Dec 26 14:51:12.259: INFO: Pod downwardapi-volume-1073e8bb-90a0-42c1-bb62-b5f42a1bda9f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:51:12.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3001" for this suite.
Dec 26 14:51:18.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:51:18.446: INFO: namespace downward-api-3001 deletion completed in 6.176662175s

• [SLOW TEST:16.738 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
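
Note: the pod above mounts a downwardAPI volume whose item carries an explicit file mode, and the test passes once the container exits 0 (the "success or failure" condition). A sketch of such a pod; the mode value and the mounttest flag are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical; the suite appends a UUID
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0
    args: ["--file_mode=/etc/podinfo/podname"]   # assumed flag: print the file's permission bits
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                   # the explicit per-item mode under test (value assumed)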
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:51:18.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 26 14:51:18.573: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0857c911-5752-4e1f-87cb-64c0e68dfee3" in namespace "projected-1366" to be "success or failure"
Dec 26 14:51:18.596: INFO: Pod "downwardapi-volume-0857c911-5752-4e1f-87cb-64c0e68dfee3": Phase="Pending", Reason="", readiness=false. Elapsed: 22.271012ms
Dec 26 14:51:20.607: INFO: Pod "downwardapi-volume-0857c911-5752-4e1f-87cb-64c0e68dfee3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033724129s
Dec 26 14:51:22.621: INFO: Pod "downwardapi-volume-0857c911-5752-4e1f-87cb-64c0e68dfee3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047363296s
Dec 26 14:51:24.637: INFO: Pod "downwardapi-volume-0857c911-5752-4e1f-87cb-64c0e68dfee3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063671464s
Dec 26 14:51:26.655: INFO: Pod "downwardapi-volume-0857c911-5752-4e1f-87cb-64c0e68dfee3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.081108517s
STEP: Saw pod success
Dec 26 14:51:26.655: INFO: Pod "downwardapi-volume-0857c911-5752-4e1f-87cb-64c0e68dfee3" satisfied condition "success or failure"
Dec 26 14:51:26.660: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0857c911-5752-4e1f-87cb-64c0e68dfee3 container client-container: 
STEP: delete the pod
Dec 26 14:51:26.802: INFO: Waiting for pod downwardapi-volume-0857c911-5752-4e1f-87cb-64c0e68dfee3 to disappear
Dec 26 14:51:26.813: INFO: Pod downwardapi-volume-0857c911-5752-4e1f-87cb-64c0e68dfee3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:51:26.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1366" for this suite.
Dec 26 14:51:32.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:51:32.960: INFO: namespace projected-1366 deletion completed in 6.141612275s

• [SLOW TEST:14.514 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
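
Note: here the downward API is consumed through a projected volume rather than a dedicated downwardAPI volume, exposing the container's own CPU request as a file. A sketch of the relevant spec, with names, request value, and divisor assumed:

apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0
    args: ["--file_content=/etc/podinfo/cpu_request"]   # assumed flag: cat the projected file
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m            # report the request in millicores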
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:51:32.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1226 14:51:37.709301       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 26 14:51:37.709: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:51:37.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4819" for this suite.
Dec 26 14:51:45.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:51:45.912: INFO: namespace gc-4819 deletion completed in 8.196818428s

• [SLOW TEST:12.951 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
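
Note: with a cascading (non-orphaning) delete, the garbage collector walks ownerReferences and removes the Deployment's ReplicaSet and Pods; the repeated "expected 0 rs/pods, got ..." STEPs above are the test polling until the collector catches up. A rough kubectl equivalent of what the test requests (flag spelling depends on client version, and the deployment name is hypothetical):

kubectl delete deployment test-deploy --cascade=background   # kubectl >= 1.20 spelling; older clients used --cascade=true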
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:51:45.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-9841/secret-test-5b183c60-3cea-4eea-bbf3-0005aab37596
STEP: Creating a pod to test consume secrets
Dec 26 14:51:46.080: INFO: Waiting up to 5m0s for pod "pod-configmaps-73a18fac-5fc8-4d3e-86e5-d4746b4dbe3c" in namespace "secrets-9841" to be "success or failure"
Dec 26 14:51:46.098: INFO: Pod "pod-configmaps-73a18fac-5fc8-4d3e-86e5-d4746b4dbe3c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.459085ms
Dec 26 14:51:48.108: INFO: Pod "pod-configmaps-73a18fac-5fc8-4d3e-86e5-d4746b4dbe3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027877212s
Dec 26 14:51:50.128: INFO: Pod "pod-configmaps-73a18fac-5fc8-4d3e-86e5-d4746b4dbe3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048327247s
Dec 26 14:51:52.136: INFO: Pod "pod-configmaps-73a18fac-5fc8-4d3e-86e5-d4746b4dbe3c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056540592s
Dec 26 14:51:54.145: INFO: Pod "pod-configmaps-73a18fac-5fc8-4d3e-86e5-d4746b4dbe3c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065717327s
Dec 26 14:51:56.154: INFO: Pod "pod-configmaps-73a18fac-5fc8-4d3e-86e5-d4746b4dbe3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074752842s
STEP: Saw pod success
Dec 26 14:51:56.155: INFO: Pod "pod-configmaps-73a18fac-5fc8-4d3e-86e5-d4746b4dbe3c" satisfied condition "success or failure"
Dec 26 14:51:56.160: INFO: Trying to get logs from node iruya-node pod pod-configmaps-73a18fac-5fc8-4d3e-86e5-d4746b4dbe3c container env-test: 
STEP: delete the pod
Dec 26 14:51:56.232: INFO: Waiting for pod pod-configmaps-73a18fac-5fc8-4d3e-86e5-d4746b4dbe3c to disappear
Dec 26 14:51:56.245: INFO: Pod pod-configmaps-73a18fac-5fc8-4d3e-86e5-d4746b4dbe3c no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:51:56.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9841" for this suite.
Dec 26 14:52:02.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:52:02.453: INFO: namespace secrets-9841 deletion completed in 6.197921034s

• [SLOW TEST:16.540 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
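
Note: this secret is consumed through the container's environment rather than a volume (the suite still names the pod pod-configmaps-*). A minimal sketch with hypothetical names, key, and env variable:

apiVersion: v1
kind: Secret
metadata:
  name: secret-test-example
type: Opaque
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-env-example
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test-example
          key: data-1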
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:52:02.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1226 14:52:33.158300       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 26 14:52:33.158: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:52:33.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2560" for this suite.
Dec 26 14:52:39.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:52:40.489: INFO: namespace gc-2560 deletion completed in 7.328937907s

• [SLOW TEST:38.036 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
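
Note: this is the counterpart of the earlier garbage-collector test: with PropagationPolicy=Orphan the ReplicaSet must survive its Deployment, and the 30-second wait above is the test confirming nothing gets deleted by mistake. A rough kubectl equivalent (flag spelling depends on client version, deployment name hypothetical):

kubectl delete deployment test-deploy --cascade=orphan       # kubectl >= 1.20; older clients: --cascade=false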
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:52:40.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 26 14:52:40.697: INFO: Waiting up to 5m0s for pod "downwardapi-volume-441ca541-b2a4-45d8-b453-e2ab00034a69" in namespace "projected-6864" to be "success or failure"
Dec 26 14:52:40.754: INFO: Pod "downwardapi-volume-441ca541-b2a4-45d8-b453-e2ab00034a69": Phase="Pending", Reason="", readiness=false. Elapsed: 56.446221ms
Dec 26 14:52:43.090: INFO: Pod "downwardapi-volume-441ca541-b2a4-45d8-b453-e2ab00034a69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.392311585s
Dec 26 14:52:45.101: INFO: Pod "downwardapi-volume-441ca541-b2a4-45d8-b453-e2ab00034a69": Phase="Pending", Reason="", readiness=false. Elapsed: 4.403436974s
Dec 26 14:52:47.109: INFO: Pod "downwardapi-volume-441ca541-b2a4-45d8-b453-e2ab00034a69": Phase="Pending", Reason="", readiness=false. Elapsed: 6.411321822s
Dec 26 14:52:49.116: INFO: Pod "downwardapi-volume-441ca541-b2a4-45d8-b453-e2ab00034a69": Phase="Pending", Reason="", readiness=false. Elapsed: 8.418870854s
Dec 26 14:52:51.133: INFO: Pod "downwardapi-volume-441ca541-b2a4-45d8-b453-e2ab00034a69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.435392458s
STEP: Saw pod success
Dec 26 14:52:51.133: INFO: Pod "downwardapi-volume-441ca541-b2a4-45d8-b453-e2ab00034a69" satisfied condition "success or failure"
Dec 26 14:52:51.136: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-441ca541-b2a4-45d8-b453-e2ab00034a69 container client-container: 
STEP: delete the pod
Dec 26 14:52:51.240: INFO: Waiting for pod downwardapi-volume-441ca541-b2a4-45d8-b453-e2ab00034a69 to disappear
Dec 26 14:52:51.250: INFO: Pod downwardapi-volume-441ca541-b2a4-45d8-b453-e2ab00034a69 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:52:51.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6864" for this suite.
Dec 26 14:52:57.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:52:57.569: INFO: namespace projected-6864 deletion completed in 6.315895995s

• [SLOW TEST:17.080 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
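
Note: same mechanism as the cpu-request test above; only the projected downwardAPI item changes. A sketch of that item, with path and divisor assumed:

- path: memory_request
  resourceFieldRef:
    containerName: client-container
    resource: requests.memory
    divisor: 1Mi                     # report the request in MiB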
------------------------------
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:52:57.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 26 14:52:57.737: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 26 14:52:57.752: INFO: Number of nodes with available pods: 0
Dec 26 14:52:57.752: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 26 14:52:57.814: INFO: Number of nodes with available pods: 0
Dec 26 14:52:57.814: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:52:58.824: INFO: Number of nodes with available pods: 0
Dec 26 14:52:58.824: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:52:59.828: INFO: Number of nodes with available pods: 0
Dec 26 14:52:59.828: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:53:00.828: INFO: Number of nodes with available pods: 0
Dec 26 14:53:00.829: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:53:01.821: INFO: Number of nodes with available pods: 0
Dec 26 14:53:01.821: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:53:02.833: INFO: Number of nodes with available pods: 0
Dec 26 14:53:02.833: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:53:03.833: INFO: Number of nodes with available pods: 0
Dec 26 14:53:03.833: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:53:04.824: INFO: Number of nodes with available pods: 0
Dec 26 14:53:04.824: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:53:05.836: INFO: Number of nodes with available pods: 1
Dec 26 14:53:05.836: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 26 14:53:05.960: INFO: Number of nodes with available pods: 1
Dec 26 14:53:05.960: INFO: Number of running nodes: 0, number of available pods: 1
Dec 26 14:53:06.968: INFO: Number of nodes with available pods: 0
Dec 26 14:53:06.968: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 26 14:53:06.983: INFO: Number of nodes with available pods: 0
Dec 26 14:53:06.983: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:53:07.997: INFO: Number of nodes with available pods: 0
Dec 26 14:53:07.997: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:53:09.000: INFO: Number of nodes with available pods: 0
Dec 26 14:53:09.000: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:53:10.004: INFO: Number of nodes with available pods: 0
Dec 26 14:53:10.004: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:53:11.003: INFO: Number of nodes with available pods: 0
Dec 26 14:53:11.003: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:53:12.001: INFO: Number of nodes with available pods: 0
Dec 26 14:53:12.001: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:53:12.992: INFO: Number of nodes with available pods: 0
Dec 26 14:53:12.992: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:53:13.992: INFO: Number of nodes with available pods: 0
Dec 26 14:53:13.992: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:53:14.991: INFO: Number of nodes with available pods: 0
Dec 26 14:53:14.991: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:53:15.993: INFO: Number of nodes with available pods: 0
Dec 26 14:53:15.993: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:53:16.990: INFO: Number of nodes with available pods: 0
Dec 26 14:53:16.991: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:53:17.994: INFO: Number of nodes with available pods: 0
Dec 26 14:53:17.994: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:53:18.994: INFO: Number of nodes with available pods: 0
Dec 26 14:53:18.994: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:53:20.005: INFO: Number of nodes with available pods: 0
Dec 26 14:53:20.005: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:53:21.009: INFO: Number of nodes with available pods: 0
Dec 26 14:53:21.009: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:53:21.991: INFO: Number of nodes with available pods: 0
Dec 26 14:53:21.992: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:53:22.991: INFO: Number of nodes with available pods: 0
Dec 26 14:53:22.991: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:53:23.997: INFO: Number of nodes with available pods: 0
Dec 26 14:53:23.997: INFO: Node iruya-node is running more than one daemon pod
Dec 26 14:53:24.993: INFO: Number of nodes with available pods: 1
Dec 26 14:53:24.993: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9656, will wait for the garbage collector to delete the pods
Dec 26 14:53:25.069: INFO: Deleting DaemonSet.extensions daemon-set took: 8.438469ms
Dec 26 14:53:25.470: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.463057ms
Dec 26 14:53:30.775: INFO: Number of nodes with available pods: 0
Dec 26 14:53:30.775: INFO: Number of running nodes: 0, number of available pods: 0
Dec 26 14:53:30.778: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9656/daemonsets","resourceVersion":"18157380"},"items":null}

Dec 26 14:53:30.783: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9656/pods","resourceVersion":"18157380"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:53:30.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9656" for this suite.
Dec 26 14:53:36.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:53:37.001: INFO: namespace daemonsets-9656 deletion completed in 6.179062723s

• [SLOW TEST:39.431 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
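
Two notes on the log above. First, the recurring "Node iruya-node is running more than one daemon pod" line appears to be the framework's generic poll-failure message, emitted whenever a node does not yet have exactly one ready daemon pod, including when it has zero. Second, the test drives scheduling purely through labels: the DaemonSet carries a nodeSelector, so pods appear only once a node is labeled to match, vanish when the label flips, and return after the selector is updated. A sketch of the shape of such a DaemonSet, with label key, image, and names assumed:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate              # the strategy the test switches to mid-run
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue                  # assumed key; the test relabels blue -> green
      containers:
      - name: app
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # illustrative image

kubectl label node iruya-node color=blue --overwrite   # makes the daemon pod schedulable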
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:53:37.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 26 14:53:37.141: INFO: Waiting up to 5m0s for pod "pod-dc1f3803-2e1b-49b1-b9c3-ac6d15eb0ac9" in namespace "emptydir-2530" to be "success or failure"
Dec 26 14:53:37.163: INFO: Pod "pod-dc1f3803-2e1b-49b1-b9c3-ac6d15eb0ac9": Phase="Pending", Reason="", readiness=false. Elapsed: 21.475604ms
Dec 26 14:53:39.172: INFO: Pod "pod-dc1f3803-2e1b-49b1-b9c3-ac6d15eb0ac9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030502265s
Dec 26 14:53:41.177: INFO: Pod "pod-dc1f3803-2e1b-49b1-b9c3-ac6d15eb0ac9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035801142s
Dec 26 14:53:43.182: INFO: Pod "pod-dc1f3803-2e1b-49b1-b9c3-ac6d15eb0ac9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040666098s
Dec 26 14:53:45.193: INFO: Pod "pod-dc1f3803-2e1b-49b1-b9c3-ac6d15eb0ac9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051790563s
STEP: Saw pod success
Dec 26 14:53:45.193: INFO: Pod "pod-dc1f3803-2e1b-49b1-b9c3-ac6d15eb0ac9" satisfied condition "success or failure"
Dec 26 14:53:45.198: INFO: Trying to get logs from node iruya-node pod pod-dc1f3803-2e1b-49b1-b9c3-ac6d15eb0ac9 container test-container: 
STEP: delete the pod
Dec 26 14:53:45.390: INFO: Waiting for pod pod-dc1f3803-2e1b-49b1-b9c3-ac6d15eb0ac9 to disappear
Dec 26 14:53:45.422: INFO: Pod pod-dc1f3803-2e1b-49b1-b9c3-ac6d15eb0ac9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:53:45.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2530" for this suite.
Dec 26 14:53:51.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:53:51.582: INFO: namespace emptydir-2530 deletion completed in 6.147365656s

• [SLOW TEST:14.580 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
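
Note: the (non-root,0644,default) triple in the spec name encodes the test matrix: run as a non-root UID, expect 0644 permissions on the written file, and use the default emptyDir medium (node disk rather than tmpfs). A sketch of an equivalent pod, with UID and paths assumed:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # assumed non-root UID
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo content > /cache/f && chmod 0644 /cache/f && ls -l /cache/f"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}                     # default medium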
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:53:51.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Dec 26 14:53:51.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1895'
Dec 26 14:53:52.188: INFO: stderr: ""
Dec 26 14:53:52.188: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 26 14:53:52.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1895'
Dec 26 14:53:52.417: INFO: stderr: ""
Dec 26 14:53:52.417: INFO: stdout: "update-demo-nautilus-hrcz7 update-demo-nautilus-lp8nm "
Dec 26 14:53:52.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hrcz7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1895'
Dec 26 14:53:52.575: INFO: stderr: ""
Dec 26 14:53:52.575: INFO: stdout: ""
Dec 26 14:53:52.575: INFO: update-demo-nautilus-hrcz7 is created but not running
Dec 26 14:53:57.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1895'
Dec 26 14:53:57.789: INFO: stderr: ""
Dec 26 14:53:57.789: INFO: stdout: "update-demo-nautilus-hrcz7 update-demo-nautilus-lp8nm "
Dec 26 14:53:57.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hrcz7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1895'
Dec 26 14:53:57.968: INFO: stderr: ""
Dec 26 14:53:57.968: INFO: stdout: ""
Dec 26 14:53:57.968: INFO: update-demo-nautilus-hrcz7 is created but not running
Dec 26 14:54:02.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1895'
Dec 26 14:54:03.151: INFO: stderr: ""
Dec 26 14:54:03.151: INFO: stdout: "update-demo-nautilus-hrcz7 update-demo-nautilus-lp8nm "
Dec 26 14:54:03.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hrcz7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1895'
Dec 26 14:54:03.265: INFO: stderr: ""
Dec 26 14:54:03.265: INFO: stdout: ""
Dec 26 14:54:03.265: INFO: update-demo-nautilus-hrcz7 is created but not running
Dec 26 14:54:08.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1895'
Dec 26 14:54:08.429: INFO: stderr: ""
Dec 26 14:54:08.429: INFO: stdout: "update-demo-nautilus-hrcz7 update-demo-nautilus-lp8nm "
Dec 26 14:54:08.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hrcz7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1895'
Dec 26 14:54:08.573: INFO: stderr: ""
Dec 26 14:54:08.574: INFO: stdout: "true"
Dec 26 14:54:08.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hrcz7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1895'
Dec 26 14:54:08.692: INFO: stderr: ""
Dec 26 14:54:08.692: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 26 14:54:08.692: INFO: validating pod update-demo-nautilus-hrcz7
Dec 26 14:54:08.700: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 26 14:54:08.700: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 26 14:54:08.700: INFO: update-demo-nautilus-hrcz7 is verified up and running
Dec 26 14:54:08.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lp8nm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1895'
Dec 26 14:54:08.801: INFO: stderr: ""
Dec 26 14:54:08.801: INFO: stdout: "true"
Dec 26 14:54:08.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lp8nm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1895'
Dec 26 14:54:08.899: INFO: stderr: ""
Dec 26 14:54:08.900: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 26 14:54:08.900: INFO: validating pod update-demo-nautilus-lp8nm
Dec 26 14:54:08.922: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 26 14:54:08.922: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 26 14:54:08.922: INFO: update-demo-nautilus-lp8nm is verified up and running
STEP: rolling-update to new replication controller
Dec 26 14:54:08.924: INFO: scanned /root for discovery docs: 
Dec 26 14:54:08.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-1895'
Dec 26 14:54:41.802: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 26 14:54:41.803: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 26 14:54:41.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1895'
Dec 26 14:54:43.950: INFO: stderr: ""
Dec 26 14:54:43.950: INFO: stdout: "update-demo-kitten-m4qn6 update-demo-kitten-sn2qt update-demo-nautilus-lp8nm "
STEP: Replicas for name=update-demo: expected=2 actual=3
Dec 26 14:54:48.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1895'
Dec 26 14:54:49.112: INFO: stderr: ""
Dec 26 14:54:49.112: INFO: stdout: "update-demo-kitten-m4qn6 update-demo-kitten-sn2qt "
Dec 26 14:54:49.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-m4qn6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1895'
Dec 26 14:54:49.240: INFO: stderr: ""
Dec 26 14:54:49.240: INFO: stdout: "true"
Dec 26 14:54:49.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-m4qn6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1895'
Dec 26 14:54:49.347: INFO: stderr: ""
Dec 26 14:54:49.347: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 26 14:54:49.347: INFO: validating pod update-demo-kitten-m4qn6
Dec 26 14:54:49.387: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 26 14:54:49.387: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 26 14:54:49.387: INFO: update-demo-kitten-m4qn6 is verified up and running
Dec 26 14:54:49.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sn2qt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1895'
Dec 26 14:54:49.549: INFO: stderr: ""
Dec 26 14:54:49.549: INFO: stdout: "true"
Dec 26 14:54:49.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sn2qt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1895'
Dec 26 14:54:49.656: INFO: stderr: ""
Dec 26 14:54:49.656: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 26 14:54:49.656: INFO: validating pod update-demo-kitten-sn2qt
Dec 26 14:54:49.679: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 26 14:54:49.679: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 26 14:54:49.680: INFO: update-demo-kitten-sn2qt is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:54:49.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1895" for this suite.
Dec 26 14:55:19.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:55:19.959: INFO: namespace kubectl-1895 deletion completed in 30.273761179s

• [SLOW TEST:88.376 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
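
Note: as the stderr above says, kubectl rolling-update is deprecated; it only works on replication controllers and performs the scale-down/rename dance visible in the stdout. The Deployment-era equivalent of the same nautilus-to-kitten update would be roughly:

kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
kubectl rollout status deployment/update-demo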
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:55:19.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 26 14:55:20.041: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 14.756725ms)
Dec 26 14:55:20.098: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 56.597162ms)
Dec 26 14:55:20.105: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.686177ms)
Dec 26 14:55:20.111: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.642652ms)
Dec 26 14:55:20.116: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.956597ms)
Dec 26 14:55:20.124: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.370488ms)
Dec 26 14:55:20.131: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.576809ms)
Dec 26 14:55:20.139: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.235455ms)
Dec 26 14:55:20.143: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 3.936146ms)
Dec 26 14:55:20.146: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 3.799909ms)
Dec 26 14:55:20.150: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 3.044408ms)
Dec 26 14:55:20.154: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.830468ms)
Dec 26 14:55:20.158: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 3.503197ms)
Dec 26 14:55:20.165: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.41848ms)
Dec 26 14:55:20.171: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.217568ms)
Dec 26 14:55:20.177: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.900148ms)
Dec 26 14:55:20.182: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.634495ms)
Dec 26 14:55:20.187: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.480276ms)
Dec 26 14:55:20.191: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.635706ms)
Dec 26 14:55:20.220: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 28.124724ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:55:20.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-774" for this suite.
Dec 26 14:55:27.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:55:27.237: INFO: namespace proxy-774 deletion completed in 7.011196108s

• [SLOW TEST:7.278 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
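The node-proxy subresource exercised above is an ordinary API path; assuming the same kubeconfig, it can be fetched directly (a sketch, not part of this run):

    kubectl --kubeconfig=/root/.kube/config get --raw /api/v1/nodes/iruya-node/proxy/logs/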
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:55:27.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-72a871ac-778d-45eb-aa73-e76b42d8b9fb
STEP: Creating a pod to test consume secrets
Dec 26 14:55:27.423: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1e6c01ab-cd62-4894-8eff-950a6acc49ce" in namespace "projected-8579" to be "success or failure"
Dec 26 14:55:27.429: INFO: Pod "pod-projected-secrets-1e6c01ab-cd62-4894-8eff-950a6acc49ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.23824ms
Dec 26 14:55:29.444: INFO: Pod "pod-projected-secrets-1e6c01ab-cd62-4894-8eff-950a6acc49ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020595628s
Dec 26 14:55:31.488: INFO: Pod "pod-projected-secrets-1e6c01ab-cd62-4894-8eff-950a6acc49ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064912668s
Dec 26 14:55:33.497: INFO: Pod "pod-projected-secrets-1e6c01ab-cd62-4894-8eff-950a6acc49ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074334315s
Dec 26 14:55:35.508: INFO: Pod "pod-projected-secrets-1e6c01ab-cd62-4894-8eff-950a6acc49ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08503891s
STEP: Saw pod success
Dec 26 14:55:35.508: INFO: Pod "pod-projected-secrets-1e6c01ab-cd62-4894-8eff-950a6acc49ce" satisfied condition "success or failure"
Dec 26 14:55:35.516: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-1e6c01ab-cd62-4894-8eff-950a6acc49ce container projected-secret-volume-test: 
STEP: delete the pod
Dec 26 14:55:35.568: INFO: Waiting for pod pod-projected-secrets-1e6c01ab-cd62-4894-8eff-950a6acc49ce to disappear
Dec 26 14:55:35.601: INFO: Pod pod-projected-secrets-1e6c01ab-cd62-4894-8eff-950a6acc49ce no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:55:35.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8579" for this suite.
Dec 26 14:55:41.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:55:41.853: INFO: namespace projected-8579 deletion completed in 6.246022463s

• [SLOW TEST:14.616 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
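A minimal pod of the kind this spec creates, projecting one secret key to a mapped path; the secret name and key below are illustrative placeholders, not the run's generated names:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secrets-example
    spec:
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
        volumeMounts:
        - name: projected-secret-volume
          mountPath: /etc/projected-secret-volume
          readOnly: true
      volumes:
      - name: projected-secret-volume
        projected:
          sources:
          - secret:
              name: my-secret            # assumed to exist with a key data-1
              items:
              - key: data-1
                path: new-path-data-1    # the "mapping" under test
    EOF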
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:55:41.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-57b3f2a2-551f-420f-9727-d06f6403b0f1
STEP: Creating a pod to test consume secrets
Dec 26 14:55:41.952: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-90607496-8c22-43d3-9583-be52f478f8d4" in namespace "projected-2240" to be "success or failure"
Dec 26 14:55:42.018: INFO: Pod "pod-projected-secrets-90607496-8c22-43d3-9583-be52f478f8d4": Phase="Pending", Reason="", readiness=false. Elapsed: 66.39781ms
Dec 26 14:55:44.033: INFO: Pod "pod-projected-secrets-90607496-8c22-43d3-9583-be52f478f8d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081255831s
Dec 26 14:55:46.043: INFO: Pod "pod-projected-secrets-90607496-8c22-43d3-9583-be52f478f8d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090691018s
Dec 26 14:55:48.055: INFO: Pod "pod-projected-secrets-90607496-8c22-43d3-9583-be52f478f8d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103318166s
Dec 26 14:55:50.066: INFO: Pod "pod-projected-secrets-90607496-8c22-43d3-9583-be52f478f8d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.113980366s
STEP: Saw pod success
Dec 26 14:55:50.066: INFO: Pod "pod-projected-secrets-90607496-8c22-43d3-9583-be52f478f8d4" satisfied condition "success or failure"
Dec 26 14:55:50.069: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-90607496-8c22-43d3-9583-be52f478f8d4 container secret-volume-test: 
STEP: delete the pod
Dec 26 14:55:50.407: INFO: Waiting for pod pod-projected-secrets-90607496-8c22-43d3-9583-be52f478f8d4 to disappear
Dec 26 14:55:50.427: INFO: Pod pod-projected-secrets-90607496-8c22-43d3-9583-be52f478f8d4 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:55:50.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2240" for this suite.
Dec 26 14:55:56.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:55:56.685: INFO: namespace projected-2240 deletion completed in 6.248775989s

• [SLOW TEST:14.831 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
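The multi-volume variant above differs only in projecting the same secret through two volumes; a hedged sketch with illustrative names:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secrets-multi-example
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"]
        volumeMounts:
        - name: secret-volume-1
          mountPath: /etc/secret-volume-1
          readOnly: true
        - name: secret-volume-2
          mountPath: /etc/secret-volume-2
          readOnly: true
      volumes:
      - name: secret-volume-1
        projected:
          sources:
          - secret:
              name: my-secret    # same secret consumed twice
      - name: secret-volume-2
        projected:
          sources:
          - secret:
              name: my-secret
    EOF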
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:55:56.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-q725
STEP: Creating a pod to test atomic-volume-subpath
Dec 26 14:55:56.889: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-q725" in namespace "subpath-5783" to be "success or failure"
Dec 26 14:55:56.905: INFO: Pod "pod-subpath-test-secret-q725": Phase="Pending", Reason="", readiness=false. Elapsed: 16.182673ms
Dec 26 14:55:58.913: INFO: Pod "pod-subpath-test-secret-q725": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023500195s
Dec 26 14:56:00.922: INFO: Pod "pod-subpath-test-secret-q725": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032539086s
Dec 26 14:56:03.948: INFO: Pod "pod-subpath-test-secret-q725": Phase="Pending", Reason="", readiness=false. Elapsed: 7.059224041s
Dec 26 14:56:05.960: INFO: Pod "pod-subpath-test-secret-q725": Phase="Pending", Reason="", readiness=false. Elapsed: 9.070357088s
Dec 26 14:56:07.967: INFO: Pod "pod-subpath-test-secret-q725": Phase="Running", Reason="", readiness=true. Elapsed: 11.077805356s
Dec 26 14:56:09.976: INFO: Pod "pod-subpath-test-secret-q725": Phase="Running", Reason="", readiness=true. Elapsed: 13.086903385s
Dec 26 14:56:11.998: INFO: Pod "pod-subpath-test-secret-q725": Phase="Running", Reason="", readiness=true. Elapsed: 15.109141744s
Dec 26 14:56:14.009: INFO: Pod "pod-subpath-test-secret-q725": Phase="Running", Reason="", readiness=true. Elapsed: 17.119702136s
Dec 26 14:56:16.027: INFO: Pod "pod-subpath-test-secret-q725": Phase="Running", Reason="", readiness=true. Elapsed: 19.137519878s
Dec 26 14:56:18.036: INFO: Pod "pod-subpath-test-secret-q725": Phase="Running", Reason="", readiness=true. Elapsed: 21.147273311s
Dec 26 14:56:20.043: INFO: Pod "pod-subpath-test-secret-q725": Phase="Running", Reason="", readiness=true. Elapsed: 23.153668456s
Dec 26 14:56:22.048: INFO: Pod "pod-subpath-test-secret-q725": Phase="Running", Reason="", readiness=true. Elapsed: 25.159287177s
Dec 26 14:56:24.569: INFO: Pod "pod-subpath-test-secret-q725": Phase="Running", Reason="", readiness=true. Elapsed: 27.680190899s
Dec 26 14:56:26.586: INFO: Pod "pod-subpath-test-secret-q725": Phase="Running", Reason="", readiness=true. Elapsed: 29.69679842s
Dec 26 14:56:28.612: INFO: Pod "pod-subpath-test-secret-q725": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.722847822s
STEP: Saw pod success
Dec 26 14:56:28.612: INFO: Pod "pod-subpath-test-secret-q725" satisfied condition "success or failure"
Dec 26 14:56:28.629: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-q725 container test-container-subpath-secret-q725: 
STEP: delete the pod
Dec 26 14:56:28.809: INFO: Waiting for pod pod-subpath-test-secret-q725 to disappear
Dec 26 14:56:28.818: INFO: Pod pod-subpath-test-secret-q725 no longer exists
STEP: Deleting pod pod-subpath-test-secret-q725
Dec 26 14:56:28.818: INFO: Deleting pod "pod-subpath-test-secret-q725" in namespace "subpath-5783"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:56:28.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5783" for this suite.
Dec 26 14:56:34.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:56:35.029: INFO: namespace subpath-5783 deletion completed in 6.190333031s

• [SLOW TEST:38.344 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
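The atomic-writer behaviour above hinges on mounting a subPath of a secret volume; a minimal sketch (secret and key names are illustrative):

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-subpath-secret-example
    spec:
      restartPolicy: Never
      containers:
      - name: test-container-subpath
        image: busybox
        command: ["sh", "-c", "cat /test-volume/key-file"]
        volumeMounts:
        - name: secret-volume
          mountPath: /test-volume/key-file
          subPath: key-file      # mount a single entry of the volume
      volumes:
      - name: secret-volume
        secret:
          secretName: my-secret  # assumed to contain a key named key-file
    EOF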
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:56:35.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 26 14:56:35.175: INFO: Waiting up to 5m0s for pod "downward-api-19c5affa-807e-41f5-8198-87d2a28c160e" in namespace "downward-api-3668" to be "success or failure"
Dec 26 14:56:35.256: INFO: Pod "downward-api-19c5affa-807e-41f5-8198-87d2a28c160e": Phase="Pending", Reason="", readiness=false. Elapsed: 80.873373ms
Dec 26 14:56:37.271: INFO: Pod "downward-api-19c5affa-807e-41f5-8198-87d2a28c160e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096009733s
Dec 26 14:56:39.286: INFO: Pod "downward-api-19c5affa-807e-41f5-8198-87d2a28c160e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111370956s
Dec 26 14:56:41.293: INFO: Pod "downward-api-19c5affa-807e-41f5-8198-87d2a28c160e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11793082s
Dec 26 14:56:43.302: INFO: Pod "downward-api-19c5affa-807e-41f5-8198-87d2a28c160e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.127141334s
STEP: Saw pod success
Dec 26 14:56:43.302: INFO: Pod "downward-api-19c5affa-807e-41f5-8198-87d2a28c160e" satisfied condition "success or failure"
Dec 26 14:56:43.308: INFO: Trying to get logs from node iruya-node pod downward-api-19c5affa-807e-41f5-8198-87d2a28c160e container dapi-container: 
STEP: delete the pod
Dec 26 14:56:43.375: INFO: Waiting for pod downward-api-19c5affa-807e-41f5-8198-87d2a28c160e to disappear
Dec 26 14:56:43.379: INFO: Pod downward-api-19c5affa-807e-41f5-8198-87d2a28c160e no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:56:43.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3668" for this suite.
Dec 26 14:56:49.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:56:49.547: INFO: namespace downward-api-3668 deletion completed in 6.16215314s

• [SLOW TEST:14.518 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
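The env var asserted above comes from the downward API fieldRef status.hostIP; a minimal equivalent pod (names illustrative):

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-example
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
        env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP   # injected by the kubelet at container start
    EOF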
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:56:49.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 26 14:56:49.664: INFO: Waiting up to 5m0s for pod "pod-6d33f74f-4a6d-4a0a-9d51-62e09ca04050" in namespace "emptydir-2017" to be "success or failure"
Dec 26 14:56:49.682: INFO: Pod "pod-6d33f74f-4a6d-4a0a-9d51-62e09ca04050": Phase="Pending", Reason="", readiness=false. Elapsed: 18.620093ms
Dec 26 14:56:51.693: INFO: Pod "pod-6d33f74f-4a6d-4a0a-9d51-62e09ca04050": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029246271s
Dec 26 14:56:53.706: INFO: Pod "pod-6d33f74f-4a6d-4a0a-9d51-62e09ca04050": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042045702s
Dec 26 14:56:55.875: INFO: Pod "pod-6d33f74f-4a6d-4a0a-9d51-62e09ca04050": Phase="Pending", Reason="", readiness=false. Elapsed: 6.21132025s
Dec 26 14:56:57.888: INFO: Pod "pod-6d33f74f-4a6d-4a0a-9d51-62e09ca04050": Phase="Pending", Reason="", readiness=false. Elapsed: 8.223756688s
Dec 26 14:56:59.902: INFO: Pod "pod-6d33f74f-4a6d-4a0a-9d51-62e09ca04050": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.237836589s
STEP: Saw pod success
Dec 26 14:56:59.902: INFO: Pod "pod-6d33f74f-4a6d-4a0a-9d51-62e09ca04050" satisfied condition "success or failure"
Dec 26 14:56:59.908: INFO: Trying to get logs from node iruya-node pod pod-6d33f74f-4a6d-4a0a-9d51-62e09ca04050 container test-container: 
STEP: delete the pod
Dec 26 14:56:59.986: INFO: Waiting for pod pod-6d33f74f-4a6d-4a0a-9d51-62e09ca04050 to disappear
Dec 26 14:56:59.993: INFO: Pod pod-6d33f74f-4a6d-4a0a-9d51-62e09ca04050 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:56:59.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2017" for this suite.
Dec 26 14:57:06.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:57:06.118: INFO: namespace emptydir-2017 deletion completed in 6.117835398s

• [SLOW TEST:16.571 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
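The (non-root,0777,tmpfs) case corresponds to a memory-backed emptyDir written by a non-root user; a hedged sketch (the uid below is illustrative, the run's exact value is not recorded in this log):

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-example
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001          # non-root writer
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -ln /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory         # tmpfs-backed
    EOF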
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:57:06.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 26 14:57:14.827: INFO: Successfully updated pod "annotationupdatee36ec071-74ec-4079-b40f-4d389e8f39f4"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:57:16.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9020" for this suite.
Dec 26 14:57:38.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:57:39.004: INFO: namespace projected-9020 deletion completed in 22.097879636s

• [SLOW TEST:32.886 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
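The update being verified is an annotation change propagating into a projected downwardAPI volume; a sketch of the moving parts (pod name and annotation are illustrative):

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: annotationupdate-example
      annotations:
        builder: bar
    spec:
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: annotations
                fieldRef:
                  fieldPath: metadata.annotations
    EOF
    # the projected file picks up the new value after the kubelet's next sync:
    kubectl annotate pod annotationupdate-example --overwrite builder=foo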
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:57:39.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-1355e862-bd49-4276-8037-e51410573fb7
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:57:39.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4527" for this suite.
Dec 26 14:57:45.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:57:45.208: INFO: namespace configmap-4527 deletion completed in 6.143995745s

• [SLOW TEST:6.204 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
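The failure is API-side validation: a ConfigMap with an empty data key is rejected on create, e.g.:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: configmap-empty-key-example
    data:
      "": value-1
    EOF
    # expected: creation fails with an Invalid error stating that a valid
    # config key must consist of alphanumeric characters, '-', '_' or '.'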
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:57:45.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-4ce4ba57-3f85-48d0-bb66-6f2b2722bc67
STEP: Creating a pod to test consume configMaps
Dec 26 14:57:45.418: INFO: Waiting up to 5m0s for pod "pod-configmaps-e4a78ab4-e577-4f81-8c3e-a045a9b34fbe" in namespace "configmap-7642" to be "success or failure"
Dec 26 14:57:45.456: INFO: Pod "pod-configmaps-e4a78ab4-e577-4f81-8c3e-a045a9b34fbe": Phase="Pending", Reason="", readiness=false. Elapsed: 37.304312ms
Dec 26 14:57:47.465: INFO: Pod "pod-configmaps-e4a78ab4-e577-4f81-8c3e-a045a9b34fbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046541067s
Dec 26 14:57:49.470: INFO: Pod "pod-configmaps-e4a78ab4-e577-4f81-8c3e-a045a9b34fbe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052212531s
Dec 26 14:57:51.477: INFO: Pod "pod-configmaps-e4a78ab4-e577-4f81-8c3e-a045a9b34fbe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058523741s
Dec 26 14:57:53.551: INFO: Pod "pod-configmaps-e4a78ab4-e577-4f81-8c3e-a045a9b34fbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.132700194s
STEP: Saw pod success
Dec 26 14:57:53.551: INFO: Pod "pod-configmaps-e4a78ab4-e577-4f81-8c3e-a045a9b34fbe" satisfied condition "success or failure"
Dec 26 14:57:53.555: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e4a78ab4-e577-4f81-8c3e-a045a9b34fbe container configmap-volume-test: 
STEP: delete the pod
Dec 26 14:57:53.596: INFO: Waiting for pod pod-configmaps-e4a78ab4-e577-4f81-8c3e-a045a9b34fbe to disappear
Dec 26 14:57:53.602: INFO: Pod pod-configmaps-e4a78ab4-e577-4f81-8c3e-a045a9b34fbe no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 14:57:53.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7642" for this suite.
Dec 26 14:57:59.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 14:57:59.828: INFO: namespace configmap-7642 deletion completed in 6.213660916s

• [SLOW TEST:14.619 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
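A minimal pod consuming a ConfigMap as a volume, as this spec does (ConfigMap name is an illustrative placeholder; the later defaultMode spec adds only a defaultMode field under configMap):

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmap-volume-example
    spec:
      restartPolicy: Never
      containers:
      - name: configmap-volume-test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/configmap-volume && cat /etc/configmap-volume/*"]
        volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
      volumes:
      - name: configmap-volume
        configMap:
          name: my-configmap     # assumed to exist in the namespace
          # defaultMode: 0400    # the defaultMode variant sets this
    EOF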
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 14:57:59.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 26 15:00:59.311: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:00:59.340: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:01.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:01.346: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:03.341: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:03.351: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:05.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:05.353: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:07.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:07.350: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:09.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:09.349: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:11.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:11.348: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:13.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:13.350: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:15.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:15.352: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:17.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:17.351: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:19.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:19.351: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:21.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:21.353: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:23.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:23.353: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:25.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:25.353: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:27.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:27.352: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:29.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:29.350: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:31.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:31.350: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:33.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:33.348: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:35.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:35.350: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:37.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:37.349: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:39.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:39.354: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:41.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:41.348: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:43.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:43.350: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:45.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:45.349: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:47.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:47.348: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:49.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:49.352: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:51.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:51.349: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:53.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:53.349: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:55.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:55.348: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:57.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:57.349: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:01:59.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:01:59.349: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:02:01.342: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:02:01.349: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:02:03.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:02:03.353: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:02:05.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:02:05.350: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:02:07.341: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:02:07.352: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:02:09.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:02:09.349: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:02:11.341: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:02:11.389: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:02:13.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:02:13.352: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:02:15.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:02:15.351: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:02:17.341: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:02:17.353: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:02:19.341: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:02:19.376: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:02:21.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:02:21.350: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:02:23.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:02:23.355: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:02:25.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:02:25.352: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:02:27.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:02:27.349: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:02:29.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:02:29.349: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:02:31.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:02:31.359: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:02:33.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:02:33.349: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:02:35.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:02:35.349: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 26 15:02:37.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 26 15:02:37.359: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 15:02:37.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2379" for this suite.
Dec 26 15:03:01.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 15:03:01.561: INFO: namespace container-lifecycle-hook-2379 deletion completed in 24.191142534s

• [SLOW TEST:301.732 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
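A minimal pod with a postStart exec hook of the kind polled above; the hook command is illustrative (the suite's hook calls back to a separate handler pod):

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-with-poststart-exec-hook-example
    spec:
      containers:
      - name: pod-with-poststart-exec-hook
        image: busybox
        command: ["sh", "-c", "sleep 600"]
        lifecycle:
          postStart:
            exec:
              # runs right after the container starts; the container is not
              # reported Running until this command returns
              command: ["sh", "-c", "echo poststart > /tmp/hook"]
    EOF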
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 15:03:01.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-97f3f1e3-c02a-42e7-a62e-66e52be0e90a
STEP: Creating a pod to test consume configMaps
Dec 26 15:03:01.841: INFO: Waiting up to 5m0s for pod "pod-configmaps-deee918b-0c23-4446-b250-24ae3fb1fe72" in namespace "configmap-6196" to be "success or failure"
Dec 26 15:03:01.886: INFO: Pod "pod-configmaps-deee918b-0c23-4446-b250-24ae3fb1fe72": Phase="Pending", Reason="", readiness=false. Elapsed: 45.328025ms
Dec 26 15:03:03.903: INFO: Pod "pod-configmaps-deee918b-0c23-4446-b250-24ae3fb1fe72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061532404s
Dec 26 15:03:05.918: INFO: Pod "pod-configmaps-deee918b-0c23-4446-b250-24ae3fb1fe72": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076444462s
Dec 26 15:03:07.925: INFO: Pod "pod-configmaps-deee918b-0c23-4446-b250-24ae3fb1fe72": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083542606s
Dec 26 15:03:09.936: INFO: Pod "pod-configmaps-deee918b-0c23-4446-b250-24ae3fb1fe72": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09485728s
Dec 26 15:03:11.944: INFO: Pod "pod-configmaps-deee918b-0c23-4446-b250-24ae3fb1fe72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.102600733s
STEP: Saw pod success
Dec 26 15:03:11.944: INFO: Pod "pod-configmaps-deee918b-0c23-4446-b250-24ae3fb1fe72" satisfied condition "success or failure"
Dec 26 15:03:11.950: INFO: Trying to get logs from node iruya-node pod pod-configmaps-deee918b-0c23-4446-b250-24ae3fb1fe72 container configmap-volume-test: 
STEP: delete the pod
Dec 26 15:03:12.133: INFO: Waiting for pod pod-configmaps-deee918b-0c23-4446-b250-24ae3fb1fe72 to disappear
Dec 26 15:03:12.138: INFO: Pod pod-configmaps-deee918b-0c23-4446-b250-24ae3fb1fe72 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 15:03:12.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6196" for this suite.
Dec 26 15:03:18.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 15:03:18.322: INFO: namespace configmap-6196 deletion completed in 6.174531226s

• [SLOW TEST:16.760 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 15:03:18.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 26 15:03:26.470: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-4a356a8e-8908-47b6-b739-3c0053bdb587,GenerateName:,Namespace:events-3690,SelfLink:/api/v1/namespaces/events-3690/pods/send-events-4a356a8e-8908-47b6-b739-3c0053bdb587,UID:ec2ffb4a-7e16-486e-a131-7acad1a6ce78,ResourceVersion:18158616,Generation:0,CreationTimestamp:2019-12-26 15:03:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 422375123,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rss9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rss9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-rss9p true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003546e70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003546e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 15:03:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 15:03:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 15:03:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 15:03:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-26 15:03:18 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-26 15:03:24 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://b5c8089770a65bb85dfda57fa1fdec9fb617d92e838d9a80ad4ce25b32dd46bb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Dec 26 15:03:28.491: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 26 15:03:30.515: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 15:03:30.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3690" for this suite.
Dec 26 15:04:22.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 15:04:22.708: INFO: namespace events-3690 deletion completed in 52.150621893s

• [SLOW TEST:64.385 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
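The scheduler and kubelet events asserted above are ordinary Event objects; while the pod exists they can be listed with a field selector:

    kubectl get events --namespace=events-3690 \
      --field-selector involvedObject.name=send-events-4a356a8e-8908-47b6-b739-3c0053bdb587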
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 15:04:22.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-19055a26-7ea6-4709-b597-485d2814596c in namespace container-probe-4443
Dec 26 15:04:30.882: INFO: Started pod test-webserver-19055a26-7ea6-4709-b597-485d2814596c in namespace container-probe-4443
STEP: checking the pod's current state and verifying that restartCount is present
Dec 26 15:04:30.891: INFO: Initial restart count of pod test-webserver-19055a26-7ea6-4709-b597-485d2814596c is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 15:08:31.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4443" for this suite.
Dec 26 15:08:37.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 15:08:37.364: INFO: namespace container-probe-4443 deletion completed in 6.169436031s

• [SLOW TEST:254.656 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
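The negative restart check relies on a liveness probe that keeps succeeding; a sketch with an illustrative image (the suite probes a /healthz-style endpoint on its own test webserver):

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-ok-example
    spec:
      containers:
      - name: test-webserver
        image: nginx             # any server that reliably answers the probed path
        livenessProbe:
          httpGet:
            path: /              # stand-in for the suite's /healthz endpoint
            port: 80
          initialDelaySeconds: 15
          timeoutSeconds: 1
          failureThreshold: 1
    EOF
    # the spec then watches restartCount for several minutes and asserts it stays 0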
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 15:08:37.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 26 15:08:37.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2075'
Dec 26 15:08:39.613: INFO: stderr: ""
Dec 26 15:08:39.613: INFO: stdout: "replicationcontroller/redis-master created\n"
Dec 26 15:08:39.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2075'
Dec 26 15:08:40.064: INFO: stderr: ""
Dec 26 15:08:40.065: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 26 15:08:41.083: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 15:08:41.083: INFO: Found 0 / 1
Dec 26 15:08:42.076: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 15:08:42.076: INFO: Found 0 / 1
Dec 26 15:08:43.115: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 15:08:43.115: INFO: Found 0 / 1
Dec 26 15:08:44.077: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 15:08:44.077: INFO: Found 0 / 1
Dec 26 15:08:45.110: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 15:08:45.110: INFO: Found 0 / 1
Dec 26 15:08:46.077: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 15:08:46.078: INFO: Found 0 / 1
Dec 26 15:08:47.094: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 15:08:47.094: INFO: Found 1 / 1
Dec 26 15:08:47.094: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 26 15:08:47.099: INFO: Selector matched 1 pods for map[app:redis]
Dec 26 15:08:47.099: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 26 15:08:47.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-klvff --namespace=kubectl-2075'
Dec 26 15:08:47.260: INFO: stderr: ""
Dec 26 15:08:47.260: INFO: stdout: "Name:           redis-master-klvff\nNamespace:      kubectl-2075\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Thu, 26 Dec 2019 15:08:39 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://040add7c58f0814781d086fa7b64925bd4530b9a726f2c28939f80d4f7b684e7\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Thu, 26 Dec 2019 15:08:46 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-g6j25 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-g6j25:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-g6j25\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  8s    default-scheduler    Successfully assigned kubectl-2075/redis-master-klvff to iruya-node\n  Normal  Pulled     4s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    1s    kubelet, iruya-node  Started container redis-master\n"
Dec 26 15:08:47.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-2075'
Dec 26 15:08:47.401: INFO: stderr: ""
Dec 26 15:08:47.401: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-2075\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  8s    replication-controller  Created pod: redis-master-klvff\n"
Dec 26 15:08:47.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-2075'
Dec 26 15:08:47.550: INFO: stderr: ""
Dec 26 15:08:47.550: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-2075\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.101.209.204\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Dec 26 15:08:47.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Dec 26 15:08:47.662: INFO: stderr: ""
Dec 26 15:08:47.662: INFO: stdout: "Name:               iruya-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Thu, 26 Dec 2019 15:08:18 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Thu, 26 Dec 2019 15:08:18 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Thu, 26 Dec 2019 15:08:18 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Thu, 26 Dec 2019 15:08:18 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         144d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         75d\n  kubectl-2075               redis-master-klvff    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Dec 26 15:08:47.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2075'
Dec 26 15:08:47.822: INFO: stderr: ""
Dec 26 15:08:47.822: INFO: stdout: "Name:         kubectl-2075\nLabels:       e2e-framework=kubectl\n              e2e-run=d74e562d-b9c4-4da8-a239-ac6b8953e07c\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 15:08:47.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2075" for this suite.
Dec 26 15:09:11.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 15:09:11.995: INFO: namespace kubectl-2075 deletion completed in 24.129397587s

• [SLOW TEST:34.630 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
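For anyone replaying this step by hand, the two describe calls the test issues can be run directly; a minimal sketch, assuming the same kubeconfig path and the node/namespace names from this run:

  kubectl --kubeconfig=/root/.kube/config describe node iruya-node
  kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2075

A single field can also be pulled without the full dump, e.g. the allocatable CPU shown in the describe output above:

  kubectl --kubeconfig=/root/.kube/config get node iruya-node -o jsonpath='{.status.allocatable.cpu}'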
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 15:09:11.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 26 15:09:12.109: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Dec 26 15:09:17.122: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 26 15:09:19.180: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 26 15:09:19.210: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-8089,SelfLink:/apis/apps/v1/namespaces/deployment-8089/deployments/test-cleanup-deployment,UID:2f75eecd-4628-4fd7-a00c-4f271c2cd3e9,ResourceVersion:18159179,Generation:1,CreationTimestamp:2019-12-26 15:09:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Dec 26 15:09:19.232: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-8089,SelfLink:/apis/apps/v1/namespaces/deployment-8089/replicasets/test-cleanup-deployment-55bbcbc84c,UID:eece4dfd-fe7d-4f21-8869-a7dea491289e,ResourceVersion:18159181,Generation:1,CreationTimestamp:2019-12-26 15:09:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 2f75eecd-4628-4fd7-a00c-4f271c2cd3e9 0xc00263c3b7 0xc00263c3b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 26 15:09:19.232: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Dec 26 15:09:19.233: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-8089,SelfLink:/apis/apps/v1/namespaces/deployment-8089/replicasets/test-cleanup-controller,UID:c51ab2b1-6ba8-4be4-b8c3-dd6e8c1476b7,ResourceVersion:18159180,Generation:1,CreationTimestamp:2019-12-26 15:09:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 2f75eecd-4628-4fd7-a00c-4f271c2cd3e9 0xc00263c2e7 0xc00263c2e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 26 15:09:19.380: INFO: Pod "test-cleanup-controller-rxccs" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-rxccs,GenerateName:test-cleanup-controller-,Namespace:deployment-8089,SelfLink:/api/v1/namespaces/deployment-8089/pods/test-cleanup-controller-rxccs,UID:2e0f3189-5621-4cec-9746-9d535afeead0,ResourceVersion:18159174,Generation:0,CreationTimestamp:2019-12-26 15:09:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller c51ab2b1-6ba8-4be4-b8c3-dd6e8c1476b7 0xc00263cc77 0xc00263cc78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqfrf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqfrf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zqfrf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00263ccf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00263cd10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 15:09:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 15:09:18 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 15:09:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 15:09:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-26 15:09:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-26 15:09:18 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://212e123bfb66258e1ea8d0216ea1bf9d50b5091a39d411f82b33640f2b0ceb42}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 26 15:09:19.381: INFO: Pod "test-cleanup-deployment-55bbcbc84c-9bpm4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-9bpm4,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-8089,SelfLink:/api/v1/namespaces/deployment-8089/pods/test-cleanup-deployment-55bbcbc84c-9bpm4,UID:c4262e05-8496-4e4d-acc7-83f69e941011,ResourceVersion:18159187,Generation:0,CreationTimestamp:2019-12-26 15:09:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c eece4dfd-fe7d-4f21-8869-a7dea491289e 0xc00263cdf7 0xc00263cdf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqfrf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqfrf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-zqfrf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00263ce70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00263ce90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-26 15:09:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 15:09:19.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8089" for this suite.
Dec 26 15:09:25.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 15:09:25.720: INFO: namespace deployment-8089 deletion completed in 6.309592186s

• [SLOW TEST:13.725 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
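The old ReplicaSet disappears here because the Deployment spec dumped above sets RevisionHistoryLimit to 0: once test-cleanup-deployment adopts the pre-existing test-cleanup-controller ReplicaSet (both selectors match name=cleanup-pod), there is no history quota to keep it around. A minimal equivalent manifest, sketched from the field values in the dump above (the heredoc form is illustrative, not part of the test):

  cat <<EOF | kubectl --kubeconfig=/root/.kube/config apply -f -
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: test-cleanup-deployment
    namespace: deployment-8089
  spec:
    replicas: 1
    revisionHistoryLimit: 0   # keep no superseded ReplicaSets after a rollout
    selector:
      matchLabels:
        name: cleanup-pod
    template:
      metadata:
        labels:
          name: cleanup-pod
      spec:
        containers:
        - name: redis
          image: gcr.io/kubernetes-e2e-test-images/redis:1.0
  EOF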
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 15:09:25.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Dec 26 15:09:25.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 26 15:09:26.115: INFO: stderr: ""
Dec 26 15:09:26.115: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 15:09:26.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-348" for this suite.
Dec 26 15:09:32.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 15:09:32.267: INFO: namespace kubectl-348 deletion completed in 6.146825345s

• [SLOW TEST:6.546 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
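The \x1b[0;32m and \x1b[0;33m runs in the stdout above are ANSI color escapes emitted by kubectl; stripped of them, the check is simply that the master and KubeDNS endpoints are printed. Reproducing it by hand, assuming the same kubeconfig (the dump variant is the follow-up the output itself recommends):

  kubectl --kubeconfig=/root/.kube/config cluster-info
  kubectl --kubeconfig=/root/.kube/config cluster-info dump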
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 15:09:32.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 26 15:09:32.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1519'
Dec 26 15:09:32.465: INFO: stderr: ""
Dec 26 15:09:32.465: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Dec 26 15:09:32.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1519'
Dec 26 15:09:36.579: INFO: stderr: ""
Dec 26 15:09:36.579: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 15:09:36.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1519" for this suite.
Dec 26 15:09:42.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 15:09:42.752: INFO: namespace kubectl-1519 deletion completed in 6.158130695s

• [SLOW TEST:10.485 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
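With --restart=Never (here via the run-pod/v1 generator), kubectl run creates a bare Pod rather than a Deployment, which is exactly what the verification step looks for. The same round trip by hand; the --generator flag is omitted in this sketch because later kubectl releases dropped it and always create a Pod from run:

  kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1519
  kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-1519
  kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1519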
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 15:09:42.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6103
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-6103
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6103
Dec 26 15:09:42.869: INFO: Found 0 stateful pods, waiting for 1
Dec 26 15:09:52.889: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Dec 26 15:09:52.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6103 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 26 15:09:53.487: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 26 15:09:53.488: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 26 15:09:53.488: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 26 15:09:53.497: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 26 15:10:03.509: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 26 15:10:03.509: INFO: Waiting for statefulset status.replicas updated to 0
Dec 26 15:10:03.574: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999961s
Dec 26 15:10:04.595: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.969871446s
Dec 26 15:10:05.615: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.948493419s
Dec 26 15:10:06.625: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.928681915s
Dec 26 15:10:07.681: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.918397491s
Dec 26 15:10:08.690: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.862613713s
Dec 26 15:10:09.722: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.853838867s
Dec 26 15:10:10.730: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.82105599s
Dec 26 15:10:11.743: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.814066925s
Dec 26 15:10:12.751: INFO: Verifying statefulset ss doesn't scale past 1 for another 800.466859ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6103
Dec 26 15:10:13.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6103 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 15:10:14.361: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 26 15:10:14.361: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 26 15:10:14.361: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 26 15:10:14.368: INFO: Found 1 stateful pods, waiting for 3
Dec 26 15:10:24.381: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 15:10:24.381: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 15:10:24.381: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 26 15:10:34.380: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 15:10:34.380: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 26 15:10:34.380: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Dec 26 15:10:34.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6103 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 26 15:10:35.179: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 26 15:10:35.179: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 26 15:10:35.180: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 26 15:10:35.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6103 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 26 15:10:35.538: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 26 15:10:35.538: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 26 15:10:35.538: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 26 15:10:35.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6103 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 26 15:10:36.172: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 26 15:10:36.172: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 26 15:10:36.172: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 26 15:10:36.172: INFO: Waiting for statefulset status.replicas updated to 0
Dec 26 15:10:36.181: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 26 15:10:46.206: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 26 15:10:46.206: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 26 15:10:46.206: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 26 15:10:46.234: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999604s
Dec 26 15:10:47.252: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.982590514s
Dec 26 15:10:48.265: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.965038754s
Dec 26 15:10:49.318: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.951677459s
Dec 26 15:10:50.333: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.899277404s
Dec 26 15:10:51.368: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.883951754s
Dec 26 15:10:52.699: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.849050785s
Dec 26 15:10:53.712: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.518464991s
Dec 26 15:10:54.724: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.505517042s
Dec 26 15:10:55.742: INFO: Verifying statefulset ss doesn't scale past 3 for another 492.832496ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-6103
Dec 26 15:10:56.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6103 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 15:10:57.402: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 26 15:10:57.402: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 26 15:10:57.402: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 26 15:10:57.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6103 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 15:10:57.793: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 26 15:10:57.793: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 26 15:10:57.793: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 26 15:10:57.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6103 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 15:10:58.293: INFO: rc: 126
Dec 26 15:10:58.293: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6103 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   cannot exec in a stopped state: unknown
 command terminated with exit code 126
 []  0xc000bf70e0 exit status 126   true [0xc00133f358 0xc00133f580 0xc00133f720] [0xc00133f358 0xc00133f580 0xc00133f720] [0xc00133f498 0xc00133f5c8] [0xba6c50 0xba6c50] 0xc00282e780 }:
Command stdout:
cannot exec in a stopped state: unknown

stderr:
command terminated with exit code 126

error:
exit status 126
Dec 26 15:11:08.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6103 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 15:11:08.547: INFO: rc: 1
Dec 26 15:11:08.548: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6103 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000bf7290 exit status 1   true [0xc00133f748 0xc00133f8b8 0xc00133f980] [0xc00133f748 0xc00133f8b8 0xc00133f980] [0xc00133f7d8 0xc00133f978] [0xba6c50 0xba6c50] 0xc00282eb40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Dec 26 15:11:18.549: INFO: Retrying failed RunHostCmd every 10s; the next 28 attempts (through Dec 26 15:15:54.090) all returned rc: 1 with the same error: Error from server (NotFound): pods "ss-2" not found
Dec 26 15:16:04.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6103 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 26 15:16:04.201: INFO: rc: 1
Dec 26 15:16:04.201: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
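
The loop above is the e2e framework's RunHostCmd retry: the same kubectl exec is re-run every 10 seconds until it succeeds or the overall budget expires. Here ss-2 has already been removed by the scale-down, so every attempt returns NotFound and the framework eventually gives up and logs the empty stdout. A minimal sketch of that retry pattern, assuming only the Go standard library plus the wait package from k8s.io/apimachinery; the command, namespace, and 10s cadence are taken from the log, while the helper name and 5-minute timeout are illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // runHostCmdWithRetry re-runs a kubectl exec every 10s until it
    // succeeds or the timeout expires, mirroring the cadence above.
    func runHostCmdWithRetry(ns, pod, cmd string, timeout time.Duration) (string, error) {
        var out []byte
        err := wait.PollImmediate(10*time.Second, timeout, func() (bool, error) {
            var runErr error
            out, runErr = exec.Command("kubectl",
                "--kubeconfig=/root/.kube/config",
                "exec", "--namespace="+ns, pod,
                "--", "/bin/sh", "-c", cmd).CombinedOutput()
            if runErr != nil {
                fmt.Printf("Waiting 10s to retry failed RunHostCmd: %v\n", runErr)
                return false, nil // returning an error here would abort the poll
            }
            return true, nil
        })
        return string(out), err
    }

    func main() {
        out, err := runHostCmdWithRetry("statefulset-6103", "ss-2",
            "mv -v /tmp/index.html /usr/share/nginx/html/ || true", 5*time.Minute)
        fmt.Println(out, err)
    }
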
Dec 26 15:16:04.201: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 26 15:16:04.223: INFO: Deleting all statefulset in ns statefulset-6103
Dec 26 15:16:04.227: INFO: Scaling statefulset ss to 0
Dec 26 15:16:04.237: INFO: Waiting for statefulset status.replicas updated to 0
Dec 26 15:16:04.239: INFO: Deleting statefulset ss
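
The teardown sequence above (scale ss to 0, wait for status.replicas to report 0, then delete) maps directly onto the apps/v1 API. A sketch with a v1.15-era client-go, where Get/Update/Delete take no context argument (later releases add one); clientset construction is elided, and the 2s poll interval is an assumption:

    package e2esketch

    import (
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // scaleToZeroAndDelete mirrors the log's teardown: set spec.replicas
    // to 0, wait until status.replicas catches up, then delete.
    func scaleToZeroAndDelete(cs kubernetes.Interface, ns, name string) error {
        ss, err := cs.AppsV1().StatefulSets(ns).Get(name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        zero := int32(0)
        ss.Spec.Replicas = &zero
        if _, err := cs.AppsV1().StatefulSets(ns).Update(ss); err != nil {
            return err
        }
        // Wait for the controller to report status.replicas == 0.
        if err := wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
            ss, err := cs.AppsV1().StatefulSets(ns).Get(name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            return ss.Status.Replicas == 0, nil
        }); err != nil {
            return err
        }
        return cs.AppsV1().StatefulSets(ns).Delete(name, nil)
    }
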
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 15:16:04.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6103" for this suite.
Dec 26 15:16:10.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 15:16:10.440: INFO: namespace statefulset-6103 deletion completed in 6.158567446s

• [SLOW TEST:387.687 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 26 15:16:10.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-bc553ba6-f26c-4cda-9356-55161dccc8b6
STEP: Creating a pod to test consume configMaps
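
A "volume with mappings" is a projected volume whose configMap source carries an items list, so each ConfigMap key is written to a chosen relative path rather than to a file named after the key. A sketch of the kind of pod this test builds, using the corev1 types; the pod and container names follow the log, while the key/path mapping and image are illustrative:

    package e2esketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // projectedConfigMapPod mounts a ConfigMap through a projected volume,
    // remapping the key "data-1" to the file "path/to/data-2".
    func projectedConfigMapPod(cmName string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "projected-configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                ConfigMap: &corev1.ConfigMapProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                                    Items: []corev1.KeyToPath{{
                                        Key:  "data-1",
                                        Path: "path/to/data-2",
                                    }},
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "projected-configmap-volume-test",
                    Image:   "busybox",
                    Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "projected-configmap-volume",
                        MountPath: "/etc/projected-configmap-volume",
                    }},
                }},
            },
        }
    }
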
Dec 26 15:16:10.594: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f493b222-cdb2-40b0-904a-d26dac70789d" in namespace "projected-5916" to be "success or failure"
Dec 26 15:16:10.625: INFO: Pod "pod-projected-configmaps-f493b222-cdb2-40b0-904a-d26dac70789d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.405568ms
Dec 26 15:16:12.635: INFO: Pod "pod-projected-configmaps-f493b222-cdb2-40b0-904a-d26dac70789d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040724995s
Dec 26 15:16:14.655: INFO: Pod "pod-projected-configmaps-f493b222-cdb2-40b0-904a-d26dac70789d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060497383s
Dec 26 15:16:16.668: INFO: Pod "pod-projected-configmaps-f493b222-cdb2-40b0-904a-d26dac70789d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074127748s
Dec 26 15:16:18.676: INFO: Pod "pod-projected-configmaps-f493b222-cdb2-40b0-904a-d26dac70789d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.081978645s
Dec 26 15:16:20.684: INFO: Pod "pod-projected-configmaps-f493b222-cdb2-40b0-904a-d26dac70789d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.089470771s
STEP: Saw pod success
Dec 26 15:16:20.684: INFO: Pod "pod-projected-configmaps-f493b222-cdb2-40b0-904a-d26dac70789d" satisfied condition "success or failure"
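
The wait above is a poll on pod.Status.Phase: the framework re-fetches the pod until the phase turns terminal, treating Succeeded as "success" and Failed as "failure". A minimal sketch with a v1.15-era client-go (no context argument); the 5m0s budget matches the log, the 2s interval is an assumption:

    package e2esketch

    import (
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForSuccessOrFailure polls the pod phase until it is terminal,
    // matching the "Waiting up to 5m0s for pod ..." lines in the log.
    func waitForSuccessOrFailure(cs kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            switch pod.Status.Phase {
            case corev1.PodSucceeded:
                return true, nil
            case corev1.PodFailed:
                return false, fmt.Errorf("pod %s/%s failed", ns, name)
            default:
                return false, nil // still Pending or Running; keep polling
            }
        })
    }
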
Dec 26 15:16:20.688: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-f493b222-cdb2-40b0-904a-d26dac70789d container projected-configmap-volume-test: 
STEP: delete the pod
Dec 26 15:16:20.751: INFO: Waiting for pod pod-projected-configmaps-f493b222-cdb2-40b0-904a-d26dac70789d to disappear
Dec 26 15:16:20.757: INFO: Pod pod-projected-configmaps-f493b222-cdb2-40b0-904a-d26dac70789d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 26 15:16:20.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5916" for this suite.
Dec 26 15:16:26.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 26 15:16:26.937: INFO: namespace projected-5916 deletion completed in 6.171598465s

• [SLOW TEST:16.496 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
Dec 26 15:16:26.939: INFO: Running AfterSuite actions on all nodes
Dec 26 15:16:26.939: INFO: Running AfterSuite actions on node 1
Dec 26 15:16:26.939: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8417.950 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS