I0404 12:55:41.677644 6 e2e.go:243] Starting e2e run "708dc7c9-956a-4ecc-a99f-7866581cd178" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1586004940 - Will randomize all specs
Will run 215 of 4412 specs

Apr 4 12:55:41.854: INFO: >>> kubeConfig: /root/.kube/config
Apr 4 12:55:41.861: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 4 12:55:41.881: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 4 12:55:41.913: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 4 12:55:41.913: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 4 12:55:41.913: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 4 12:55:41.921: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 4 12:55:41.921: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 4 12:55:41.921: INFO: e2e test version: v1.15.10
Apr 4 12:55:41.922: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 12:55:41.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Apr 4 12:55:41.973: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 4 12:55:41.990: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a80bac69-7c3b-40f8-ac4a-ef6960b3fe7d" in namespace "projected-1257" to be "success or failure"
Apr 4 12:55:42.012: INFO: Pod "downwardapi-volume-a80bac69-7c3b-40f8-ac4a-ef6960b3fe7d": Phase="Pending", Reason="", readiness=false. Elapsed: 21.802008ms
Apr 4 12:55:44.017: INFO: Pod "downwardapi-volume-a80bac69-7c3b-40f8-ac4a-ef6960b3fe7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026491095s
Apr 4 12:55:46.021: INFO: Pod "downwardapi-volume-a80bac69-7c3b-40f8-ac4a-ef6960b3fe7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030464566s
STEP: Saw pod success
Apr 4 12:55:46.021: INFO: Pod "downwardapi-volume-a80bac69-7c3b-40f8-ac4a-ef6960b3fe7d" satisfied condition "success or failure"
Apr 4 12:55:46.025: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a80bac69-7c3b-40f8-ac4a-ef6960b3fe7d container client-container:
STEP: delete the pod
Apr 4 12:55:46.071: INFO: Waiting for pod downwardapi-volume-a80bac69-7c3b-40f8-ac4a-ef6960b3fe7d to disappear
Apr 4 12:55:46.078: INFO: Pod downwardapi-volume-a80bac69-7c3b-40f8-ac4a-ef6960b3fe7d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 12:55:46.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1257" for this suite.
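The spec above exercises the downward API through a projected volume: the container's CPU limit is written into a file, and the test reads it back from the pod logs. A minimal hand-written sketch of that pattern follows; the pod name and mount path are illustrative, not the exact manifest the e2e framework generates, but the `projected`/`downwardAPI`/`resourceFieldRef` field names follow the Kubernetes Pod API.

```shell
# Sketch: expose the container's CPU limit as a file via a projected
# downwardAPI volume (requires a running cluster and kubectl).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m
EOF
```

With `divisor: 1m`, the mounted `cpu_limit` file contains the limit expressed in millicores (here `500`), which is the kind of value the test's client-container echoes and the framework checks in the logs.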
Apr 4 12:55:52.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 12:55:52.223: INFO: namespace projected-1257 deletion completed in 6.142010331s
• [SLOW TEST:10.301 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 12:55:52.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-b47c3aa8-bd28-4292-9136-73817a0c14fa
STEP: Creating a pod to test consume configMaps
Apr 4 12:55:52.302: INFO: Waiting up to 5m0s for pod "pod-configmaps-76904893-1681-4907-9fc2-eb5459624baf" in namespace "configmap-9025" to be "success or failure"
Apr 4 12:55:52.305: INFO: Pod "pod-configmaps-76904893-1681-4907-9fc2-eb5459624baf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.75955ms
Apr 4 12:55:54.309: INFO: Pod "pod-configmaps-76904893-1681-4907-9fc2-eb5459624baf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006957225s
Apr 4 12:55:56.314: INFO: Pod "pod-configmaps-76904893-1681-4907-9fc2-eb5459624baf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011675406s
STEP: Saw pod success
Apr 4 12:55:56.314: INFO: Pod "pod-configmaps-76904893-1681-4907-9fc2-eb5459624baf" satisfied condition "success or failure"
Apr 4 12:55:56.317: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-76904893-1681-4907-9fc2-eb5459624baf container configmap-volume-test:
STEP: delete the pod
Apr 4 12:55:56.380: INFO: Waiting for pod pod-configmaps-76904893-1681-4907-9fc2-eb5459624baf to disappear
Apr 4 12:55:56.383: INFO: Pod pod-configmaps-76904893-1681-4907-9fc2-eb5459624baf no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 12:55:56.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9025" for this suite.
Apr 4 12:56:02.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 12:56:02.470: INFO: namespace configmap-9025 deletion completed in 6.084132876s
• [SLOW TEST:10.247 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 12:56:02.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Apr 4 12:56:02.542: INFO: Pod name pod-release: Found 0 pods out of 1
Apr 4 12:56:07.546: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 12:56:08.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1985" for this suite.
Apr 4 12:56:14.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 12:56:14.663: INFO: namespace replication-controller-1985 deletion completed in 6.094534589s
• [SLOW TEST:12.193 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-node] ConfigMap
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 12:56:14.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-3458/configmap-test-fdf1a9bf-464c-43a8-8d85-98c0db198618
STEP: Creating a pod to test consume configMaps
Apr 4 12:56:14.794: INFO: Waiting up to 5m0s for pod "pod-configmaps-72fc6926-562d-4b97-be79-979cc1cb8259" in namespace "configmap-3458" to be "success or failure"
Apr 4 12:56:14.803: INFO: Pod "pod-configmaps-72fc6926-562d-4b97-be79-979cc1cb8259": Phase="Pending", Reason="", readiness=false. Elapsed: 8.16454ms
Apr 4 12:56:16.806: INFO: Pod "pod-configmaps-72fc6926-562d-4b97-be79-979cc1cb8259": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01199335s
Apr 4 12:56:18.810: INFO: Pod "pod-configmaps-72fc6926-562d-4b97-be79-979cc1cb8259": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015598511s
STEP: Saw pod success
Apr 4 12:56:18.810: INFO: Pod "pod-configmaps-72fc6926-562d-4b97-be79-979cc1cb8259" satisfied condition "success or failure"
Apr 4 12:56:18.813: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-72fc6926-562d-4b97-be79-979cc1cb8259 container env-test:
STEP: delete the pod
Apr 4 12:56:18.862: INFO: Waiting for pod pod-configmaps-72fc6926-562d-4b97-be79-979cc1cb8259 to disappear
Apr 4 12:56:18.868: INFO: Pod pod-configmaps-72fc6926-562d-4b97-be79-979cc1cb8259 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 12:56:18.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3458" for this suite.
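The [sig-node] ConfigMap spec above consumes a ConfigMap key through a container environment variable (the test's container is named env-test). A minimal hand-rolled sketch of the same pattern, with illustrative ConfigMap, key, and pod names (the e2e test generates its own), using the standard `configMapKeyRef` field:

```shell
# Sketch: surface a ConfigMap key as an environment variable
# (requires a running cluster and kubectl; names are illustrative).
kubectl create configmap configmap-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]   # prints all env vars, incl. the injected one
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
EOF
```

The pod's logs would then include a line such as `CONFIG_DATA_1=value-1`, which is the kind of output the test asserts on.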
Apr 4 12:56:24.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 12:56:24.964: INFO: namespace configmap-3458 deletion completed in 6.089392912s
• [SLOW TEST:10.300 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 12:56:24.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Apr 4 12:56:25.027: INFO: Waiting up to 5m0s for pod "var-expansion-8d074b34-cd73-47fe-8a6c-16f7b5e4f4cf" in namespace "var-expansion-2788" to be "success or failure"
Apr 4 12:56:25.105: INFO: Pod "var-expansion-8d074b34-cd73-47fe-8a6c-16f7b5e4f4cf": Phase="Pending", Reason="", readiness=false. Elapsed: 78.034417ms
Apr 4 12:56:27.109: INFO: Pod "var-expansion-8d074b34-cd73-47fe-8a6c-16f7b5e4f4cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081633956s
Apr 4 12:56:29.112: INFO: Pod "var-expansion-8d074b34-cd73-47fe-8a6c-16f7b5e4f4cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085296243s
STEP: Saw pod success
Apr 4 12:56:29.112: INFO: Pod "var-expansion-8d074b34-cd73-47fe-8a6c-16f7b5e4f4cf" satisfied condition "success or failure"
Apr 4 12:56:29.114: INFO: Trying to get logs from node iruya-worker pod var-expansion-8d074b34-cd73-47fe-8a6c-16f7b5e4f4cf container dapi-container:
STEP: delete the pod
Apr 4 12:56:29.161: INFO: Waiting for pod var-expansion-8d074b34-cd73-47fe-8a6c-16f7b5e4f4cf to disappear
Apr 4 12:56:29.171: INFO: Pod var-expansion-8d074b34-cd73-47fe-8a6c-16f7b5e4f4cf no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 12:56:29.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2788" for this suite.
Apr 4 12:56:35.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 12:56:35.325: INFO: namespace var-expansion-2788 deletion completed in 6.150026251s
• [SLOW TEST:10.360 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 12:56:35.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Apr 4 12:56:35.379: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 4 12:56:35.387: INFO: Waiting for terminating namespaces to be deleted...
Apr 4 12:56:35.390: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Apr 4 12:56:35.395: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 4 12:56:35.395: INFO: Container kube-proxy ready: true, restart count 0
Apr 4 12:56:35.395: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 4 12:56:35.395: INFO: Container kindnet-cni ready: true, restart count 0
Apr 4 12:56:35.395: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Apr 4 12:56:35.401: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
Apr 4 12:56:35.401: INFO: Container kube-proxy ready: true, restart count 0
Apr 4 12:56:35.401: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
Apr 4 12:56:35.401: INFO: Container kindnet-cni ready: true, restart count 0
Apr 4 12:56:35.401: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
Apr 4 12:56:35.401: INFO: Container coredns ready: true, restart count 0
Apr 4 12:56:35.401: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
Apr 4 12:56:35.401: INFO: Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Apr 4 12:56:35.454: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2
Apr 4 12:56:35.454: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2
Apr 4 12:56:35.454: INFO: Pod kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker
Apr 4 12:56:35.454: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2
Apr 4 12:56:35.454: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker
Apr 4 12:56:35.454: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-3823bdc1-7782-4ad8-a140-8a0319b497c3.16029f23eeb0cb8f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9309/filler-pod-3823bdc1-7782-4ad8-a140-8a0319b497c3 to iruya-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3823bdc1-7782-4ad8-a140-8a0319b497c3.16029f243c2214c8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3823bdc1-7782-4ad8-a140-8a0319b497c3.16029f247faeeab8], Reason = [Created], Message = [Created container filler-pod-3823bdc1-7782-4ad8-a140-8a0319b497c3]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3823bdc1-7782-4ad8-a140-8a0319b497c3.16029f248c920707], Reason = [Started], Message = [Started container filler-pod-3823bdc1-7782-4ad8-a140-8a0319b497c3]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3fd28322-a7a7-4748-b407-4333dd769849.16029f23f12b483e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9309/filler-pod-3fd28322-a7a7-4748-b407-4333dd769849 to iruya-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3fd28322-a7a7-4748-b407-4333dd769849.16029f244ade54cf], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3fd28322-a7a7-4748-b407-4333dd769849.16029f24889c14f2], Reason = [Created], Message = [Created container filler-pod-3fd28322-a7a7-4748-b407-4333dd769849]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3fd28322-a7a7-4748-b407-4333dd769849.16029f249975afa2], Reason = [Started], Message = [Started container filler-pod-3fd28322-a7a7-4748-b407-4333dd769849]
STEP: Considering event: Type = [Warning], Name = [additional-pod.16029f24e0a0d090], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 12:56:40.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9309" for this suite.
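The SchedulerPredicates spec above sums up the CPU already requested on each node, saturates the remainder with filler pods, then confirms the scheduler rejects one more pod with a `FailedScheduling` event ("2 Insufficient cpu"). The same inspection can be done by hand; the commands below are a sketch using standard kubectl flags, with the node name taken from this run:

```shell
# Sketch: inspect per-node CPU accounting and scheduling failures by hand
# (requires a running cluster and kubectl).

# Show the node's allocatable CPU and the requests/limits already scheduled
# onto it, which is what the test tallies before creating filler pods.
kubectl describe node iruya-worker | grep -A 8 "Allocated resources"

# List FailedScheduling events, like the one the test waits for when the
# "additional-pod" asks for more CPU than any node has left.
kubectl get events --field-selector reason=FailedScheduling
```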
Apr 4 12:56:46.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 12:56:46.667: INFO: namespace sched-pred-9309 deletion completed in 6.086644302s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:11.341 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs
  should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 12:56:46.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Apr 4 12:56:46.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8003'
Apr 4 12:56:49.190: INFO: stderr: ""
Apr 4 12:56:49.190: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Apr 4 12:56:50.194: INFO: Selector matched 1 pods for map[app:redis]
Apr 4 12:56:50.194: INFO: Found 0 / 1
Apr 4 12:56:51.194: INFO: Selector matched 1 pods for map[app:redis]
Apr 4 12:56:51.194: INFO: Found 0 / 1
Apr 4 12:56:52.194: INFO: Selector matched 1 pods for map[app:redis]
Apr 4 12:56:52.194: INFO: Found 1 / 1
Apr 4 12:56:52.194: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 4 12:56:52.198: INFO: Selector matched 1 pods for map[app:redis]
Apr 4 12:56:52.198: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
Apr 4 12:56:52.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-j5rw6 redis-master --namespace=kubectl-8003'
Apr 4 12:56:52.305: INFO: stderr: ""
Apr 4 12:56:52.305: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 04 Apr 12:56:51.440 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 04 Apr 12:56:51.440 # Server started, Redis version 3.2.12\n1:M 04 Apr 12:56:51.440 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 04 Apr 12:56:51.440 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Apr 4 12:56:52.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-j5rw6 redis-master --namespace=kubectl-8003 --tail=1'
Apr 4 12:56:52.414: INFO: stderr: ""
Apr 4 12:56:52.414: INFO: stdout: "1:M 04 Apr 12:56:51.440 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Apr 4 12:56:52.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-j5rw6 redis-master --namespace=kubectl-8003 --limit-bytes=1'
Apr 4 12:56:52.533: INFO: stderr: ""
Apr 4 12:56:52.534: INFO: stdout: " "
STEP: exposing timestamps
Apr 4 12:56:52.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-j5rw6 redis-master --namespace=kubectl-8003 --tail=1 --timestamps'
Apr 4 12:56:52.646: INFO: stderr: ""
Apr 4 12:56:52.646: INFO: stdout: "2020-04-04T12:56:51.441240245Z 1:M 04 Apr 12:56:51.440 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Apr 4 12:56:55.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-j5rw6 redis-master --namespace=kubectl-8003 --since=1s'
Apr 4 12:56:55.249: INFO: stderr: ""
Apr 4 12:56:55.249: INFO: stdout: ""
Apr 4 12:56:55.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-j5rw6 redis-master --namespace=kubectl-8003 --since=24h'
Apr 4 12:56:55.355: INFO: stderr: ""
Apr 4 12:56:55.355: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 04 Apr 12:56:51.440 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 04 Apr 12:56:51.440 # Server started, Redis version 3.2.12\n1:M 04 Apr 12:56:51.440 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 04 Apr 12:56:51.440 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Apr 4 12:56:55.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8003'
Apr 4 12:56:55.454: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 4 12:56:55.454: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Apr 4 12:56:55.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-8003'
Apr 4 12:56:55.561: INFO: stderr: "No resources found.\n"
Apr 4 12:56:55.561: INFO: stdout: ""
Apr 4 12:56:55.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-8003 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 4 12:56:55.646: INFO: stderr: ""
Apr 4 12:56:55.647: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 12:56:55.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8003" for this suite.
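The Kubectl logs spec above walks through each log-filtering flag in turn. Collected in one place, the same commands it ran (pod and container names are the ones from this run; substitute your own, and add `--namespace` as needed):

```shell
# The log-filtering flags exercised by the test, against the run's Redis pod.
kubectl logs redis-master-j5rw6 redis-master --tail=1             # last line only
kubectl logs redis-master-j5rw6 redis-master --limit-bytes=1      # first byte only
kubectl logs redis-master-j5rw6 redis-master --tail=1 --timestamps  # prefix each line with an RFC3339 timestamp
kubectl logs redis-master-j5rw6 redis-master --since=1s           # only entries from the last second (empty here)
kubectl logs redis-master-j5rw6 redis-master --since=24h          # everything from the last day (full banner)
```

Note how `--since=1s` returned an empty stdout in the run above because the pod had been quiet for more than a second, while `--since=24h` reproduced the full Redis startup banner.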
Apr 4 12:57:17.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 12:57:17.779: INFO: namespace kubectl-8003 deletion completed in 22.129625933s • [SLOW TEST:31.112 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 12:57:17.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-1a29132a-40e2-4575-afdb-e2b0b2b940a5 STEP: Creating secret with name s-test-opt-upd-17a80ab2-9d64-4142-8fbb-5cb45ba9f10d STEP: Creating the pod STEP: Deleting secret s-test-opt-del-1a29132a-40e2-4575-afdb-e2b0b2b940a5 STEP: Updating secret s-test-opt-upd-17a80ab2-9d64-4142-8fbb-5cb45ba9f10d STEP: Creating secret with name s-test-opt-create-51aa2a72-a4cc-4472-b2d5-03cd8c9b990d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 12:58:40.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6266" for this suite. Apr 4 12:59:02.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 12:59:02.552: INFO: namespace projected-6266 deletion completed in 22.086965765s • [SLOW TEST:104.773 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 12:59:02.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 4 12:59:02.604: INFO: Waiting up to 5m0s for pod "downwardapi-volume-600fac8b-bf37-4f4d-8ef7-ccc302db0938" in namespace 
"downward-api-8084" to be "success or failure"
Apr 4 12:59:02.609: INFO: Pod "downwardapi-volume-600fac8b-bf37-4f4d-8ef7-ccc302db0938": Phase="Pending", Reason="", readiness=false. Elapsed: 3.993672ms
Apr 4 12:59:04.612: INFO: Pod "downwardapi-volume-600fac8b-bf37-4f4d-8ef7-ccc302db0938": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00724622s
Apr 4 12:59:06.616: INFO: Pod "downwardapi-volume-600fac8b-bf37-4f4d-8ef7-ccc302db0938": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011679545s
STEP: Saw pod success
Apr 4 12:59:06.616: INFO: Pod "downwardapi-volume-600fac8b-bf37-4f4d-8ef7-ccc302db0938" satisfied condition "success or failure"
Apr 4 12:59:06.619: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-600fac8b-bf37-4f4d-8ef7-ccc302db0938 container client-container:
STEP: delete the pod
Apr 4 12:59:06.653: INFO: Waiting for pod downwardapi-volume-600fac8b-bf37-4f4d-8ef7-ccc302db0938 to disappear
Apr 4 12:59:06.662: INFO: Pod downwardapi-volume-600fac8b-bf37-4f4d-8ef7-ccc302db0938 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 12:59:06.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8084" for this suite.
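The DefaultMode test above creates a pod with a downward API volume and checks the permission bits on the projected files. The log never shows the manifest itself, so the following is only a minimal sketch of the kind of pod spec such a test creates; the pod name, image, and mode value are illustrative assumptions, not taken from the log.

```python
# Illustrative sketch of a pod manifest using a downward API volume with
# defaultMode set. All names and values here are assumptions for
# demonstration; the real e2e test generates its own names.

def downward_api_pod(namespace, mode=0o400):
    """Build a pod manifest whose downward API volume files get `mode` bits."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "downwardapi-volume-test", "namespace": namespace},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "client-container",
                "image": "busybox",  # illustrative image
                "command": ["/bin/sh", "-c", "ls -l /etc/podinfo"],
                "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
            }],
            "volumes": [{
                "name": "podinfo",
                "downwardAPI": {
                    # defaultMode applies to every projected file unless an
                    # item overrides it with its own mode.
                    "defaultMode": mode,
                    "items": [{
                        "path": "podname",
                        "fieldRef": {"fieldPath": "metadata.name"},
                    }],
                },
            }],
        },
    }

manifest = downward_api_pod("downward-api-8084")
```

The API server takes `defaultMode` as a decimal integer (0o400 is 256), which is why manifests written in JSON often look surprising compared to the octal form used in YAML.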
Apr 4 12:59:12.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 12:59:12.756: INFO: namespace downward-api-8084 deletion completed in 6.090546244s
• [SLOW TEST:10.203 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 12:59:12.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-b9r8
STEP: Creating a pod to test atomic-volume-subpath
Apr 4 12:59:12.852: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-b9r8" in namespace "subpath-5159" to be "success or failure"
Apr 4 12:59:12.856: INFO: Pod "pod-subpath-test-configmap-b9r8": Phase="Pending", Reason="", readiness=false.
Elapsed: 3.476469ms
Apr 4 12:59:14.868: INFO: Pod "pod-subpath-test-configmap-b9r8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015117801s
Apr 4 12:59:16.872: INFO: Pod "pod-subpath-test-configmap-b9r8": Phase="Running", Reason="", readiness=true. Elapsed: 4.019342009s
Apr 4 12:59:18.876: INFO: Pod "pod-subpath-test-configmap-b9r8": Phase="Running", Reason="", readiness=true. Elapsed: 6.023244384s
Apr 4 12:59:20.880: INFO: Pod "pod-subpath-test-configmap-b9r8": Phase="Running", Reason="", readiness=true. Elapsed: 8.027484778s
Apr 4 12:59:22.884: INFO: Pod "pod-subpath-test-configmap-b9r8": Phase="Running", Reason="", readiness=true. Elapsed: 10.031844004s
Apr 4 12:59:24.889: INFO: Pod "pod-subpath-test-configmap-b9r8": Phase="Running", Reason="", readiness=true. Elapsed: 12.036608555s
Apr 4 12:59:26.893: INFO: Pod "pod-subpath-test-configmap-b9r8": Phase="Running", Reason="", readiness=true. Elapsed: 14.040333424s
Apr 4 12:59:28.897: INFO: Pod "pod-subpath-test-configmap-b9r8": Phase="Running", Reason="", readiness=true. Elapsed: 16.04439079s
Apr 4 12:59:30.901: INFO: Pod "pod-subpath-test-configmap-b9r8": Phase="Running", Reason="", readiness=true. Elapsed: 18.048472784s
Apr 4 12:59:32.905: INFO: Pod "pod-subpath-test-configmap-b9r8": Phase="Running", Reason="", readiness=true. Elapsed: 20.053063206s
Apr 4 12:59:34.909: INFO: Pod "pod-subpath-test-configmap-b9r8": Phase="Running", Reason="", readiness=true. Elapsed: 22.057080262s
Apr 4 12:59:36.914: INFO: Pod "pod-subpath-test-configmap-b9r8": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 24.061520224s
STEP: Saw pod success
Apr 4 12:59:36.914: INFO: Pod "pod-subpath-test-configmap-b9r8" satisfied condition "success or failure"
Apr 4 12:59:36.917: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-b9r8 container test-container-subpath-configmap-b9r8:
STEP: delete the pod
Apr 4 12:59:36.936: INFO: Waiting for pod pod-subpath-test-configmap-b9r8 to disappear
Apr 4 12:59:36.951: INFO: Pod pod-subpath-test-configmap-b9r8 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-b9r8
Apr 4 12:59:36.951: INFO: Deleting pod "pod-subpath-test-configmap-b9r8" in namespace "subpath-5159"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 12:59:36.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5159" for this suite.
Apr 4 12:59:42.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 12:59:43.070: INFO: namespace subpath-5159 deletion completed in 6.112967928s
• [SLOW TEST:30.314 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 12:59:43.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-72d370c3-166e-4edb-b679-812cbc2362c6 in namespace container-probe-4559
Apr 4 12:59:47.143: INFO: Started pod liveness-72d370c3-166e-4edb-b679-812cbc2362c6 in namespace container-probe-4559
STEP: checking the pod's current state and verifying that restartCount is present
Apr 4 12:59:47.145: INFO: Initial restart count of pod liveness-72d370c3-166e-4edb-b679-812cbc2362c6 is 0
Apr 4 13:00:05.187: INFO: Restart count of pod container-probe-4559/liveness-72d370c3-166e-4edb-b679-812cbc2362c6 is now 1 (18.041464108s elapsed)
Apr 4 13:00:25.230: INFO: Restart count of pod container-probe-4559/liveness-72d370c3-166e-4edb-b679-812cbc2362c6 is now 2 (38.084482909s elapsed)
Apr 4 13:00:45.273: INFO: Restart count of pod container-probe-4559/liveness-72d370c3-166e-4edb-b679-812cbc2362c6 is now 3 (58.127488261s elapsed)
Apr 4 13:01:05.315: INFO: Restart count of pod container-probe-4559/liveness-72d370c3-166e-4edb-b679-812cbc2362c6 is now 4 (1m18.169733026s elapsed)
Apr 4 13:02:13.471: INFO: Restart count of pod container-probe-4559/liveness-72d370c3-166e-4edb-b679-812cbc2362c6 is now 5 (2m26.325333432s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:02:13.482: INFO: Waiting up
to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4559" for this suite.
Apr 4 13:02:19.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:02:19.590: INFO: namespace container-probe-4559 deletion completed in 6.10332718s
• [SLOW TEST:156.519 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:02:19.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-25777e2c-9711-4902-9e6f-342b264e164a
STEP: Creating a pod to test consume configMaps
Apr 4 13:02:19.669: INFO: Waiting up to 5m0s for pod "pod-configmaps-170e85b0-3228-46f4-973b-4f3915ee1750" in namespace "configmap-6747" to be "success or failure"
Apr 4 13:02:19.686: INFO: Pod "pod-configmaps-170e85b0-3228-46f4-973b-4f3915ee1750": Phase="Pending", Reason="", readiness=false.
Elapsed: 16.058634ms
Apr 4 13:02:21.690: INFO: Pod "pod-configmaps-170e85b0-3228-46f4-973b-4f3915ee1750": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020320162s
Apr 4 13:02:23.694: INFO: Pod "pod-configmaps-170e85b0-3228-46f4-973b-4f3915ee1750": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024878151s
STEP: Saw pod success
Apr 4 13:02:23.694: INFO: Pod "pod-configmaps-170e85b0-3228-46f4-973b-4f3915ee1750" satisfied condition "success or failure"
Apr 4 13:02:23.698: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-170e85b0-3228-46f4-973b-4f3915ee1750 container configmap-volume-test:
STEP: delete the pod
Apr 4 13:02:23.721: INFO: Waiting for pod pod-configmaps-170e85b0-3228-46f4-973b-4f3915ee1750 to disappear
Apr 4 13:02:23.792: INFO: Pod pod-configmaps-170e85b0-3228-46f4-973b-4f3915ee1750 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:02:23.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6747" for this suite.
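The "with mappings" variant of the ConfigMap volume test above mounts a ConfigMap whose `items` list remaps a data key to a different file path inside the volume. The log does not include the manifest, so this is only a sketch under assumed key and path names:

```python
# Illustrative sketch of a ConfigMap volume using `items` to remap a key to
# a chosen file path ("with mappings"). The key/path values are assumptions;
# the e2e test generates its own names.

def configmap_volume_with_mappings(configmap_name):
    """Volume spec projecting only key `data-1` as file `path/to/data-2`."""
    return {
        "name": "configmap-volume",
        "configMap": {
            "name": configmap_name,
            # Without `items`, every key becomes a file named after the key.
            # With `items`, only the listed keys are projected, at the given
            # relative paths.
            "items": [{"key": "data-1", "path": "path/to/data-2"}],
        },
    }

volume = configmap_volume_with_mappings("configmap-test-volume-map")
```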
Apr 4 13:02:29.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:02:29.890: INFO: namespace configmap-6747 deletion completed in 6.094312903s
• [SLOW TEST:10.299 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:02:29.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 4 13:02:29.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-8426'
Apr 4 13:02:30.055: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version.
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 4 13:02:30.055: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Apr 4 13:02:30.140: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-knm52]
Apr 4 13:02:30.140: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-knm52" in namespace "kubectl-8426" to be "running and ready"
Apr 4 13:02:30.143: INFO: Pod "e2e-test-nginx-rc-knm52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.854648ms
Apr 4 13:02:32.148: INFO: Pod "e2e-test-nginx-rc-knm52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007794651s
Apr 4 13:02:34.152: INFO: Pod "e2e-test-nginx-rc-knm52": Phase="Running", Reason="", readiness=true. Elapsed: 4.012637642s
Apr 4 13:02:34.152: INFO: Pod "e2e-test-nginx-rc-knm52" satisfied condition "running and ready"
Apr 4 13:02:34.153: INFO: Wanted all 1 pods to be running and ready. Result: true.
Pods: [e2e-test-nginx-rc-knm52]
Apr 4 13:02:34.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-8426'
Apr 4 13:02:34.273: INFO: stderr: ""
Apr 4 13:02:34.273: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Apr 4 13:02:34.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-8426'
Apr 4 13:02:34.371: INFO: stderr: ""
Apr 4 13:02:34.371: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:02:34.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8426" for this suite.
Apr 4 13:02:56.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:02:56.484: INFO: namespace kubectl-8426 deletion completed in 22.110273158s
• [SLOW TEST:26.594 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io]
Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:02:56.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 4 13:03:00.582: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:03:00.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6493" for this suite.
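The termination-message test above relies on `terminationMessagePolicy: FallbackToLogsOnError`: when a container exits non-zero without writing to its termination-message file, the kubelet falls back to the tail of the container log (hence the expected message "DONE" in the log line above). A minimal sketch of such a container spec, with assumed names and image:

```python
# Illustrative container spec for TerminationMessagePolicy
# FallbackToLogsOnError. The container prints a message to its log and
# exits non-zero without touching /dev/termination-log, so the kubelet
# reports the log tail as the termination message. Names are assumptions.

def failing_container(message="DONE"):
    return {
        "name": "termination-message-container",
        "image": "busybox",  # illustrative image
        "command": ["/bin/sh", "-c", "echo {}; exit 1".format(message)],
        # Default path; nothing is written here, which triggers the fallback.
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "FallbackToLogsOnError",
    }

container = failing_container()
```

The default policy, `File`, would leave the termination message empty in this scenario; only the fallback policy surfaces the log tail.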
Apr 4 13:03:06.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:03:06.716: INFO: namespace container-runtime-6493 deletion completed in 6.08708223s
• [SLOW TEST:10.230 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:03:06.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 4 13:03:06.814: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d943abc9-4804-43ad-b312-c418fba04151"
in namespace "downward-api-4366" to be "success or failure"
Apr 4 13:03:06.823: INFO: Pod "downwardapi-volume-d943abc9-4804-43ad-b312-c418fba04151": Phase="Pending", Reason="", readiness=false. Elapsed: 9.246256ms
Apr 4 13:03:08.829: INFO: Pod "downwardapi-volume-d943abc9-4804-43ad-b312-c418fba04151": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015611009s
Apr 4 13:03:10.835: INFO: Pod "downwardapi-volume-d943abc9-4804-43ad-b312-c418fba04151": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021015934s
STEP: Saw pod success
Apr 4 13:03:10.835: INFO: Pod "downwardapi-volume-d943abc9-4804-43ad-b312-c418fba04151" satisfied condition "success or failure"
Apr 4 13:03:10.837: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-d943abc9-4804-43ad-b312-c418fba04151 container client-container:
STEP: delete the pod
Apr 4 13:03:10.854: INFO: Waiting for pod downwardapi-volume-d943abc9-4804-43ad-b312-c418fba04151 to disappear
Apr 4 13:03:10.865: INFO: Pod downwardapi-volume-d943abc9-4804-43ad-b312-c418fba04151 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:03:10.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4366" for this suite.
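Unlike the DefaultMode test earlier, the memory-request test above projects a container resource through a `resourceFieldRef` item rather than a `fieldRef`. The log does not show the manifest; this sketch assumes a container name and divisor for illustration:

```python
# Illustrative downward API volume item exposing the container's memory
# request via resourceFieldRef. containerName and divisor are assumed
# values; the e2e test generates its own.

def memory_request_volume():
    return {
        "name": "podinfo",
        "downwardAPI": {
            "items": [{
                "path": "memory_request",
                "resourceFieldRef": {
                    # resourceFieldRef needs the container name when the pod
                    # has (or could have) more than one container.
                    "containerName": "client-container",
                    "resource": "requests.memory",
                    # With divisor "1Mi" the projected file holds the request
                    # expressed in MiB.
                    "divisor": "1Mi",
                },
            }],
        },
    }

volume = memory_request_volume()
```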
Apr 4 13:03:16.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:03:16.954: INFO: namespace downward-api-4366 deletion completed in 6.08608855s
• [SLOW TEST:10.238 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:03:16.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Apr 4 13:03:21.049: INFO: Pod pod-hostip-65491ee4-df7b-43ba-87ff-8a695b97d506 has hostIP: 172.17.0.6
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:03:21.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-407" for this suite.
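The host-IP test above only needs to see a non-empty, parseable `status.hostIP` on the running pod (the log shows 172.17.0.6, a node address in this kind cluster). The check it performs can be sketched as a small predicate; the function name and status shape are illustrative, not the e2e framework's API:

```python
# Sketch of the condition the "should get a host IP" test verifies on the
# pod's status: hostIP must be present and a syntactically valid IP address.
import ipaddress

def has_valid_host_ip(pod_status):
    """Return True iff pod_status carries a non-empty, parseable hostIP."""
    host_ip = pod_status.get("hostIP", "")
    if not host_ip:
        return False
    try:
        ipaddress.ip_address(host_ip)
    except ValueError:
        return False
    return True
```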
Apr 4 13:03:43.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:03:43.145: INFO: namespace pods-407 deletion completed in 22.091884654s
• [SLOW TEST:26.190 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:03:43.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Apr 4 13:03:43.230: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9521,SelfLink:/api/v1/namespaces/watch-9521/configmaps/e2e-watch-test-configmap-a,UID:e4b6dfe7-8e37-432c-b218-9dccf17b9fe2,ResourceVersion:3582176,Generation:0,CreationTimestamp:2020-04-04 13:03:43 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 4 13:03:43.230: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9521,SelfLink:/api/v1/namespaces/watch-9521/configmaps/e2e-watch-test-configmap-a,UID:e4b6dfe7-8e37-432c-b218-9dccf17b9fe2,ResourceVersion:3582176,Generation:0,CreationTimestamp:2020-04-04 13:03:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Apr 4 13:03:53.239: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9521,SelfLink:/api/v1/namespaces/watch-9521/configmaps/e2e-watch-test-configmap-a,UID:e4b6dfe7-8e37-432c-b218-9dccf17b9fe2,ResourceVersion:3582196,Generation:0,CreationTimestamp:2020-04-04 13:03:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Apr 4 13:03:53.239: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9521,SelfLink:/api/v1/namespaces/watch-9521/configmaps/e2e-watch-test-configmap-a,UID:e4b6dfe7-8e37-432c-b218-9dccf17b9fe2,ResourceVersion:3582196,Generation:0,CreationTimestamp:2020-04-04 13:03:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Apr 4 13:04:03.248: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9521,SelfLink:/api/v1/namespaces/watch-9521/configmaps/e2e-watch-test-configmap-a,UID:e4b6dfe7-8e37-432c-b218-9dccf17b9fe2,ResourceVersion:3582216,Generation:0,CreationTimestamp:2020-04-04 13:03:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 4 13:04:03.248: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9521,SelfLink:/api/v1/namespaces/watch-9521/configmaps/e2e-watch-test-configmap-a,UID:e4b6dfe7-8e37-432c-b218-9dccf17b9fe2,ResourceVersion:3582216,Generation:0,CreationTimestamp:2020-04-04 13:03:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap:
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Apr 4 13:04:13.255: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9521,SelfLink:/api/v1/namespaces/watch-9521/configmaps/e2e-watch-test-configmap-a,UID:e4b6dfe7-8e37-432c-b218-9dccf17b9fe2,ResourceVersion:3582237,Generation:0,CreationTimestamp:2020-04-04 13:03:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 4 13:04:13.256: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9521,SelfLink:/api/v1/namespaces/watch-9521/configmaps/e2e-watch-test-configmap-a,UID:e4b6dfe7-8e37-432c-b218-9dccf17b9fe2,ResourceVersion:3582237,Generation:0,CreationTimestamp:2020-04-04 13:03:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Apr 4 13:04:23.264: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9521,SelfLink:/api/v1/namespaces/watch-9521/configmaps/e2e-watch-test-configmap-b,UID:2505317b-56d8-4314-ad67-89258c691787,ResourceVersion:3582257,Generation:0,CreationTimestamp:2020-04-04 13:04:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 4 13:04:23.264: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9521,SelfLink:/api/v1/namespaces/watch-9521/configmaps/e2e-watch-test-configmap-b,UID:2505317b-56d8-4314-ad67-89258c691787,ResourceVersion:3582257,Generation:0,CreationTimestamp:2020-04-04 13:04:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Apr 4 13:04:33.271: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9521,SelfLink:/api/v1/namespaces/watch-9521/configmaps/e2e-watch-test-configmap-b,UID:2505317b-56d8-4314-ad67-89258c691787,ResourceVersion:3582279,Generation:0,CreationTimestamp:2020-04-04 13:04:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap:
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 4 13:04:33.271: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9521,SelfLink:/api/v1/namespaces/watch-9521/configmaps/e2e-watch-test-configmap-b,UID:2505317b-56d8-4314-ad67-89258c691787,ResourceVersion:3582279,Generation:0,CreationTimestamp:2020-04-04 13:04:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:04:43.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9521" for this suite. 
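The Watchers test above verifies that events for configmap A reach only the watchers whose label selector matches A, and likewise for B. The dispatch rule it relies on can be sketched with a minimal in-memory model (a hedged illustration; `matches`, `dispatch`, and the watcher names are invented for this sketch and are not client-go APIs):

```python
def matches(selector, labels):
    """A label selector matches when every key/value pair it requires
    is present in the object's labels; an empty selector matches all."""
    return all(labels.get(k) == v for k, v in selector.items())

def dispatch(event, watchers):
    """Return the names of watchers that should observe this event."""
    etype, obj = event
    return [name for name, sel in watchers.items() if matches(sel, obj["labels"])]

watchers = {
    "watcher-a": {"watch-this-configmap": "multiple-watchers-A"},
    "watcher-b": {"watch-this-configmap": "multiple-watchers-B"},
    "watcher-ab": {},  # empty selector: observes everything
}

cm_a = {"name": "e2e-watch-test-configmap-a",
        "labels": {"watch-this-configmap": "multiple-watchers-A"}}

# Deleting configmap A notifies watcher-a and the catch-all, never watcher-b.
print(dispatch(("DELETED", cm_a), watchers))  # -> ['watcher-a', 'watcher-ab']
```

This mirrors why the log shows each A event twice (two matching watchers) and no cross-delivery to the B watcher.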
Apr 4 13:04:49.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:04:49.374: INFO: namespace watch-9521 deletion completed in 6.098209737s • [SLOW TEST:66.229 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:04:49.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-9175 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9175 to expose endpoints map[] Apr 4 13:04:49.514: INFO: Get endpoints failed (12.588814ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Apr 4 13:04:50.518: INFO: successfully validated that service endpoint-test2 in namespace services-9175 exposes endpoints map[] (1.016631093s elapsed) STEP: Creating pod pod1 in namespace services-9175 STEP: waiting up to 3m0s for 
service endpoint-test2 in namespace services-9175 to expose endpoints map[pod1:[80]] Apr 4 13:04:54.583: INFO: successfully validated that service endpoint-test2 in namespace services-9175 exposes endpoints map[pod1:[80]] (4.05764176s elapsed) STEP: Creating pod pod2 in namespace services-9175 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9175 to expose endpoints map[pod1:[80] pod2:[80]] Apr 4 13:04:57.649: INFO: successfully validated that service endpoint-test2 in namespace services-9175 exposes endpoints map[pod1:[80] pod2:[80]] (3.062551632s elapsed) STEP: Deleting pod pod1 in namespace services-9175 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9175 to expose endpoints map[pod2:[80]] Apr 4 13:04:58.677: INFO: successfully validated that service endpoint-test2 in namespace services-9175 exposes endpoints map[pod2:[80]] (1.019121551s elapsed) STEP: Deleting pod pod2 in namespace services-9175 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9175 to expose endpoints map[] Apr 4 13:04:59.732: INFO: successfully validated that service endpoint-test2 in namespace services-9175 exposes endpoints map[] (1.050222476s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:04:59.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9175" for this suite. 
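The Services test above repeatedly validates an expected endpoints map (`map[]`, `map[pod1:[80]]`, `map[pod1:[80] pod2:[80]]`, back to `map[]`) as pods are created and deleted. The bookkeeping it checks reduces to: a service's endpoints are the ports of its currently ready backing pods. A hedged sketch of that invariant (plain dicts standing in for the real Endpoints objects; all names here are illustrative):

```python
def expected_endpoints(pods):
    """Map pod name -> sorted container ports, for ready pods only."""
    return {name: sorted(ports) for name, (ready, ports) in pods.items() if ready}

pods = {}
assert expected_endpoints(pods) == {}                            # map[]

pods["pod1"] = (True, [80])
assert expected_endpoints(pods) == {"pod1": [80]}                # map[pod1:[80]]

pods["pod2"] = (True, [80])
assert expected_endpoints(pods) == {"pod1": [80], "pod2": [80]}  # both exposed

del pods["pod1"]
assert expected_endpoints(pods) == {"pod2": [80]}                # map[pod2:[80]]
```

Each assertion corresponds to one "successfully validated that service endpoint-test2 ... exposes endpoints" line in the log.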
Apr 4 13:05:21.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:05:21.859: INFO: namespace services-9175 deletion completed in 22.099475096s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:32.484 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:05:21.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container Apr 4 13:05:25.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-0a2e1345-56c7-47a7-aeaf-9d8aaa1cc22b -c busybox-main-container --namespace=emptydir-7456 -- cat /usr/share/volumeshare/shareddata.txt' Apr 4 13:05:26.183: INFO: stderr: "I0404 13:05:26.094763 315 log.go:172] (0xc000130f20) (0xc0009ac8c0) Create stream\nI0404 13:05:26.094828 315 log.go:172] (0xc000130f20) (0xc0009ac8c0) 
Stream added, broadcasting: 1\nI0404 13:05:26.098402 315 log.go:172] (0xc000130f20) Reply frame received for 1\nI0404 13:05:26.098464 315 log.go:172] (0xc000130f20) (0xc0005c2280) Create stream\nI0404 13:05:26.098490 315 log.go:172] (0xc000130f20) (0xc0005c2280) Stream added, broadcasting: 3\nI0404 13:05:26.099635 315 log.go:172] (0xc000130f20) Reply frame received for 3\nI0404 13:05:26.099667 315 log.go:172] (0xc000130f20) (0xc0005c2320) Create stream\nI0404 13:05:26.099677 315 log.go:172] (0xc000130f20) (0xc0005c2320) Stream added, broadcasting: 5\nI0404 13:05:26.100802 315 log.go:172] (0xc000130f20) Reply frame received for 5\nI0404 13:05:26.176895 315 log.go:172] (0xc000130f20) Data frame received for 5\nI0404 13:05:26.176942 315 log.go:172] (0xc0005c2320) (5) Data frame handling\nI0404 13:05:26.176965 315 log.go:172] (0xc000130f20) Data frame received for 3\nI0404 13:05:26.176976 315 log.go:172] (0xc0005c2280) (3) Data frame handling\nI0404 13:05:26.176986 315 log.go:172] (0xc0005c2280) (3) Data frame sent\nI0404 13:05:26.177000 315 log.go:172] (0xc000130f20) Data frame received for 3\nI0404 13:05:26.177012 315 log.go:172] (0xc0005c2280) (3) Data frame handling\nI0404 13:05:26.179099 315 log.go:172] (0xc000130f20) Data frame received for 1\nI0404 13:05:26.179132 315 log.go:172] (0xc0009ac8c0) (1) Data frame handling\nI0404 13:05:26.179162 315 log.go:172] (0xc0009ac8c0) (1) Data frame sent\nI0404 13:05:26.179198 315 log.go:172] (0xc000130f20) (0xc0009ac8c0) Stream removed, broadcasting: 1\nI0404 13:05:26.179221 315 log.go:172] (0xc000130f20) Go away received\nI0404 13:05:26.179556 315 log.go:172] (0xc000130f20) (0xc0009ac8c0) Stream removed, broadcasting: 1\nI0404 13:05:26.179571 315 log.go:172] (0xc000130f20) (0xc0005c2280) Stream removed, broadcasting: 3\nI0404 13:05:26.179577 315 log.go:172] (0xc000130f20) (0xc0005c2320) Stream removed, broadcasting: 5\n" Apr 4 13:05:26.183: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] 
EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:05:26.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7456" for this suite. Apr 4 13:05:32.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:05:32.295: INFO: namespace emptydir-7456 deletion completed in 6.107430584s • [SLOW TEST:10.436 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:05:32.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-14076c1f-4577-4fcb-8209-ecc43627a95b STEP: Creating secret with name secret-projected-all-test-volume-680437a5-1077-4551-8306-2a85e8cf29a7 STEP: Creating a pod to test Check all projections for projected volume plugin Apr 4 13:05:32.385: 
INFO: Waiting up to 5m0s for pod "projected-volume-f42d2909-d728-433d-9f5a-5022907f4f39" in namespace "projected-3172" to be "success or failure" Apr 4 13:05:32.388: INFO: Pod "projected-volume-f42d2909-d728-433d-9f5a-5022907f4f39": Phase="Pending", Reason="", readiness=false. Elapsed: 3.798404ms Apr 4 13:05:34.393: INFO: Pod "projected-volume-f42d2909-d728-433d-9f5a-5022907f4f39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00828378s Apr 4 13:05:36.397: INFO: Pod "projected-volume-f42d2909-d728-433d-9f5a-5022907f4f39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012526407s STEP: Saw pod success Apr 4 13:05:36.397: INFO: Pod "projected-volume-f42d2909-d728-433d-9f5a-5022907f4f39" satisfied condition "success or failure" Apr 4 13:05:36.400: INFO: Trying to get logs from node iruya-worker pod projected-volume-f42d2909-d728-433d-9f5a-5022907f4f39 container projected-all-volume-test: STEP: delete the pod Apr 4 13:05:36.432: INFO: Waiting for pod projected-volume-f42d2909-d728-433d-9f5a-5022907f4f39 to disappear Apr 4 13:05:36.436: INFO: Pod projected-volume-f42d2909-d728-433d-9f5a-5022907f4f39 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:05:36.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3172" for this suite. 
Apr 4 13:05:42.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:05:42.552: INFO: namespace projected-3172 deletion completed in 6.112929414s • [SLOW TEST:10.256 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:05:42.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 4 13:05:42.602: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 4 13:05:42.622: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 4 13:05:47.626: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 4 13:05:47.626: INFO: Creating deployment "test-rolling-update-deployment" Apr 4 13:05:47.630: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted 
replica set "test-rolling-update-controller" has Apr 4 13:05:47.644: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 4 13:05:49.652: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 4 13:05:49.655: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602347, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602347, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602347, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602347, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 4 13:05:51.658: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 4 13:05:51.669: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-6359,SelfLink:/apis/apps/v1/namespaces/deployment-6359/deployments/test-rolling-update-deployment,UID:337ec877-6181-42a0-bc42-08066d596bc0,ResourceVersion:3582589,Generation:1,CreationTimestamp:2020-04-04 13:05:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-04 13:05:47 +0000 UTC 2020-04-04 13:05:47 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-04 13:05:50 +0000 UTC 2020-04-04 13:05:47 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 4 13:05:51.672: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-6359,SelfLink:/apis/apps/v1/namespaces/deployment-6359/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:d2a8a691-76dc-48a6-b965-894237e4c7c3,ResourceVersion:3582578,Generation:1,CreationTimestamp:2020-04-04 13:05:47 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 337ec877-6181-42a0-bc42-08066d596bc0 0xc00172ed77 0xc00172ed78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 4 13:05:51.672: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 4 13:05:51.673: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-6359,SelfLink:/apis/apps/v1/namespaces/deployment-6359/replicasets/test-rolling-update-controller,UID:8d4b89f5-9532-4f11-ad33-61bba01c85b6,ResourceVersion:3582587,Generation:2,CreationTimestamp:2020-04-04 13:05:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 337ec877-6181-42a0-bc42-08066d596bc0 0xc00172ec8f 0xc00172eca0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 4 13:05:51.676: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-5qvvh" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-5qvvh,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-6359,SelfLink:/api/v1/namespaces/deployment-6359/pods/test-rolling-update-deployment-79f6b9d75c-5qvvh,UID:f37fd830-20d5-47c5-8ed4-8d362cfbd097,ResourceVersion:3582577,Generation:0,CreationTimestamp:2020-04-04 13:05:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c d2a8a691-76dc-48a6-b965-894237e4c7c3 0xc00172f637 0xc00172f638}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-g4xh9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g4xh9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-g4xh9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00172f6b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00172f6d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 13:05:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 13:05:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 13:05:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 13:05:47 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.181,StartTime:2020-04-04 13:05:47 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-04 13:05:50 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://1140e9f3b698c3f403405fe8d0aa8212837e84bf696f6edaeefcc153ee0cc5b9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:05:51.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-6359" for this suite. Apr 4 13:05:57.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:05:57.764: INFO: namespace deployment-6359 deletion completed in 6.083745267s • [SLOW TEST:15.212 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:05:57.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 4 13:05:57.848: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6b5f42f9-0ca3-4a0d-a658-d693f1523cbd" in namespace "downward-api-4916" to be "success or failure" Apr 4 13:05:57.851: INFO: Pod "downwardapi-volume-6b5f42f9-0ca3-4a0d-a658-d693f1523cbd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.134035ms Apr 4 13:05:59.856: INFO: Pod "downwardapi-volume-6b5f42f9-0ca3-4a0d-a658-d693f1523cbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008200626s Apr 4 13:06:01.860: INFO: Pod "downwardapi-volume-6b5f42f9-0ca3-4a0d-a658-d693f1523cbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011937573s STEP: Saw pod success Apr 4 13:06:01.860: INFO: Pod "downwardapi-volume-6b5f42f9-0ca3-4a0d-a658-d693f1523cbd" satisfied condition "success or failure" Apr 4 13:06:01.862: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-6b5f42f9-0ca3-4a0d-a658-d693f1523cbd container client-container: STEP: delete the pod Apr 4 13:06:01.881: INFO: Waiting for pod downwardapi-volume-6b5f42f9-0ca3-4a0d-a658-d693f1523cbd to disappear Apr 4 13:06:01.885: INFO: Pod downwardapi-volume-6b5f42f9-0ca3-4a0d-a658-d693f1523cbd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:06:01.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4916" for this suite. 
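The Downward API volume tests above expose a container's cpu and memory limits through a `resourceFieldRef`. The value a container reads back is the resource quantity divided by the field's divisor, rounded up. A hedged arithmetic sketch of that behavior (plain integers for bytes and millicores stand in for the real `resource.Quantity` strings; `field_value` is an illustrative name, not a Kubernetes API):

```python
import math

MI = 1024 * 1024  # one mebibyte, matching the "Mi" suffix

def field_value(quantity, divisor):
    """Downward-API style projection: quantity / divisor, rounded up."""
    return math.ceil(quantity / divisor)

# A 64Mi memory limit exposed with divisor 1Mi reads back as 64.
print(field_value(64 * MI, 1 * MI))  # -> 64
# A 250m cpu limit (250 millicores) with divisor 1m reads back as 250.
print(field_value(250, 1))           # -> 250
```

The test pods write these projected values into a volume file, and the "Saw pod success" lines above indicate the container observed the expected numbers.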
Apr 4 13:06:07.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:06:07.983: INFO: namespace downward-api-4916 deletion completed in 6.095060326s

• [SLOW TEST:10.219 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:06:07.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-a86e3acf-8f40-467d-8075-490f8661de17
STEP: Creating a pod to test consume secrets
Apr 4 13:06:08.050: INFO: Waiting up to 5m0s for pod "pod-secrets-8bb85b5e-9a65-41c7-a48d-3ec4db5575b5" in namespace "secrets-7439" to be "success or failure"
Apr 4 13:06:08.055: INFO: Pod "pod-secrets-8bb85b5e-9a65-41c7-a48d-3ec4db5575b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.825654ms
Apr 4 13:06:10.059: INFO: Pod "pod-secrets-8bb85b5e-9a65-41c7-a48d-3ec4db5575b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008674768s
Apr 4 13:06:12.063: INFO: Pod "pod-secrets-8bb85b5e-9a65-41c7-a48d-3ec4db5575b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012689284s
STEP: Saw pod success
Apr 4 13:06:12.063: INFO: Pod "pod-secrets-8bb85b5e-9a65-41c7-a48d-3ec4db5575b5" satisfied condition "success or failure"
Apr 4 13:06:12.066: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-8bb85b5e-9a65-41c7-a48d-3ec4db5575b5 container secret-volume-test:
STEP: delete the pod
Apr 4 13:06:12.114: INFO: Waiting for pod pod-secrets-8bb85b5e-9a65-41c7-a48d-3ec4db5575b5 to disappear
Apr 4 13:06:12.119: INFO: Pod pod-secrets-8bb85b5e-9a65-41c7-a48d-3ec4db5575b5 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:06:12.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7439" for this suite.
Apr 4 13:06:18.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:06:18.232: INFO: namespace secrets-7439 deletion completed in 6.109087471s

• [SLOW TEST:10.248 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:06:18.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-605801f3-93c9-405f-92e2-d070e281ab54
STEP: Creating a pod to test consume secrets
Apr 4 13:06:18.308: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-da61c56a-8a00-4b39-9840-972947c74d4b" in namespace "projected-538" to be "success or failure"
Apr 4 13:06:18.346: INFO: Pod "pod-projected-secrets-da61c56a-8a00-4b39-9840-972947c74d4b": Phase="Pending", Reason="", readiness=false. Elapsed: 37.777719ms
Apr 4 13:06:20.350: INFO: Pod "pod-projected-secrets-da61c56a-8a00-4b39-9840-972947c74d4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041477017s
Apr 4 13:06:22.354: INFO: Pod "pod-projected-secrets-da61c56a-8a00-4b39-9840-972947c74d4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045603221s
STEP: Saw pod success
Apr 4 13:06:22.354: INFO: Pod "pod-projected-secrets-da61c56a-8a00-4b39-9840-972947c74d4b" satisfied condition "success or failure"
Apr 4 13:06:22.357: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-da61c56a-8a00-4b39-9840-972947c74d4b container projected-secret-volume-test:
STEP: delete the pod
Apr 4 13:06:22.392: INFO: Waiting for pod pod-projected-secrets-da61c56a-8a00-4b39-9840-972947c74d4b to disappear
Apr 4 13:06:22.418: INFO: Pod pod-projected-secrets-da61c56a-8a00-4b39-9840-972947c74d4b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:06:22.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-538" for this suite.
Apr 4 13:06:28.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:06:28.519: INFO: namespace projected-538 deletion completed in 6.098328467s

• [SLOW TEST:10.287 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:06:28.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-2d276581-508e-4a2a-b5b1-d8257e6b122c
STEP: Creating a pod to test consume secrets
Apr 4 13:06:28.584: INFO: Waiting up to 5m0s for pod "pod-secrets-14d77580-1fba-4f11-8395-b29fc25dc5d4" in namespace "secrets-7305" to be "success or failure"
Apr 4 13:06:28.588: INFO: Pod "pod-secrets-14d77580-1fba-4f11-8395-b29fc25dc5d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.181439ms
Apr 4 13:06:30.591: INFO: Pod "pod-secrets-14d77580-1fba-4f11-8395-b29fc25dc5d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00739841s
Apr 4 13:06:32.596: INFO: Pod "pod-secrets-14d77580-1fba-4f11-8395-b29fc25dc5d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011705676s
STEP: Saw pod success
Apr 4 13:06:32.596: INFO: Pod "pod-secrets-14d77580-1fba-4f11-8395-b29fc25dc5d4" satisfied condition "success or failure"
Apr 4 13:06:32.599: INFO: Trying to get logs from node iruya-worker pod pod-secrets-14d77580-1fba-4f11-8395-b29fc25dc5d4 container secret-volume-test:
STEP: delete the pod
Apr 4 13:06:32.635: INFO: Waiting for pod pod-secrets-14d77580-1fba-4f11-8395-b29fc25dc5d4 to disappear
Apr 4 13:06:32.642: INFO: Pod pod-secrets-14d77580-1fba-4f11-8395-b29fc25dc5d4 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:06:32.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7305" for this suite.
Apr 4 13:06:38.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:06:38.739: INFO: namespace secrets-7305 deletion completed in 6.092876583s

• [SLOW TEST:10.220 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:06:38.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Apr 4 13:06:38.781: INFO: Waiting up to 5m0s for pod "var-expansion-a849d189-d849-44ec-a3f6-0946dd573552" in namespace "var-expansion-154" to be "success or failure"
Apr 4 13:06:38.809: INFO: Pod "var-expansion-a849d189-d849-44ec-a3f6-0946dd573552": Phase="Pending", Reason="", readiness=false. Elapsed: 28.510548ms
Apr 4 13:06:40.814: INFO: Pod "var-expansion-a849d189-d849-44ec-a3f6-0946dd573552": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033111489s
Apr 4 13:06:42.817: INFO: Pod "var-expansion-a849d189-d849-44ec-a3f6-0946dd573552": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036744089s
STEP: Saw pod success
Apr 4 13:06:42.817: INFO: Pod "var-expansion-a849d189-d849-44ec-a3f6-0946dd573552" satisfied condition "success or failure"
Apr 4 13:06:42.821: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-a849d189-d849-44ec-a3f6-0946dd573552 container dapi-container:
STEP: delete the pod
Apr 4 13:06:42.840: INFO: Waiting for pod var-expansion-a849d189-d849-44ec-a3f6-0946dd573552 to disappear
Apr 4 13:06:42.857: INFO: Pod var-expansion-a849d189-d849-44ec-a3f6-0946dd573552 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:06:42.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-154" for this suite.
Apr 4 13:06:48.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:06:48.954: INFO: namespace var-expansion-154 deletion completed in 6.094043291s

• [SLOW TEST:10.214 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:06:48.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 4 13:06:57.078: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 4 13:06:57.086: INFO: Pod pod-with-poststart-http-hook still exists
Apr 4 13:06:59.086: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 4 13:06:59.091: INFO: Pod pod-with-poststart-http-hook still exists
Apr 4 13:07:01.086: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 4 13:07:01.091: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:07:01.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2027" for this suite.
Apr 4 13:07:23.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:07:23.234: INFO: namespace container-lifecycle-hook-2027 deletion completed in 22.138978654s

• [SLOW TEST:34.280 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:07:23.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-d9cfde03-6443-4b78-9f3b-96199de05cff
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-d9cfde03-6443-4b78-9f3b-96199de05cff
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:08:33.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6442" for this suite.
Apr 4 13:08:55.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:08:55.794: INFO: namespace projected-6442 deletion completed in 22.11185004s

• [SLOW TEST:92.558 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:08:55.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 4 13:08:55.876: INFO: Waiting up to 5m0s for pod "pod-e1a99f8a-7aa4-492c-b7b1-b820815d3fd9" in namespace "emptydir-9435" to be "success or failure"
Apr 4 13:08:55.879: INFO: Pod "pod-e1a99f8a-7aa4-492c-b7b1-b820815d3fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.989953ms
Apr 4 13:08:57.884: INFO: Pod "pod-e1a99f8a-7aa4-492c-b7b1-b820815d3fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00753158s
Apr 4 13:08:59.887: INFO: Pod "pod-e1a99f8a-7aa4-492c-b7b1-b820815d3fd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011055714s
STEP: Saw pod success
Apr 4 13:08:59.887: INFO: Pod "pod-e1a99f8a-7aa4-492c-b7b1-b820815d3fd9" satisfied condition "success or failure"
Apr 4 13:08:59.890: INFO: Trying to get logs from node iruya-worker2 pod pod-e1a99f8a-7aa4-492c-b7b1-b820815d3fd9 container test-container:
STEP: delete the pod
Apr 4 13:08:59.907: INFO: Waiting for pod pod-e1a99f8a-7aa4-492c-b7b1-b820815d3fd9 to disappear
Apr 4 13:08:59.912: INFO: Pod pod-e1a99f8a-7aa4-492c-b7b1-b820815d3fd9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:08:59.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9435" for this suite.
Apr 4 13:09:05.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:09:06.025: INFO: namespace emptydir-9435 deletion completed in 6.110029114s

• [SLOW TEST:10.230 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:09:06.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 4 13:09:06.109: INFO: Pod name rollover-pod: Found 0 pods out of 1
Apr 4 13:09:11.114: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 4 13:09:11.114: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Apr 4 13:09:13.118: INFO: Creating deployment "test-rollover-deployment"
Apr 4 13:09:13.132: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Apr 4 13:09:15.139: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Apr 4 13:09:15.146: INFO: Ensure that both replica sets have 1 created replica
Apr 4 13:09:15.152: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Apr 4 13:09:15.159: INFO: Updating deployment test-rollover-deployment
Apr 4 13:09:15.159: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Apr 4 13:09:17.186: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Apr 4 13:09:17.193: INFO: Make sure deployment "test-rollover-deployment" is complete
Apr 4 13:09:17.198: INFO: all replica sets need to contain the pod-template-hash label
Apr 4 13:09:17.198: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602553, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602553, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602555, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602553, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 4 13:09:19.207: INFO: all replica sets need to contain the pod-template-hash label
Apr 4 13:09:19.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602553, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602553, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602558, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602553, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 4 13:09:21.207: INFO: all replica sets need to contain the pod-template-hash label
Apr 4 13:09:21.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602553, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602553, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602558, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602553, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 4 13:09:23.207: INFO: all replica sets need to contain the pod-template-hash label
Apr 4 13:09:23.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602553, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602553, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602558, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602553, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 4 13:09:25.215: INFO: all replica sets need to contain the pod-template-hash label
Apr 4 13:09:25.215: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602553, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602553, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602558, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602553, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 4 13:09:27.206: INFO: all replica sets need to contain the pod-template-hash label
Apr 4 13:09:27.206: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602553, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602553, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602558, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721602553, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 4 13:09:29.207: INFO: 
Apr 4 13:09:29.207: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Apr 4 13:09:29.218: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-8742,SelfLink:/apis/apps/v1/namespaces/deployment-8742/deployments/test-rollover-deployment,UID:13baa435-a48a-4f6a-89f3-6a7d870c3ea7,ResourceVersion:3583329,Generation:2,CreationTimestamp:2020-04-04 13:09:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-04 13:09:13 +0000 UTC 2020-04-04 13:09:13 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-04 13:09:28 +0000 UTC 2020-04-04 13:09:13 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
Apr 4 13:09:29.221: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-8742,SelfLink:/apis/apps/v1/namespaces/deployment-8742/replicasets/test-rollover-deployment-854595fc44,UID:e0b8fbbd-046d-4cf0-b165-5d4c4093848c,ResourceVersion:3583318,Generation:2,CreationTimestamp:2020-04-04 13:09:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 13baa435-a48a-4f6a-89f3-6a7d870c3ea7 0xc000d072c7 0xc000d072c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Apr 4 13:09:29.221: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Apr 4 13:09:29.221: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-8742,SelfLink:/apis/apps/v1/namespaces/deployment-8742/replicasets/test-rollover-controller,UID:a3b16905-ccc1-4b52-b41b-d8fe64ed4389,ResourceVersion:3583327,Generation:2,CreationTimestamp:2020-04-04 13:09:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 13baa435-a48a-4f6a-89f3-6a7d870c3ea7 0xc000d071f7 0xc000d071f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Apr 4 13:09:29.221: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-8742,SelfLink:/apis/apps/v1/namespaces/deployment-8742/replicasets/test-rollover-deployment-9b8b997cf,UID:81851f40-98c5-4ab5-a36e-06cf57779a07,ResourceVersion:3583282,Generation:2,CreationTimestamp:2020-04-04 13:09:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 13baa435-a48a-4f6a-89f3-6a7d870c3ea7 0xc000d073a0 0xc000d073a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 4 13:09:29.224: INFO: Pod "test-rollover-deployment-854595fc44-s22dq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-s22dq,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-8742,SelfLink:/api/v1/namespaces/deployment-8742/pods/test-rollover-deployment-854595fc44-s22dq,UID:360cda80-5fdb-42ff-8c11-9b1d74a04b83,ResourceVersion:3583296,Generation:0,CreationTimestamp:2020-04-04 13:09:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 e0b8fbbd-046d-4cf0-b165-5d4c4093848c 0xc000942057 0xc000942058}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dnlqb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnlqb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-dnlqb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000942150} {node.kubernetes.io/unreachable Exists NoExecute 0xc000942210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 13:09:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 13:09:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 13:09:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 13:09:15 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.70,StartTime:2020-04-04 13:09:15 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-04 13:09:17 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://16c6074d8fbad6aa30f489d5027c048de0b21674b61340b77e8a20c0dfdaeedf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:09:29.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8742" for this suite. Apr 4 13:09:35.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:09:35.348: INFO: namespace deployment-8742 deletion completed in 6.120699135s • [SLOW TEST:29.322 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:09:35.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 4 13:09:35.403: INFO: Creating quota "condition-test" that allows only two pods to run in the 
current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 4 13:09:37.443: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:09:38.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-459" for this suite. Apr 4 13:09:44.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:09:44.590: INFO: namespace replication-controller-459 deletion completed in 6.117277038s • [SLOW TEST:9.241 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:09:44.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4045 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-4045 STEP: Creating statefulset with conflicting port in namespace statefulset-4045 STEP: Waiting until pod test-pod will start running in namespace statefulset-4045 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4045 Apr 4 13:09:48.842: INFO: Observed stateful pod in namespace: statefulset-4045, name: ss-0, uid: 37b48acf-75fc-468d-84b9-0e615c45a2ca, status phase: Pending. Waiting for statefulset controller to delete. Apr 4 13:09:48.994: INFO: Observed stateful pod in namespace: statefulset-4045, name: ss-0, uid: 37b48acf-75fc-468d-84b9-0e615c45a2ca, status phase: Failed. Waiting for statefulset controller to delete. Apr 4 13:09:49.008: INFO: Observed stateful pod in namespace: statefulset-4045, name: ss-0, uid: 37b48acf-75fc-468d-84b9-0e615c45a2ca, status phase: Failed. Waiting for statefulset controller to delete. 
Apr 4 13:09:49.037: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4045 STEP: Removing pod with conflicting port in namespace statefulset-4045 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-4045 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 4 13:09:53.169: INFO: Deleting all statefulset in ns statefulset-4045 Apr 4 13:09:53.172: INFO: Scaling statefulset ss to 0 Apr 4 13:10:03.186: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 13:10:03.190: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:10:03.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4045" for this suite. Apr 4 13:10:09.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:10:09.341: INFO: namespace statefulset-4045 deletion completed in 6.107266683s • [SLOW TEST:24.751 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:10:09.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Apr 4 13:10:09.401: INFO: Waiting up to 5m0s for pod "client-containers-9de67f3f-a97f-408f-966e-8c866733dea3" in namespace "containers-7594" to be "success or failure" Apr 4 13:10:09.405: INFO: Pod "client-containers-9de67f3f-a97f-408f-966e-8c866733dea3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191899ms Apr 4 13:10:11.409: INFO: Pod "client-containers-9de67f3f-a97f-408f-966e-8c866733dea3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008373239s Apr 4 13:10:13.414: INFO: Pod "client-containers-9de67f3f-a97f-408f-966e-8c866733dea3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013040363s STEP: Saw pod success Apr 4 13:10:13.414: INFO: Pod "client-containers-9de67f3f-a97f-408f-966e-8c866733dea3" satisfied condition "success or failure" Apr 4 13:10:13.417: INFO: Trying to get logs from node iruya-worker pod client-containers-9de67f3f-a97f-408f-966e-8c866733dea3 container test-container: STEP: delete the pod Apr 4 13:10:13.436: INFO: Waiting for pod client-containers-9de67f3f-a97f-408f-966e-8c866733dea3 to disappear Apr 4 13:10:13.441: INFO: Pod client-containers-9de67f3f-a97f-408f-966e-8c866733dea3 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:10:13.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7594" for this suite. Apr 4 13:10:19.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:10:19.535: INFO: namespace containers-7594 deletion completed in 6.090431581s • [SLOW TEST:10.193 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:10:19.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned 
in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:10:19.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7530" for this suite. Apr 4 13:10:25.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:10:25.719: INFO: namespace services-7530 deletion completed in 6.100835444s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.184 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:10:25.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] 
should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 4 13:10:25.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1228' Apr 4 13:10:28.451: INFO: stderr: "" Apr 4 13:10:28.451: INFO: stdout: "replicationcontroller/redis-master created\n" Apr 4 13:10:28.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1228' Apr 4 13:10:28.807: INFO: stderr: "" Apr 4 13:10:28.807: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Apr 4 13:10:29.812: INFO: Selector matched 1 pods for map[app:redis] Apr 4 13:10:29.812: INFO: Found 0 / 1 Apr 4 13:10:30.827: INFO: Selector matched 1 pods for map[app:redis] Apr 4 13:10:30.827: INFO: Found 0 / 1 Apr 4 13:10:31.812: INFO: Selector matched 1 pods for map[app:redis] Apr 4 13:10:31.812: INFO: Found 0 / 1 Apr 4 13:10:32.812: INFO: Selector matched 1 pods for map[app:redis] Apr 4 13:10:32.812: INFO: Found 1 / 1 Apr 4 13:10:32.812: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 4 13:10:32.815: INFO: Selector matched 1 pods for map[app:redis] Apr 4 13:10:32.815: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Apr 4 13:10:32.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-b8nqt --namespace=kubectl-1228' Apr 4 13:10:32.938: INFO: stderr: "" Apr 4 13:10:32.938: INFO: stdout: "Name: redis-master-b8nqt\nNamespace: kubectl-1228\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Sat, 04 Apr 2020 13:10:28 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.190\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://de97102ece0951fd51fd54e7dc623b2a11bb82741b27de5e576c4b0d65c72795\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 04 Apr 2020 13:10:31 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-9vxd4 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-9vxd4:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-9vxd4\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-1228/redis-master-b8nqt to iruya-worker\n Normal Pulled 3s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-worker Created container redis-master\n Normal Started 1s kubelet, iruya-worker Started container redis-master\n" Apr 4 13:10:32.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master 
--namespace=kubectl-1228' Apr 4 13:10:33.073: INFO: stderr: "" Apr 4 13:10:33.073: INFO: stdout: "Name: redis-master\nNamespace: kubectl-1228\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: redis-master-b8nqt\n" Apr 4 13:10:33.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-1228' Apr 4 13:10:33.174: INFO: stderr: "" Apr 4 13:10:33.174: INFO: stdout: "Name: redis-master\nNamespace: kubectl-1228\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.110.68.75\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.190:6379\nSession Affinity: None\nEvents: \n" Apr 4 13:10:33.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Apr 4 13:10:33.312: INFO: stderr: "" Apr 4 13:10:33.312: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason 
Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sat, 04 Apr 2020 13:10:31 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 04 Apr 2020 13:10:31 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 04 Apr 2020 13:10:31 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 04 Apr 2020 13:10:31 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 19d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 19d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 19d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 
0 (0%) 19d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Apr 4 13:10:33.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-1228' Apr 4 13:10:33.421: INFO: stderr: "" Apr 4 13:10:33.421: INFO: stdout: "Name: kubectl-1228\nLabels: e2e-framework=kubectl\n e2e-run=708dc7c9-956a-4ecc-a99f-7866581cd178\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:10:33.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1228" for this suite. Apr 4 13:10:55.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:10:55.518: INFO: namespace kubectl-1228 deletion completed in 22.091182047s • [SLOW TEST:29.800 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:10:55.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0404 13:11:05.606016 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 4 13:11:05.606: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:11:05.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4817" for this suite. 
Apr 4 13:11:11.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:11:11.704: INFO: namespace gc-4817 deletion completed in 6.094017025s • [SLOW TEST:16.185 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:11:11.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 4 13:11:11.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8272' Apr 4 13:11:11.860: INFO: stderr: "kubectl run 
--generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 4 13:11:11.860: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Apr 4 13:11:11.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-8272' Apr 4 13:11:11.991: INFO: stderr: "" Apr 4 13:11:11.991: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:11:11.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8272" for this suite. Apr 4 13:11:18.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:11:18.088: INFO: namespace kubectl-8272 deletion completed in 6.093970265s • [SLOW TEST:6.384 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 
STEP: Creating a kubernetes client Apr 4 13:11:18.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Apr 4 13:11:18.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4053' Apr 4 13:11:18.423: INFO: stderr: "" Apr 4 13:11:18.423: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 4 13:11:18.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4053' Apr 4 13:11:18.558: INFO: stderr: "" Apr 4 13:11:18.558: INFO: stdout: "update-demo-nautilus-8qm5d update-demo-nautilus-dl8hd " Apr 4 13:11:18.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8qm5d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4053' Apr 4 13:11:18.652: INFO: stderr: "" Apr 4 13:11:18.652: INFO: stdout: "" Apr 4 13:11:18.653: INFO: update-demo-nautilus-8qm5d is created but not running Apr 4 13:11:23.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4053' Apr 4 13:11:23.757: INFO: stderr: "" Apr 4 13:11:23.757: INFO: stdout: "update-demo-nautilus-8qm5d update-demo-nautilus-dl8hd " Apr 4 13:11:23.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8qm5d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4053' Apr 4 13:11:23.845: INFO: stderr: "" Apr 4 13:11:23.845: INFO: stdout: "true" Apr 4 13:11:23.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8qm5d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4053' Apr 4 13:11:23.941: INFO: stderr: "" Apr 4 13:11:23.941: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 4 13:11:23.941: INFO: validating pod update-demo-nautilus-8qm5d Apr 4 13:11:23.945: INFO: got data: { "image": "nautilus.jpg" } Apr 4 13:11:23.945: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 4 13:11:23.945: INFO: update-demo-nautilus-8qm5d is verified up and running Apr 4 13:11:23.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dl8hd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4053' Apr 4 13:11:24.037: INFO: stderr: "" Apr 4 13:11:24.037: INFO: stdout: "true" Apr 4 13:11:24.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dl8hd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4053' Apr 4 13:11:24.121: INFO: stderr: "" Apr 4 13:11:24.121: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 4 13:11:24.121: INFO: validating pod update-demo-nautilus-dl8hd Apr 4 13:11:24.125: INFO: got data: { "image": "nautilus.jpg" } Apr 4 13:11:24.125: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 4 13:11:24.125: INFO: update-demo-nautilus-dl8hd is verified up and running STEP: rolling-update to new replication controller Apr 4 13:11:24.127: INFO: scanned /root for discovery docs: Apr 4 13:11:24.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4053' Apr 4 13:11:46.787: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 4 13:11:46.787: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 4 13:11:46.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4053' Apr 4 13:11:46.906: INFO: stderr: "" Apr 4 13:11:46.906: INFO: stdout: "update-demo-kitten-gsk87 update-demo-kitten-qxqz6 " Apr 4 13:11:46.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gsk87 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4053' Apr 4 13:11:46.993: INFO: stderr: "" Apr 4 13:11:46.993: INFO: stdout: "true" Apr 4 13:11:46.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gsk87 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4053' Apr 4 13:11:47.075: INFO: stderr: "" Apr 4 13:11:47.075: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 4 13:11:47.075: INFO: validating pod update-demo-kitten-gsk87 Apr 4 13:11:47.079: INFO: got data: { "image": "kitten.jpg" } Apr 4 13:11:47.079: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 4 13:11:47.079: INFO: update-demo-kitten-gsk87 is verified up and running Apr 4 13:11:47.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qxqz6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4053' Apr 4 13:11:47.173: INFO: stderr: "" Apr 4 13:11:47.173: INFO: stdout: "true" Apr 4 13:11:47.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qxqz6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4053' Apr 4 13:11:47.260: INFO: stderr: "" Apr 4 13:11:47.260: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 4 13:11:47.260: INFO: validating pod update-demo-kitten-qxqz6 Apr 4 13:11:47.263: INFO: got data: { "image": "kitten.jpg" } Apr 4 13:11:47.263: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 4 13:11:47.263: INFO: update-demo-kitten-qxqz6 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:11:47.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4053" for this suite. 
Apr 4 13:12:09.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:12:09.351: INFO: namespace kubectl-4053 deletion completed in 22.085636261s • [SLOW TEST:51.262 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:12:09.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Apr 4 13:12:09.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Apr 4 13:12:09.568: INFO: stderr: "" Apr 4 13:12:09.568: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:12:09.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-132" for this suite. 
Apr 4 13:12:15.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:12:15.665: INFO: namespace kubectl-132 deletion completed in 6.092212098s • [SLOW TEST:6.314 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:12:15.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9016 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Apr 4 13:12:15.760: INFO: Found 0 stateful pods, waiting for 3 Apr 4 13:12:25.765: INFO: Waiting for pod ss2-0 to enter 
Running - Ready=true, currently Running - Ready=true Apr 4 13:12:25.765: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 4 13:12:25.765: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 4 13:12:25.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9016 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 4 13:12:26.041: INFO: stderr: "I0404 13:12:25.919672 833 log.go:172] (0xc000a64420) (0xc000988640) Create stream\nI0404 13:12:25.919730 833 log.go:172] (0xc000a64420) (0xc000988640) Stream added, broadcasting: 1\nI0404 13:12:25.922357 833 log.go:172] (0xc000a64420) Reply frame received for 1\nI0404 13:12:25.922391 833 log.go:172] (0xc000a64420) (0xc0009a0000) Create stream\nI0404 13:12:25.922405 833 log.go:172] (0xc000a64420) (0xc0009a0000) Stream added, broadcasting: 3\nI0404 13:12:25.923386 833 log.go:172] (0xc000a64420) Reply frame received for 3\nI0404 13:12:25.923431 833 log.go:172] (0xc000a64420) (0xc0005fe320) Create stream\nI0404 13:12:25.923455 833 log.go:172] (0xc000a64420) (0xc0005fe320) Stream added, broadcasting: 5\nI0404 13:12:25.924285 833 log.go:172] (0xc000a64420) Reply frame received for 5\nI0404 13:12:26.009904 833 log.go:172] (0xc000a64420) Data frame received for 5\nI0404 13:12:26.009935 833 log.go:172] (0xc0005fe320) (5) Data frame handling\nI0404 13:12:26.009958 833 log.go:172] (0xc0005fe320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0404 13:12:26.035071 833 log.go:172] (0xc000a64420) Data frame received for 3\nI0404 13:12:26.035091 833 log.go:172] (0xc0009a0000) (3) Data frame handling\nI0404 13:12:26.035201 833 log.go:172] (0xc0009a0000) (3) Data frame sent\nI0404 13:12:26.035234 833 log.go:172] (0xc000a64420) Data frame received for 3\nI0404 13:12:26.035241 833 log.go:172] (0xc0009a0000) (3) Data frame handling\nI0404 13:12:26.035524 833 
log.go:172] (0xc000a64420) Data frame received for 5\nI0404 13:12:26.035537 833 log.go:172] (0xc0005fe320) (5) Data frame handling\nI0404 13:12:26.037335 833 log.go:172] (0xc000a64420) Data frame received for 1\nI0404 13:12:26.037359 833 log.go:172] (0xc000988640) (1) Data frame handling\nI0404 13:12:26.037523 833 log.go:172] (0xc000988640) (1) Data frame sent\nI0404 13:12:26.037542 833 log.go:172] (0xc000a64420) (0xc000988640) Stream removed, broadcasting: 1\nI0404 13:12:26.037556 833 log.go:172] (0xc000a64420) Go away received\nI0404 13:12:26.038004 833 log.go:172] (0xc000a64420) (0xc000988640) Stream removed, broadcasting: 1\nI0404 13:12:26.038035 833 log.go:172] (0xc000a64420) (0xc0009a0000) Stream removed, broadcasting: 3\nI0404 13:12:26.038048 833 log.go:172] (0xc000a64420) (0xc0005fe320) Stream removed, broadcasting: 5\n" Apr 4 13:12:26.041: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 4 13:12:26.041: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Apr 4 13:12:36.075: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 4 13:12:46.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9016 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 4 13:12:46.360: INFO: stderr: "I0404 13:12:46.248875 854 log.go:172] (0xc000117080) (0xc000606be0) Create stream\nI0404 13:12:46.248934 854 log.go:172] (0xc000117080) (0xc000606be0) Stream added, broadcasting: 1\nI0404 13:12:46.251725 854 log.go:172] (0xc000117080) Reply frame received for 1\nI0404 13:12:46.251792 854 log.go:172] (0xc000117080) (0xc0008fe000) Create stream\nI0404 13:12:46.251813 854 log.go:172] (0xc000117080) (0xc0008fe000) 
Stream added, broadcasting: 3\nI0404 13:12:46.253416 854 log.go:172] (0xc000117080) Reply frame received for 3\nI0404 13:12:46.253458 854 log.go:172] (0xc000117080) (0xc000606c80) Create stream\nI0404 13:12:46.253471 854 log.go:172] (0xc000117080) (0xc000606c80) Stream added, broadcasting: 5\nI0404 13:12:46.254508 854 log.go:172] (0xc000117080) Reply frame received for 5\nI0404 13:12:46.353840 854 log.go:172] (0xc000117080) Data frame received for 5\nI0404 13:12:46.353905 854 log.go:172] (0xc000606c80) (5) Data frame handling\nI0404 13:12:46.353930 854 log.go:172] (0xc000606c80) (5) Data frame sent\nI0404 13:12:46.353948 854 log.go:172] (0xc000117080) Data frame received for 5\nI0404 13:12:46.353966 854 log.go:172] (0xc000606c80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0404 13:12:46.354004 854 log.go:172] (0xc000117080) Data frame received for 3\nI0404 13:12:46.354025 854 log.go:172] (0xc0008fe000) (3) Data frame handling\nI0404 13:12:46.354058 854 log.go:172] (0xc0008fe000) (3) Data frame sent\nI0404 13:12:46.354083 854 log.go:172] (0xc000117080) Data frame received for 3\nI0404 13:12:46.354104 854 log.go:172] (0xc0008fe000) (3) Data frame handling\nI0404 13:12:46.355599 854 log.go:172] (0xc000117080) Data frame received for 1\nI0404 13:12:46.355626 854 log.go:172] (0xc000606be0) (1) Data frame handling\nI0404 13:12:46.355654 854 log.go:172] (0xc000606be0) (1) Data frame sent\nI0404 13:12:46.355692 854 log.go:172] (0xc000117080) (0xc000606be0) Stream removed, broadcasting: 1\nI0404 13:12:46.355718 854 log.go:172] (0xc000117080) Go away received\nI0404 13:12:46.356172 854 log.go:172] (0xc000117080) (0xc000606be0) Stream removed, broadcasting: 1\nI0404 13:12:46.356197 854 log.go:172] (0xc000117080) (0xc0008fe000) Stream removed, broadcasting: 3\nI0404 13:12:46.356225 854 log.go:172] (0xc000117080) (0xc000606c80) Stream removed, broadcasting: 5\n" Apr 4 13:12:46.360: INFO: stdout: "'/tmp/index.html' -> 
'/usr/share/nginx/html/index.html'\n" Apr 4 13:12:46.360: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 4 13:12:56.380: INFO: Waiting for StatefulSet statefulset-9016/ss2 to complete update Apr 4 13:12:56.380: INFO: Waiting for Pod statefulset-9016/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 4 13:12:56.380: INFO: Waiting for Pod statefulset-9016/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 4 13:13:06.388: INFO: Waiting for StatefulSet statefulset-9016/ss2 to complete update Apr 4 13:13:06.388: INFO: Waiting for Pod statefulset-9016/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Apr 4 13:13:16.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9016 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 4 13:13:16.662: INFO: stderr: "I0404 13:13:16.524616 874 log.go:172] (0xc000118e70) (0xc0004406e0) Create stream\nI0404 13:13:16.524689 874 log.go:172] (0xc000118e70) (0xc0004406e0) Stream added, broadcasting: 1\nI0404 13:13:16.527254 874 log.go:172] (0xc000118e70) Reply frame received for 1\nI0404 13:13:16.527329 874 log.go:172] (0xc000118e70) (0xc000958000) Create stream\nI0404 13:13:16.527353 874 log.go:172] (0xc000118e70) (0xc000958000) Stream added, broadcasting: 3\nI0404 13:13:16.528251 874 log.go:172] (0xc000118e70) Reply frame received for 3\nI0404 13:13:16.528298 874 log.go:172] (0xc000118e70) (0xc0005d83c0) Create stream\nI0404 13:13:16.528315 874 log.go:172] (0xc000118e70) (0xc0005d83c0) Stream added, broadcasting: 5\nI0404 13:13:16.529410 874 log.go:172] (0xc000118e70) Reply frame received for 5\nI0404 13:13:16.624687 874 log.go:172] (0xc000118e70) Data frame received for 5\nI0404 13:13:16.624720 874 log.go:172] (0xc0005d83c0) (5) Data frame handling\nI0404 
13:13:16.624740 874 log.go:172] (0xc0005d83c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0404 13:13:16.654790 874 log.go:172] (0xc000118e70) Data frame received for 3\nI0404 13:13:16.654832 874 log.go:172] (0xc000958000) (3) Data frame handling\nI0404 13:13:16.654871 874 log.go:172] (0xc000958000) (3) Data frame sent\nI0404 13:13:16.655220 874 log.go:172] (0xc000118e70) Data frame received for 5\nI0404 13:13:16.655249 874 log.go:172] (0xc0005d83c0) (5) Data frame handling\nI0404 13:13:16.655318 874 log.go:172] (0xc000118e70) Data frame received for 3\nI0404 13:13:16.655350 874 log.go:172] (0xc000958000) (3) Data frame handling\nI0404 13:13:16.657563 874 log.go:172] (0xc000118e70) Data frame received for 1\nI0404 13:13:16.657605 874 log.go:172] (0xc0004406e0) (1) Data frame handling\nI0404 13:13:16.657646 874 log.go:172] (0xc0004406e0) (1) Data frame sent\nI0404 13:13:16.657697 874 log.go:172] (0xc000118e70) (0xc0004406e0) Stream removed, broadcasting: 1\nI0404 13:13:16.657749 874 log.go:172] (0xc000118e70) Go away received\nI0404 13:13:16.658185 874 log.go:172] (0xc000118e70) (0xc0004406e0) Stream removed, broadcasting: 1\nI0404 13:13:16.658208 874 log.go:172] (0xc000118e70) (0xc000958000) Stream removed, broadcasting: 3\nI0404 13:13:16.658220 874 log.go:172] (0xc000118e70) (0xc0005d83c0) Stream removed, broadcasting: 5\n" Apr 4 13:13:16.662: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 4 13:13:16.662: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 4 13:13:26.693: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 4 13:13:36.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9016 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 4 13:13:36.948: INFO: stderr: "I0404 13:13:36.855407 895 log.go:172] 
(0xc000a584d0) (0xc000310820) Create stream\nI0404 13:13:36.855462 895 log.go:172] (0xc000a584d0) (0xc000310820) Stream added, broadcasting: 1\nI0404 13:13:36.858034 895 log.go:172] (0xc000a584d0) Reply frame received for 1\nI0404 13:13:36.858108 895 log.go:172] (0xc000a584d0) (0xc00073e000) Create stream\nI0404 13:13:36.858132 895 log.go:172] (0xc000a584d0) (0xc00073e000) Stream added, broadcasting: 3\nI0404 13:13:36.859241 895 log.go:172] (0xc000a584d0) Reply frame received for 3\nI0404 13:13:36.859315 895 log.go:172] (0xc000a584d0) (0xc00073e0a0) Create stream\nI0404 13:13:36.859346 895 log.go:172] (0xc000a584d0) (0xc00073e0a0) Stream added, broadcasting: 5\nI0404 13:13:36.860393 895 log.go:172] (0xc000a584d0) Reply frame received for 5\nI0404 13:13:36.941012 895 log.go:172] (0xc000a584d0) Data frame received for 5\nI0404 13:13:36.941071 895 log.go:172] (0xc00073e0a0) (5) Data frame handling\nI0404 13:13:36.941095 895 log.go:172] (0xc00073e0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0404 13:13:36.941174 895 log.go:172] (0xc000a584d0) Data frame received for 3\nI0404 13:13:36.941250 895 log.go:172] (0xc00073e000) (3) Data frame handling\nI0404 13:13:36.941280 895 log.go:172] (0xc00073e000) (3) Data frame sent\nI0404 13:13:36.941309 895 log.go:172] (0xc000a584d0) Data frame received for 5\nI0404 13:13:36.941363 895 log.go:172] (0xc00073e0a0) (5) Data frame handling\nI0404 13:13:36.941399 895 log.go:172] (0xc000a584d0) Data frame received for 3\nI0404 13:13:36.941432 895 log.go:172] (0xc00073e000) (3) Data frame handling\nI0404 13:13:36.942998 895 log.go:172] (0xc000a584d0) Data frame received for 1\nI0404 13:13:36.943042 895 log.go:172] (0xc000310820) (1) Data frame handling\nI0404 13:13:36.943070 895 log.go:172] (0xc000310820) (1) Data frame sent\nI0404 13:13:36.943091 895 log.go:172] (0xc000a584d0) (0xc000310820) Stream removed, broadcasting: 1\nI0404 13:13:36.943173 895 log.go:172] (0xc000a584d0) Go away received\nI0404 
13:13:36.943543 895 log.go:172] (0xc000a584d0) (0xc000310820) Stream removed, broadcasting: 1\nI0404 13:13:36.943577 895 log.go:172] (0xc000a584d0) (0xc00073e000) Stream removed, broadcasting: 3\nI0404 13:13:36.943588 895 log.go:172] (0xc000a584d0) (0xc00073e0a0) Stream removed, broadcasting: 5\n" Apr 4 13:13:36.948: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 4 13:13:36.948: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 4 13:13:56.973: INFO: Waiting for StatefulSet statefulset-9016/ss2 to complete update Apr 4 13:13:56.973: INFO: Waiting for Pod statefulset-9016/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 4 13:14:06.981: INFO: Deleting all statefulset in ns statefulset-9016 Apr 4 13:14:06.985: INFO: Scaling statefulset ss2 to 0 Apr 4 13:14:27.001: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 13:14:27.004: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:14:27.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9016" for this suite. 
Apr 4 13:14:33.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:14:33.145: INFO: namespace statefulset-9016 deletion completed in 6.11893024s • [SLOW TEST:137.479 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:14:33.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9220.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9220.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9220.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9220.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9220.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9220.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 4 13:14:39.272: INFO: DNS probes using dns-9220/dns-test-5dd45dab-f583-429d-9f22-6f677b696f95 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:14:39.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9220" for this suite. 
Apr 4 13:14:45.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:14:45.446: INFO: namespace dns-9220 deletion completed in 6.14170418s
• [SLOW TEST:12.301 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:14:45.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Apr 4 13:14:45.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6366'
Apr 4 13:14:45.789: INFO: stderr: ""
Apr 4 13:14:45.789: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 4 13:14:45.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6366'
Apr 4 13:14:45.903: INFO: stderr: ""
Apr 4 13:14:45.903: INFO: stdout: "update-demo-nautilus-vvzqc update-demo-nautilus-ztx9s "
Apr 4 13:14:45.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vvzqc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6366'
Apr 4 13:14:46.000: INFO: stderr: ""
Apr 4 13:14:46.000: INFO: stdout: ""
Apr 4 13:14:46.000: INFO: update-demo-nautilus-vvzqc is created but not running
Apr 4 13:14:51.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6366'
Apr 4 13:14:51.096: INFO: stderr: ""
Apr 4 13:14:51.096: INFO: stdout: "update-demo-nautilus-vvzqc update-demo-nautilus-ztx9s "
Apr 4 13:14:51.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vvzqc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6366'
Apr 4 13:14:51.186: INFO: stderr: ""
Apr 4 13:14:51.186: INFO: stdout: "true"
Apr 4 13:14:51.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vvzqc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6366'
Apr 4 13:14:51.275: INFO: stderr: ""
Apr 4 13:14:51.275: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 4 13:14:51.275: INFO: validating pod update-demo-nautilus-vvzqc
Apr 4 13:14:51.278: INFO: got data: { "image": "nautilus.jpg" }
Apr 4 13:14:51.278: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 4 13:14:51.278: INFO: update-demo-nautilus-vvzqc is verified up and running
Apr 4 13:14:51.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ztx9s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6366'
Apr 4 13:14:51.377: INFO: stderr: ""
Apr 4 13:14:51.377: INFO: stdout: "true"
Apr 4 13:14:51.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ztx9s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6366'
Apr 4 13:14:51.472: INFO: stderr: ""
Apr 4 13:14:51.472: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 4 13:14:51.472: INFO: validating pod update-demo-nautilus-ztx9s
Apr 4 13:14:51.475: INFO: got data: { "image": "nautilus.jpg" }
Apr 4 13:14:51.475: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 4 13:14:51.475: INFO: update-demo-nautilus-ztx9s is verified up and running
STEP: scaling down the replication controller
Apr 4 13:14:51.477: INFO: scanned /root for discovery docs:
Apr 4 13:14:51.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6366'
Apr 4 13:14:52.594: INFO: stderr: ""
Apr 4 13:14:52.594: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 4 13:14:52.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6366'
Apr 4 13:14:52.689: INFO: stderr: ""
Apr 4 13:14:52.689: INFO: stdout: "update-demo-nautilus-vvzqc update-demo-nautilus-ztx9s "
STEP: Replicas for name=update-demo: expected=1 actual=2
Apr 4 13:14:57.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6366'
Apr 4 13:14:57.791: INFO: stderr: ""
Apr 4 13:14:57.791: INFO: stdout: "update-demo-nautilus-ztx9s "
Apr 4 13:14:57.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ztx9s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6366'
Apr 4 13:14:57.887: INFO: stderr: ""
Apr 4 13:14:57.887: INFO: stdout: "true"
Apr 4 13:14:57.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ztx9s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6366'
Apr 4 13:14:57.977: INFO: stderr: ""
Apr 4 13:14:57.977: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 4 13:14:57.977: INFO: validating pod update-demo-nautilus-ztx9s
Apr 4 13:14:57.980: INFO: got data: { "image": "nautilus.jpg" }
Apr 4 13:14:57.980: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 4 13:14:57.980: INFO: update-demo-nautilus-ztx9s is verified up and running
STEP: scaling up the replication controller
Apr 4 13:14:57.981: INFO: scanned /root for discovery docs:
Apr 4 13:14:57.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6366'
Apr 4 13:14:59.091: INFO: stderr: ""
Apr 4 13:14:59.091: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 4 13:14:59.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6366'
Apr 4 13:14:59.187: INFO: stderr: ""
Apr 4 13:14:59.187: INFO: stdout: "update-demo-nautilus-ksz94 update-demo-nautilus-ztx9s "
Apr 4 13:14:59.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ksz94 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6366'
Apr 4 13:14:59.268: INFO: stderr: ""
Apr 4 13:14:59.268: INFO: stdout: ""
Apr 4 13:14:59.268: INFO: update-demo-nautilus-ksz94 is created but not running
Apr 4 13:15:04.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6366'
Apr 4 13:15:04.360: INFO: stderr: ""
Apr 4 13:15:04.360: INFO: stdout: "update-demo-nautilus-ksz94 update-demo-nautilus-ztx9s "
Apr 4 13:15:04.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ksz94 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6366'
Apr 4 13:15:04.451: INFO: stderr: ""
Apr 4 13:15:04.451: INFO: stdout: "true"
Apr 4 13:15:04.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ksz94 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6366'
Apr 4 13:15:04.539: INFO: stderr: ""
Apr 4 13:15:04.539: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 4 13:15:04.539: INFO: validating pod update-demo-nautilus-ksz94
Apr 4 13:15:04.543: INFO: got data: { "image": "nautilus.jpg" }
Apr 4 13:15:04.543: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 4 13:15:04.543: INFO: update-demo-nautilus-ksz94 is verified up and running
Apr 4 13:15:04.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ztx9s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6366'
Apr 4 13:15:04.638: INFO: stderr: ""
Apr 4 13:15:04.638: INFO: stdout: "true"
Apr 4 13:15:04.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ztx9s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6366'
Apr 4 13:15:04.732: INFO: stderr: ""
Apr 4 13:15:04.732: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 4 13:15:04.732: INFO: validating pod update-demo-nautilus-ztx9s
Apr 4 13:15:04.736: INFO: got data: { "image": "nautilus.jpg" }
Apr 4 13:15:04.736: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 4 13:15:04.736: INFO: update-demo-nautilus-ztx9s is verified up and running
STEP: using delete to clean up resources
Apr 4 13:15:04.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6366'
Apr 4 13:15:04.826: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 4 13:15:04.826: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 4 13:15:04.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6366'
Apr 4 13:15:04.923: INFO: stderr: "No resources found.\n"
Apr 4 13:15:04.923: INFO: stdout: ""
Apr 4 13:15:04.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6366 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 4 13:15:05.013: INFO: stderr: ""
Apr 4 13:15:05.013: INFO: stdout: "update-demo-nautilus-ksz94\nupdate-demo-nautilus-ztx9s\n"
Apr 4 13:15:05.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6366'
Apr 4 13:15:05.609: INFO: stderr: "No resources found.\n"
Apr 4 13:15:05.609: INFO: stdout: ""
Apr 4 13:15:05.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6366 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 4 13:15:05.757: INFO: stderr: ""
Apr 4 13:15:05.757: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:15:05.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6366" for this suite.
Apr 4 13:15:27.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:15:27.869: INFO: namespace kubectl-6366 deletion completed in 22.107934115s
• [SLOW TEST:42.423 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:15:27.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-d5ph
STEP: Creating a pod to test atomic-volume-subpath
Apr 4 13:15:27.945: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-d5ph" in namespace "subpath-5562" to be "success or failure"
Apr 4 13:15:27.949: INFO: Pod "pod-subpath-test-projected-d5ph": Phase="Pending", Reason="", readiness=false. Elapsed: 3.995362ms
Apr 4 13:15:29.954: INFO: Pod "pod-subpath-test-projected-d5ph": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00831541s
Apr 4 13:15:31.957: INFO: Pod "pod-subpath-test-projected-d5ph": Phase="Running", Reason="", readiness=true. Elapsed: 4.011925105s
Apr 4 13:15:33.962: INFO: Pod "pod-subpath-test-projected-d5ph": Phase="Running", Reason="", readiness=true. Elapsed: 6.016209899s
Apr 4 13:15:35.966: INFO: Pod "pod-subpath-test-projected-d5ph": Phase="Running", Reason="", readiness=true. Elapsed: 8.020683885s
Apr 4 13:15:37.970: INFO: Pod "pod-subpath-test-projected-d5ph": Phase="Running", Reason="", readiness=true. Elapsed: 10.024850572s
Apr 4 13:15:39.974: INFO: Pod "pod-subpath-test-projected-d5ph": Phase="Running", Reason="", readiness=true. Elapsed: 12.028835391s
Apr 4 13:15:41.979: INFO: Pod "pod-subpath-test-projected-d5ph": Phase="Running", Reason="", readiness=true. Elapsed: 14.033230149s
Apr 4 13:15:43.983: INFO: Pod "pod-subpath-test-projected-d5ph": Phase="Running", Reason="", readiness=true. Elapsed: 16.037157504s
Apr 4 13:15:45.987: INFO: Pod "pod-subpath-test-projected-d5ph": Phase="Running", Reason="", readiness=true. Elapsed: 18.041543078s
Apr 4 13:15:47.991: INFO: Pod "pod-subpath-test-projected-d5ph": Phase="Running", Reason="", readiness=true. Elapsed: 20.045602016s
Apr 4 13:15:49.995: INFO: Pod "pod-subpath-test-projected-d5ph": Phase="Running", Reason="", readiness=true. Elapsed: 22.04996227s
Apr 4 13:15:52.000: INFO: Pod "pod-subpath-test-projected-d5ph": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.054315056s
STEP: Saw pod success
Apr 4 13:15:52.000: INFO: Pod "pod-subpath-test-projected-d5ph" satisfied condition "success or failure"
Apr 4 13:15:52.003: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-d5ph container test-container-subpath-projected-d5ph:
STEP: delete the pod
Apr 4 13:15:52.023: INFO: Waiting for pod pod-subpath-test-projected-d5ph to disappear
Apr 4 13:15:52.027: INFO: Pod pod-subpath-test-projected-d5ph no longer exists
STEP: Deleting pod pod-subpath-test-projected-d5ph
Apr 4 13:15:52.027: INFO: Deleting pod "pod-subpath-test-projected-d5ph" in namespace "subpath-5562"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:15:52.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5562" for this suite.
Apr 4 13:15:58.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:15:58.129: INFO: namespace subpath-5562 deletion completed in 6.098105614s
• [SLOW TEST:30.260 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:15:58.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2912
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 4 13:15:58.194: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 4 13:16:24.315: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.84:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2912 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 4 13:16:24.315: INFO: >>> kubeConfig: /root/.kube/config
I0404 13:16:24.350913 6 log.go:172] (0xc001ebe8f0) (0xc002b2f180) Create stream
I0404 13:16:24.351020 6 log.go:172] (0xc001ebe8f0) (0xc002b2f180) Stream added, broadcasting: 1
I0404 13:16:24.355124 6 log.go:172] (0xc001ebe8f0) Reply frame received for 1
I0404 13:16:24.355167 6 log.go:172] (0xc001ebe8f0) (0xc00232e640) Create stream
I0404 13:16:24.355196 6 log.go:172] (0xc001ebe8f0) (0xc00232e640) Stream added, broadcasting: 3
I0404 13:16:24.356118 6 log.go:172] (0xc001ebe8f0) Reply frame received for 3
I0404 13:16:24.356176 6 log.go:172] (0xc001ebe8f0) (0xc00232e6e0) Create stream
I0404 13:16:24.356194 6 log.go:172] (0xc001ebe8f0) (0xc00232e6e0) Stream added, broadcasting: 5
I0404 13:16:24.357107 6 log.go:172] (0xc001ebe8f0) Reply frame received for 5
I0404 13:16:24.456382 6 log.go:172] (0xc001ebe8f0) Data frame received for 3
I0404 13:16:24.456419 6 log.go:172] (0xc00232e640) (3) Data frame handling
I0404 13:16:24.456451 6 log.go:172] (0xc00232e640) (3) Data frame sent
I0404 13:16:24.456470 6 log.go:172] (0xc001ebe8f0) Data frame received for 3
I0404 13:16:24.456478 6 log.go:172] (0xc00232e640) (3) Data frame handling
I0404 13:16:24.456543 6 log.go:172] (0xc001ebe8f0) Data frame received for 5
I0404 13:16:24.456556 6 log.go:172] (0xc00232e6e0) (5) Data frame handling
I0404 13:16:24.458774 6 log.go:172] (0xc001ebe8f0) Data frame received for 1
I0404 13:16:24.458831 6 log.go:172] (0xc002b2f180) (1) Data frame handling
I0404 13:16:24.458854 6 log.go:172] (0xc002b2f180) (1) Data frame sent
I0404 13:16:24.458874 6 log.go:172] (0xc001ebe8f0) (0xc002b2f180) Stream removed, broadcasting: 1
I0404 13:16:24.458916 6 log.go:172] (0xc001ebe8f0) Go away received
I0404 13:16:24.459395 6 log.go:172] (0xc001ebe8f0) (0xc002b2f180) Stream removed, broadcasting: 1
I0404 13:16:24.459430 6 log.go:172] (0xc001ebe8f0) (0xc00232e640) Stream removed, broadcasting: 3
I0404 13:16:24.459450 6 log.go:172] (0xc001ebe8f0) (0xc00232e6e0) Stream removed, broadcasting: 5
Apr 4 13:16:24.459: INFO: Found all expected endpoints: [netserver-0]
Apr 4 13:16:24.463: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.200:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2912 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 4 13:16:24.463: INFO: >>> kubeConfig: /root/.kube/config
I0404 13:16:24.497928 6 log.go:172] (0xc002efc9a0) (0xc0031e86e0) Create stream
I0404 13:16:24.497998 6 log.go:172] (0xc002efc9a0) (0xc0031e86e0) Stream added, broadcasting: 1
I0404 13:16:24.502014 6 log.go:172] (0xc002efc9a0) Reply frame received for 1
I0404 13:16:24.502068 6 log.go:172] (0xc002efc9a0) (0xc00232e780) Create stream
I0404 13:16:24.502084 6 log.go:172] (0xc002efc9a0) (0xc00232e780) Stream added, broadcasting: 3
I0404 13:16:24.503139 6 log.go:172] (0xc002efc9a0) Reply frame received for 3
I0404 13:16:24.503197 6 log.go:172] (0xc002efc9a0) (0xc002f643c0) Create stream
I0404 13:16:24.503223 6 log.go:172] (0xc002efc9a0) (0xc002f643c0) Stream added, broadcasting: 5
I0404 13:16:24.504286 6 log.go:172] (0xc002efc9a0) Reply frame received for 5
I0404 13:16:24.570049 6 log.go:172] (0xc002efc9a0) Data frame received for 5
I0404 13:16:24.570083 6 log.go:172] (0xc002f643c0) (5) Data frame handling
I0404 13:16:24.570111 6 log.go:172] (0xc002efc9a0) Data frame received for 3
I0404 13:16:24.570150 6 log.go:172] (0xc00232e780) (3) Data frame handling
I0404 13:16:24.570168 6 log.go:172] (0xc00232e780) (3) Data frame sent
I0404 13:16:24.570178 6 log.go:172] (0xc002efc9a0) Data frame received for 3
I0404 13:16:24.570184 6 log.go:172] (0xc00232e780) (3) Data frame handling
I0404 13:16:24.571714 6 log.go:172] (0xc002efc9a0) Data frame received for 1
I0404 13:16:24.571750 6 log.go:172] (0xc0031e86e0) (1) Data frame handling
I0404 13:16:24.571774 6 log.go:172] (0xc0031e86e0) (1) Data frame sent
I0404 13:16:24.571803 6 log.go:172] (0xc002efc9a0) (0xc0031e86e0) Stream removed, broadcasting: 1
I0404 13:16:24.571840 6 log.go:172] (0xc002efc9a0) Go away received
I0404 13:16:24.571900 6 log.go:172] (0xc002efc9a0) (0xc0031e86e0) Stream removed, broadcasting: 1
I0404 13:16:24.571917 6 log.go:172] (0xc002efc9a0) (0xc00232e780) Stream removed, broadcasting: 3
I0404 13:16:24.571926 6 log.go:172] (0xc002efc9a0) (0xc002f643c0) Stream removed, broadcasting: 5
Apr 4 13:16:24.571: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:16:24.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2912" for this suite.
Apr 4 13:16:46.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:16:46.670: INFO: namespace pod-network-test-2912 deletion completed in 22.094665194s
• [SLOW TEST:48.540 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:16:46.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-fxk9
STEP: Creating a pod to test atomic-volume-subpath
Apr 4 13:16:46.774: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-fxk9" in namespace "subpath-4870" to be "success or failure"
Apr 4 13:16:46.777: INFO: Pod "pod-subpath-test-downwardapi-fxk9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.771725ms
Apr 4 13:16:48.782: INFO: Pod "pod-subpath-test-downwardapi-fxk9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00796161s
Apr 4 13:16:50.786: INFO: Pod "pod-subpath-test-downwardapi-fxk9": Phase="Running", Reason="", readiness=true. Elapsed: 4.01228396s
Apr 4 13:16:52.791: INFO: Pod "pod-subpath-test-downwardapi-fxk9": Phase="Running", Reason="", readiness=true. Elapsed: 6.017037418s
Apr 4 13:16:54.795: INFO: Pod "pod-subpath-test-downwardapi-fxk9": Phase="Running", Reason="", readiness=true. Elapsed: 8.021494243s
Apr 4 13:16:56.799: INFO: Pod "pod-subpath-test-downwardapi-fxk9": Phase="Running", Reason="", readiness=true. Elapsed: 10.025204746s
Apr 4 13:16:58.802: INFO: Pod "pod-subpath-test-downwardapi-fxk9": Phase="Running", Reason="", readiness=true. Elapsed: 12.028398024s
Apr 4 13:17:00.806: INFO: Pod "pod-subpath-test-downwardapi-fxk9": Phase="Running", Reason="", readiness=true. Elapsed: 14.032655661s
Apr 4 13:17:02.811: INFO: Pod "pod-subpath-test-downwardapi-fxk9": Phase="Running", Reason="", readiness=true. Elapsed: 16.037174786s
Apr 4 13:17:04.815: INFO: Pod "pod-subpath-test-downwardapi-fxk9": Phase="Running", Reason="", readiness=true. Elapsed: 18.041351313s
Apr 4 13:17:06.819: INFO: Pod "pod-subpath-test-downwardapi-fxk9": Phase="Running", Reason="", readiness=true. Elapsed: 20.044891916s
Apr 4 13:17:08.822: INFO: Pod "pod-subpath-test-downwardapi-fxk9": Phase="Running", Reason="", readiness=true. Elapsed: 22.04805671s
Apr 4 13:17:10.826: INFO: Pod "pod-subpath-test-downwardapi-fxk9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.052239981s
STEP: Saw pod success
Apr 4 13:17:10.826: INFO: Pod "pod-subpath-test-downwardapi-fxk9" satisfied condition "success or failure"
Apr 4 13:17:10.828: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-fxk9 container test-container-subpath-downwardapi-fxk9:
STEP: delete the pod
Apr 4 13:17:10.852: INFO: Waiting for pod pod-subpath-test-downwardapi-fxk9 to disappear
Apr 4 13:17:10.855: INFO: Pod pod-subpath-test-downwardapi-fxk9 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-fxk9
Apr 4 13:17:10.855: INFO: Deleting pod "pod-subpath-test-downwardapi-fxk9" in namespace "subpath-4870"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:17:10.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4870" for this suite.
Apr 4 13:17:16.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:17:16.980: INFO: namespace subpath-4870 deletion completed in 6.120579086s
• [SLOW TEST:30.309 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:17:16.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Apr 4 13:17:17.029: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 4 13:17:17.044: INFO: Waiting for terminating namespaces to be deleted...
Apr 4 13:17:17.046: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Apr 4 13:17:17.051: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 4 13:17:17.052: INFO: Container kube-proxy ready: true, restart count 0
Apr 4 13:17:17.052: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 4 13:17:17.052: INFO: Container kindnet-cni ready: true, restart count 0
Apr 4 13:17:17.052: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Apr 4 13:17:17.057: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
Apr 4 13:17:17.057: INFO: Container kube-proxy ready: true, restart count 0
Apr 4 13:17:17.057: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
Apr 4 13:17:17.057: INFO: Container kindnet-cni ready: true, restart count 0
Apr 4 13:17:17.057: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
Apr 4 13:17:17.057: INFO: Container coredns ready: true, restart count 0
Apr 4 13:17:17.057: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
Apr 4 13:17:17.057: INFO: Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-83968404-8ccf-42cc-94ea-2ba8988b75f7 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-83968404-8ccf-42cc-94ea-2ba8988b75f7 off the node iruya-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-83968404-8ccf-42cc-94ea-2ba8988b75f7
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:17:25.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4531" for this suite.
Apr 4 13:17:43.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:17:43.350: INFO: namespace sched-pred-4531 deletion completed in 18.114818199s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:26.369 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:17:43.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Apr 4 13:17:43.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3072 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true 
--stdin -- sh -c cat && echo 'stdin closed'' Apr 4 13:17:46.839: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0404 13:17:46.757428 1446 log.go:172] (0xc000118790) (0xc00078e6e0) Create stream\nI0404 13:17:46.757498 1446 log.go:172] (0xc000118790) (0xc00078e6e0) Stream added, broadcasting: 1\nI0404 13:17:46.762348 1446 log.go:172] (0xc000118790) Reply frame received for 1\nI0404 13:17:46.762399 1446 log.go:172] (0xc000118790) (0xc00078e000) Create stream\nI0404 13:17:46.762414 1446 log.go:172] (0xc000118790) (0xc00078e000) Stream added, broadcasting: 3\nI0404 13:17:46.763487 1446 log.go:172] (0xc000118790) Reply frame received for 3\nI0404 13:17:46.763518 1446 log.go:172] (0xc000118790) (0xc00002c000) Create stream\nI0404 13:17:46.763529 1446 log.go:172] (0xc000118790) (0xc00002c000) Stream added, broadcasting: 5\nI0404 13:17:46.764441 1446 log.go:172] (0xc000118790) Reply frame received for 5\nI0404 13:17:46.764500 1446 log.go:172] (0xc000118790) (0xc00078e0a0) Create stream\nI0404 13:17:46.764519 1446 log.go:172] (0xc000118790) (0xc00078e0a0) Stream added, broadcasting: 7\nI0404 13:17:46.765514 1446 log.go:172] (0xc000118790) Reply frame received for 7\nI0404 13:17:46.765709 1446 log.go:172] (0xc00078e000) (3) Writing data frame\nI0404 13:17:46.765812 1446 log.go:172] (0xc00078e000) (3) Writing data frame\nI0404 13:17:46.766555 1446 log.go:172] (0xc000118790) Data frame received for 5\nI0404 13:17:46.766571 1446 log.go:172] (0xc00002c000) (5) Data frame handling\nI0404 13:17:46.766586 1446 log.go:172] (0xc00002c000) (5) Data frame sent\nI0404 13:17:46.767330 1446 log.go:172] (0xc000118790) Data frame received for 5\nI0404 13:17:46.767358 1446 log.go:172] (0xc00002c000) (5) Data frame handling\nI0404 13:17:46.767399 1446 log.go:172] (0xc00002c000) (5) Data frame sent\nI0404 
13:17:46.803136 1446 log.go:172] (0xc000118790) Data frame received for 7\nI0404 13:17:46.803177 1446 log.go:172] (0xc00078e0a0) (7) Data frame handling\nI0404 13:17:46.803215 1446 log.go:172] (0xc000118790) Data frame received for 5\nI0404 13:17:46.803237 1446 log.go:172] (0xc00002c000) (5) Data frame handling\nI0404 13:17:46.803849 1446 log.go:172] (0xc000118790) Data frame received for 1\nI0404 13:17:46.803883 1446 log.go:172] (0xc00078e6e0) (1) Data frame handling\nI0404 13:17:46.803906 1446 log.go:172] (0xc00078e6e0) (1) Data frame sent\nI0404 13:17:46.803934 1446 log.go:172] (0xc000118790) (0xc00078e6e0) Stream removed, broadcasting: 1\nI0404 13:17:46.804007 1446 log.go:172] (0xc000118790) (0xc00078e6e0) Stream removed, broadcasting: 1\nI0404 13:17:46.804037 1446 log.go:172] (0xc000118790) (0xc00078e000) Stream removed, broadcasting: 3\nI0404 13:17:46.804073 1446 log.go:172] (0xc000118790) (0xc00002c000) Stream removed, broadcasting: 5\nI0404 13:17:46.804101 1446 log.go:172] (0xc000118790) (0xc00078e0a0) Stream removed, broadcasting: 7\nI0404 13:17:46.804234 1446 log.go:172] (0xc000118790) (0xc00078e000) Stream removed, broadcasting: 3\nI0404 13:17:46.804295 1446 log.go:172] (0xc000118790) Go away received\n" Apr 4 13:17:46.839: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:17:48.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3072" for this suite. 
Apr 4 13:17:54.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:17:54.939: INFO: namespace kubectl-3072 deletion completed in 6.089884444s • [SLOW TEST:11.588 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:17:54.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 4 13:17:55.016: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c7b1766f-5b55-419f-81a3-e8f13c37c603" in namespace "projected-6459" to be "success or failure" Apr 4 13:17:55.035: INFO: Pod "downwardapi-volume-c7b1766f-5b55-419f-81a3-e8f13c37c603": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.58779ms Apr 4 13:17:57.039: INFO: Pod "downwardapi-volume-c7b1766f-5b55-419f-81a3-e8f13c37c603": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022726586s Apr 4 13:17:59.043: INFO: Pod "downwardapi-volume-c7b1766f-5b55-419f-81a3-e8f13c37c603": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026891064s STEP: Saw pod success Apr 4 13:17:59.043: INFO: Pod "downwardapi-volume-c7b1766f-5b55-419f-81a3-e8f13c37c603" satisfied condition "success or failure" Apr 4 13:17:59.046: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-c7b1766f-5b55-419f-81a3-e8f13c37c603 container client-container: STEP: delete the pod Apr 4 13:17:59.081: INFO: Waiting for pod downwardapi-volume-c7b1766f-5b55-419f-81a3-e8f13c37c603 to disappear Apr 4 13:17:59.095: INFO: Pod downwardapi-volume-c7b1766f-5b55-419f-81a3-e8f13c37c603 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:17:59.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6459" for this suite. 
Apr 4 13:18:05.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:18:05.193: INFO: namespace projected-6459 deletion completed in 6.093026071s • [SLOW TEST:10.254 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:18:05.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-dd030f71-4444-475e-876d-64b7760633a8 STEP: Creating a pod to test consume configMaps Apr 4 13:18:05.259: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7c258355-6286-46b4-8395-7e0ed799660d" in namespace "projected-5238" to be "success or failure" Apr 4 13:18:05.263: INFO: Pod "pod-projected-configmaps-7c258355-6286-46b4-8395-7e0ed799660d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.900024ms Apr 4 13:18:07.268: INFO: Pod "pod-projected-configmaps-7c258355-6286-46b4-8395-7e0ed799660d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008238187s Apr 4 13:18:09.272: INFO: Pod "pod-projected-configmaps-7c258355-6286-46b4-8395-7e0ed799660d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012577303s STEP: Saw pod success Apr 4 13:18:09.272: INFO: Pod "pod-projected-configmaps-7c258355-6286-46b4-8395-7e0ed799660d" satisfied condition "success or failure" Apr 4 13:18:09.275: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-7c258355-6286-46b4-8395-7e0ed799660d container projected-configmap-volume-test: STEP: delete the pod Apr 4 13:18:09.334: INFO: Waiting for pod pod-projected-configmaps-7c258355-6286-46b4-8395-7e0ed799660d to disappear Apr 4 13:18:09.342: INFO: Pod pod-projected-configmaps-7c258355-6286-46b4-8395-7e0ed799660d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:18:09.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5238" for this suite. 
Apr 4 13:18:15.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:18:15.458: INFO: namespace projected-5238 deletion completed in 6.112738954s • [SLOW TEST:10.265 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:18:15.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 4 13:18:15.519: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 8.023063ms) Apr 4 13:18:15.523: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.548532ms) Apr 4 13:18:15.526: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.243912ms) Apr 4 13:18:15.530: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.448679ms) Apr 4 13:18:15.533: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.686798ms) Apr 4 13:18:15.537: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.500634ms) Apr 4 13:18:15.540: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.31273ms) Apr 4 13:18:15.544: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.266251ms) Apr 4 13:18:15.547: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.869159ms) Apr 4 13:18:15.551: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.515085ms) Apr 4 13:18:15.584: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 33.4094ms) Apr 4 13:18:15.587: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.889671ms) Apr 4 13:18:15.590: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.054048ms) Apr 4 13:18:15.594: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.127464ms) Apr 4 13:18:15.596: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.545583ms) Apr 4 13:18:15.599: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.076582ms) Apr 4 13:18:15.602: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.990776ms) Apr 4 13:18:15.605: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.07602ms) Apr 4 13:18:15.608: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.953977ms) Apr 4 13:18:15.611: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/
(200; 2.775958ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:18:15.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9596" for this suite. Apr 4 13:18:21.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:18:21.714: INFO: namespace proxy-9596 deletion completed in 6.099718772s • [SLOW TEST:6.255 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:18:21.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Apr 4 
13:18:21.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2997' Apr 4 13:18:22.020: INFO: stderr: "" Apr 4 13:18:22.020: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 4 13:18:22.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2997' Apr 4 13:18:22.143: INFO: stderr: "" Apr 4 13:18:22.143: INFO: stdout: "update-demo-nautilus-2ggtj update-demo-nautilus-kckfz " Apr 4 13:18:22.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2ggtj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2997' Apr 4 13:18:22.230: INFO: stderr: "" Apr 4 13:18:22.230: INFO: stdout: "" Apr 4 13:18:22.230: INFO: update-demo-nautilus-2ggtj is created but not running Apr 4 13:18:27.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2997' Apr 4 13:18:27.324: INFO: stderr: "" Apr 4 13:18:27.324: INFO: stdout: "update-demo-nautilus-2ggtj update-demo-nautilus-kckfz " Apr 4 13:18:27.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2ggtj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2997' Apr 4 13:18:27.415: INFO: stderr: "" Apr 4 13:18:27.415: INFO: stdout: "true" Apr 4 13:18:27.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2ggtj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2997' Apr 4 13:18:27.508: INFO: stderr: "" Apr 4 13:18:27.508: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 4 13:18:27.508: INFO: validating pod update-demo-nautilus-2ggtj Apr 4 13:18:27.512: INFO: got data: { "image": "nautilus.jpg" } Apr 4 13:18:27.512: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 4 13:18:27.512: INFO: update-demo-nautilus-2ggtj is verified up and running Apr 4 13:18:27.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kckfz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2997' Apr 4 13:18:27.597: INFO: stderr: "" Apr 4 13:18:27.597: INFO: stdout: "true" Apr 4 13:18:27.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kckfz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2997' Apr 4 13:18:27.694: INFO: stderr: "" Apr 4 13:18:27.694: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 4 13:18:27.694: INFO: validating pod update-demo-nautilus-kckfz Apr 4 13:18:27.698: INFO: got data: { "image": "nautilus.jpg" } Apr 4 13:18:27.698: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 4 13:18:27.698: INFO: update-demo-nautilus-kckfz is verified up and running STEP: using delete to clean up resources Apr 4 13:18:27.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2997' Apr 4 13:18:27.800: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 4 13:18:27.800: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 4 13:18:27.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2997' Apr 4 13:18:27.903: INFO: stderr: "No resources found.\n" Apr 4 13:18:27.903: INFO: stdout: "" Apr 4 13:18:27.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2997 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 4 13:18:28.010: INFO: stderr: "" Apr 4 13:18:28.010: INFO: stdout: "update-demo-nautilus-2ggtj\nupdate-demo-nautilus-kckfz\n" Apr 4 13:18:28.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2997' Apr 4 13:18:28.604: INFO: stderr: "No resources found.\n" Apr 4 13:18:28.604: INFO: stdout: "" Apr 4 13:18:28.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2997 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 4 13:18:28.696: INFO: stderr: "" Apr 4 13:18:28.696: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:18:28.696: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2997" for this suite. Apr 4 13:18:34.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:18:34.791: INFO: namespace kubectl-2997 deletion completed in 6.09115757s • [SLOW TEST:13.077 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:18:34.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-4998 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 4 13:18:34.843: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 4 13:19:00.965: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.1.90:8080/dial?request=hostName&protocol=http&host=10.244.1.89&port=8080&tries=1'] Namespace:pod-network-test-4998 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 13:19:00.965: INFO: >>> kubeConfig: /root/.kube/config I0404 13:19:01.001722 6 log.go:172] (0xc000b993f0) (0xc0012daf00) Create stream I0404 13:19:01.001765 6 log.go:172] (0xc000b993f0) (0xc0012daf00) Stream added, broadcasting: 1 I0404 13:19:01.003474 6 log.go:172] (0xc000b993f0) Reply frame received for 1 I0404 13:19:01.003515 6 log.go:172] (0xc000b993f0) (0xc0010e45a0) Create stream I0404 13:19:01.003524 6 log.go:172] (0xc000b993f0) (0xc0010e45a0) Stream added, broadcasting: 3 I0404 13:19:01.004412 6 log.go:172] (0xc000b993f0) Reply frame received for 3 I0404 13:19:01.004446 6 log.go:172] (0xc000b993f0) (0xc001e228c0) Create stream I0404 13:19:01.004456 6 log.go:172] (0xc000b993f0) (0xc001e228c0) Stream added, broadcasting: 5 I0404 13:19:01.005384 6 log.go:172] (0xc000b993f0) Reply frame received for 5 I0404 13:19:01.086026 6 log.go:172] (0xc000b993f0) Data frame received for 3 I0404 13:19:01.086054 6 log.go:172] (0xc0010e45a0) (3) Data frame handling I0404 13:19:01.086067 6 log.go:172] (0xc0010e45a0) (3) Data frame sent I0404 13:19:01.086953 6 log.go:172] (0xc000b993f0) Data frame received for 5 I0404 13:19:01.086982 6 log.go:172] (0xc001e228c0) (5) Data frame handling I0404 13:19:01.087014 6 log.go:172] (0xc000b993f0) Data frame received for 3 I0404 13:19:01.087029 6 log.go:172] (0xc0010e45a0) (3) Data frame handling I0404 13:19:01.088491 6 log.go:172] (0xc000b993f0) Data frame received for 1 I0404 13:19:01.088505 6 log.go:172] (0xc0012daf00) (1) Data frame handling I0404 13:19:01.088515 6 log.go:172] (0xc0012daf00) (1) Data frame sent I0404 13:19:01.088526 6 log.go:172] (0xc000b993f0) (0xc0012daf00) Stream removed, broadcasting: 1 I0404 13:19:01.088602 6 log.go:172] (0xc000b993f0) Go away received 
I0404 13:19:01.088621 6 log.go:172] (0xc000b993f0) (0xc0012daf00) Stream removed, broadcasting: 1 I0404 13:19:01.088649 6 log.go:172] (0xc000b993f0) (0xc0010e45a0) Stream removed, broadcasting: 3 I0404 13:19:01.088666 6 log.go:172] (0xc000b993f0) (0xc001e228c0) Stream removed, broadcasting: 5 Apr 4 13:19:01.088: INFO: Waiting for endpoints: map[] Apr 4 13:19:01.091: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.90:8080/dial?request=hostName&protocol=http&host=10.244.2.206&port=8080&tries=1'] Namespace:pod-network-test-4998 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 13:19:01.091: INFO: >>> kubeConfig: /root/.kube/config I0404 13:19:01.126978 6 log.go:172] (0xc0009d1290) (0xc0010e4780) Create stream I0404 13:19:01.127006 6 log.go:172] (0xc0009d1290) (0xc0010e4780) Stream added, broadcasting: 1 I0404 13:19:01.130343 6 log.go:172] (0xc0009d1290) Reply frame received for 1 I0404 13:19:01.130395 6 log.go:172] (0xc0009d1290) (0xc002c14000) Create stream I0404 13:19:01.130415 6 log.go:172] (0xc0009d1290) (0xc002c14000) Stream added, broadcasting: 3 I0404 13:19:01.131625 6 log.go:172] (0xc0009d1290) Reply frame received for 3 I0404 13:19:01.131685 6 log.go:172] (0xc0009d1290) (0xc0010e4820) Create stream I0404 13:19:01.131707 6 log.go:172] (0xc0009d1290) (0xc0010e4820) Stream added, broadcasting: 5 I0404 13:19:01.132668 6 log.go:172] (0xc0009d1290) Reply frame received for 5 I0404 13:19:01.204796 6 log.go:172] (0xc0009d1290) Data frame received for 3 I0404 13:19:01.204824 6 log.go:172] (0xc002c14000) (3) Data frame handling I0404 13:19:01.204839 6 log.go:172] (0xc002c14000) (3) Data frame sent I0404 13:19:01.205754 6 log.go:172] (0xc0009d1290) Data frame received for 5 I0404 13:19:01.205786 6 log.go:172] (0xc0010e4820) (5) Data frame handling I0404 13:19:01.205813 6 log.go:172] (0xc0009d1290) Data frame received for 3 I0404 13:19:01.205831 6 log.go:172] 
(0xc002c14000) (3) Data frame handling I0404 13:19:01.207431 6 log.go:172] (0xc0009d1290) Data frame received for 1 I0404 13:19:01.207457 6 log.go:172] (0xc0010e4780) (1) Data frame handling I0404 13:19:01.207473 6 log.go:172] (0xc0010e4780) (1) Data frame sent I0404 13:19:01.207492 6 log.go:172] (0xc0009d1290) (0xc0010e4780) Stream removed, broadcasting: 1 I0404 13:19:01.207521 6 log.go:172] (0xc0009d1290) Go away received I0404 13:19:01.207636 6 log.go:172] (0xc0009d1290) (0xc0010e4780) Stream removed, broadcasting: 1 I0404 13:19:01.207671 6 log.go:172] (0xc0009d1290) (0xc002c14000) Stream removed, broadcasting: 3 I0404 13:19:01.207698 6 log.go:172] (0xc0009d1290) (0xc0010e4820) Stream removed, broadcasting: 5 Apr 4 13:19:01.207: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:19:01.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4998" for this suite. 
Apr 4 13:19:23.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:19:23.324: INFO: namespace pod-network-test-4998 deletion completed in 22.104288012s • [SLOW TEST:48.532 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:19:23.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-aeca12b4-5248-406d-80e3-a6f75d0359e0 STEP: Creating a pod to test consume secrets Apr 4 13:19:23.387: INFO: Waiting up to 5m0s for pod "pod-secrets-551c24eb-73f1-4a8c-bbb6-77a8709c36af" in namespace "secrets-7455" to be "success or failure" Apr 4 13:19:23.390: INFO: Pod "pod-secrets-551c24eb-73f1-4a8c-bbb6-77a8709c36af": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.467523ms Apr 4 13:19:25.395: INFO: Pod "pod-secrets-551c24eb-73f1-4a8c-bbb6-77a8709c36af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007891405s Apr 4 13:19:27.399: INFO: Pod "pod-secrets-551c24eb-73f1-4a8c-bbb6-77a8709c36af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012047394s STEP: Saw pod success Apr 4 13:19:27.399: INFO: Pod "pod-secrets-551c24eb-73f1-4a8c-bbb6-77a8709c36af" satisfied condition "success or failure" Apr 4 13:19:27.402: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-551c24eb-73f1-4a8c-bbb6-77a8709c36af container secret-env-test: STEP: delete the pod Apr 4 13:19:27.441: INFO: Waiting for pod pod-secrets-551c24eb-73f1-4a8c-bbb6-77a8709c36af to disappear Apr 4 13:19:27.444: INFO: Pod pod-secrets-551c24eb-73f1-4a8c-bbb6-77a8709c36af no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:19:27.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7455" for this suite. 
Apr 4 13:19:33.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:19:33.550: INFO: namespace secrets-7455 deletion completed in 6.102148984s • [SLOW TEST:10.226 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:19:33.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 4 13:19:33.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6789' Apr 4 13:19:33.740: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future 
version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 4 13:19:33.740: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: rolling-update to same image controller Apr 4 13:19:33.770: INFO: scanned /root for discovery docs: Apr 4 13:19:33.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-6789' Apr 4 13:19:49.588: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 4 13:19:49.588: INFO: stdout: "Created e2e-test-nginx-rc-e72d096d60ba15219d73c61c94ae9983\nScaling up e2e-test-nginx-rc-e72d096d60ba15219d73c61c94ae9983 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e72d096d60ba15219d73c61c94ae9983 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e72d096d60ba15219d73c61c94ae9983 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. 
Apr 4 13:19:49.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6789' Apr 4 13:19:49.693: INFO: stderr: "" Apr 4 13:19:49.693: INFO: stdout: "e2e-test-nginx-rc-e72d096d60ba15219d73c61c94ae9983-77lnn " Apr 4 13:19:49.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e72d096d60ba15219d73c61c94ae9983-77lnn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6789' Apr 4 13:19:49.788: INFO: stderr: "" Apr 4 13:19:49.788: INFO: stdout: "true" Apr 4 13:19:49.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e72d096d60ba15219d73c61c94ae9983-77lnn -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6789' Apr 4 13:19:49.877: INFO: stderr: "" Apr 4 13:19:49.877: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Apr 4 13:19:49.877: INFO: e2e-test-nginx-rc-e72d096d60ba15219d73c61c94ae9983-77lnn is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Apr 4 13:19:49.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6789' Apr 4 13:19:49.988: INFO: stderr: "" Apr 4 13:19:49.988: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:19:49.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6789" for this suite. 
Apr 4 13:19:56.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:19:56.118: INFO: namespace kubectl-6789 deletion completed in 6.113081995s • [SLOW TEST:22.568 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:19:56.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-8a76da45-9856-4244-ba1a-7b4ce5a705a1 STEP: Creating a pod to test consume configMaps Apr 4 13:19:56.201: INFO: Waiting up to 5m0s for pod "pod-configmaps-20e41fff-3ede-4c65-8ee0-a59c0c229427" in namespace "configmap-7869" to be "success or failure" Apr 4 13:19:56.206: INFO: Pod "pod-configmaps-20e41fff-3ede-4c65-8ee0-a59c0c229427": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.11758ms Apr 4 13:19:58.210: INFO: Pod "pod-configmaps-20e41fff-3ede-4c65-8ee0-a59c0c229427": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008489832s Apr 4 13:20:00.214: INFO: Pod "pod-configmaps-20e41fff-3ede-4c65-8ee0-a59c0c229427": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012876243s STEP: Saw pod success Apr 4 13:20:00.214: INFO: Pod "pod-configmaps-20e41fff-3ede-4c65-8ee0-a59c0c229427" satisfied condition "success or failure" Apr 4 13:20:00.218: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-20e41fff-3ede-4c65-8ee0-a59c0c229427 container configmap-volume-test: STEP: delete the pod Apr 4 13:20:00.269: INFO: Waiting for pod pod-configmaps-20e41fff-3ede-4c65-8ee0-a59c0c229427 to disappear Apr 4 13:20:00.278: INFO: Pod pod-configmaps-20e41fff-3ede-4c65-8ee0-a59c0c229427 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:20:00.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7869" for this suite. 
Apr 4 13:20:06.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:20:06.369: INFO: namespace configmap-7869 deletion completed in 6.087223117s • [SLOW TEST:10.250 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:20:06.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:20:10.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7154" for this suite. 
Apr 4 13:20:52.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:20:52.537: INFO: namespace kubelet-test-7154 deletion completed in 42.091464226s • [SLOW TEST:46.168 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:20:52.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Apr 4 13:20:52.633: INFO: Waiting up to 5m0s for pod "var-expansion-bcc6a64f-f73b-4382-b69b-bb8bb29af27f" in namespace "var-expansion-7653" to be "success or failure" Apr 4 13:20:52.643: INFO: Pod "var-expansion-bcc6a64f-f73b-4382-b69b-bb8bb29af27f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.874711ms Apr 4 13:20:54.648: INFO: Pod "var-expansion-bcc6a64f-f73b-4382-b69b-bb8bb29af27f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014800215s Apr 4 13:20:56.653: INFO: Pod "var-expansion-bcc6a64f-f73b-4382-b69b-bb8bb29af27f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019215424s STEP: Saw pod success Apr 4 13:20:56.653: INFO: Pod "var-expansion-bcc6a64f-f73b-4382-b69b-bb8bb29af27f" satisfied condition "success or failure" Apr 4 13:20:56.656: INFO: Trying to get logs from node iruya-worker pod var-expansion-bcc6a64f-f73b-4382-b69b-bb8bb29af27f container dapi-container: STEP: delete the pod Apr 4 13:20:56.675: INFO: Waiting for pod var-expansion-bcc6a64f-f73b-4382-b69b-bb8bb29af27f to disappear Apr 4 13:20:56.679: INFO: Pod var-expansion-bcc6a64f-f73b-4382-b69b-bb8bb29af27f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:20:56.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7653" for this suite. 
Apr 4 13:21:02.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:21:02.779: INFO: namespace var-expansion-7653 deletion completed in 6.096890935s • [SLOW TEST:10.241 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:21:02.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:22:02.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1470" for this suite. 
Apr 4 13:22:24.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:22:24.974: INFO: namespace container-probe-1470 deletion completed in 22.078405757s • [SLOW TEST:82.195 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:22:24.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 4 13:22:25.012: INFO: Creating deployment "test-recreate-deployment" Apr 4 13:22:25.025: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 4 13:22:25.036: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 4 13:22:27.045: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 4 13:22:27.047: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721603345, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721603345, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721603345, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721603345, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 4 13:22:29.054: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 4 13:22:29.059: INFO: Updating deployment test-recreate-deployment Apr 4 13:22:29.059: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 4 13:22:29.298: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-7334,SelfLink:/apis/apps/v1/namespaces/deployment-7334/deployments/test-recreate-deployment,UID:9ce8e147-ed68-42a8-bb5a-1fd15330acf9,ResourceVersion:3586476,Generation:2,CreationTimestamp:2020-04-04 13:22:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-04-04 13:22:29 +0000 UTC 2020-04-04 13:22:29 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-04-04 13:22:29 +0000 UTC 2020-04-04 13:22:25 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Apr 4 13:22:29.304: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-7334,SelfLink:/apis/apps/v1/namespaces/deployment-7334/replicasets/test-recreate-deployment-5c8c9cc69d,UID:89fe5a42-00ca-480c-b5da-f25aef8c5bc6,ResourceVersion:3586474,Generation:1,CreationTimestamp:2020-04-04 13:22:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 9ce8e147-ed68-42a8-bb5a-1fd15330acf9 0xc002f62c87 0xc002f62c88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 4 13:22:29.304: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 4 13:22:29.304: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-7334,SelfLink:/apis/apps/v1/namespaces/deployment-7334/replicasets/test-recreate-deployment-6df85df6b9,UID:f840c097-bff9-4219-85bc-41acfcbcffab,ResourceVersion:3586465,Generation:2,CreationTimestamp:2020-04-04 13:22:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 9ce8e147-ed68-42a8-bb5a-1fd15330acf9 0xc002f62d57 0xc002f62d58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 4 13:22:29.307: INFO: Pod "test-recreate-deployment-5c8c9cc69d-6hdbd" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-6hdbd,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-7334,SelfLink:/api/v1/namespaces/deployment-7334/pods/test-recreate-deployment-5c8c9cc69d-6hdbd,UID:8edef025-1096-4d1d-a07e-033e62763800,ResourceVersion:3586477,Generation:0,CreationTimestamp:2020-04-04 13:22:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 89fe5a42-00ca-480c-b5da-f25aef8c5bc6 0xc002f63627 0xc002f63628}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4hpgs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4hpgs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4hpgs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f636b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f636d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 13:22:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 13:22:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 13:22:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 13:22:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-04 13:22:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:22:29.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7334" for this suite. 
Apr 4 13:22:35.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:22:35.391: INFO: namespace deployment-7334 deletion completed in 6.081749133s • [SLOW TEST:10.416 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:22:35.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 4 13:22:35.430: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0538e185-effb-4f75-ae16-49890725a5b2" in namespace "projected-1962" to be "success or failure" Apr 4 13:22:35.449: INFO: Pod "downwardapi-volume-0538e185-effb-4f75-ae16-49890725a5b2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.602027ms Apr 4 13:22:37.474: INFO: Pod "downwardapi-volume-0538e185-effb-4f75-ae16-49890725a5b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043117722s Apr 4 13:22:39.483: INFO: Pod "downwardapi-volume-0538e185-effb-4f75-ae16-49890725a5b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052447577s STEP: Saw pod success Apr 4 13:22:39.483: INFO: Pod "downwardapi-volume-0538e185-effb-4f75-ae16-49890725a5b2" satisfied condition "success or failure" Apr 4 13:22:39.485: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-0538e185-effb-4f75-ae16-49890725a5b2 container client-container: STEP: delete the pod Apr 4 13:22:39.518: INFO: Waiting for pod downwardapi-volume-0538e185-effb-4f75-ae16-49890725a5b2 to disappear Apr 4 13:22:39.531: INFO: Pod downwardapi-volume-0538e185-effb-4f75-ae16-49890725a5b2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:22:39.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1962" for this suite. 
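The "memory request" downward API test above works by projecting the container's resource request into a file the container can read. A sketch of the pod spec involved, with hypothetical names and values (the log only shows the generated pod name, not the spec):

```shell
# Sketch (hypothetical names/values) of a projected downwardAPI volume that
# exposes the container's memory request as a file inside the pod.
cat > /tmp/downwardapi-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
EOF
grep -q 'requests.memory' /tmp/downwardapi-pod.yaml && echo downwardapi-ok
```

The test then asserts "success or failure": the pod succeeds only if the file contents match the declared request, which is why the log waits for phase `Succeeded` before fetching the container logs.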
Apr 4 13:22:45.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:22:45.662: INFO: namespace projected-1962 deletion completed in 6.127259821s • [SLOW TEST:10.271 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:22:45.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8056.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8056.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 4 13:22:51.777: INFO: DNS probes using dns-8056/dns-test-8a932378-d735-4767-84cb-85d760dc70f2 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:22:51.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8056" for this suite. 
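The probe scripts above derive the pod's DNS A-record name from its IP address: dots become dashes and the namespace's pod domain is appended. (The doubled `$$` in the logged commands is the test framework's escaping; in plain shell it is a single `$`.) The transformation can be reproduced locally with a made-up pod IP:

```shell
# Reproduce the pod A-record derivation from the dig probe scripts above.
# The IP is a hypothetical example; in the test it comes from `hostname -i`.
pod_ip="10.244.1.5"
podARec=$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-8056.pod.cluster.local"}')
echo "$podARec"   # 10-244-1-5.dns-8056.pod.cluster.local
```

Each probe then writes `OK` to a results file only if `dig` returned a non-empty answer, which is how the prober pod reports per-record success back to the test.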
Apr 4 13:22:57.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:22:57.923: INFO: namespace dns-8056 deletion completed in 6.106454853s • [SLOW TEST:12.261 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:22:57.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 4 13:22:57.980: INFO: Waiting up to 5m0s for pod "pod-84b73fd4-6ad6-4ef3-a0e9-dbf4208ce2b4" in namespace "emptydir-5388" to be "success or failure" Apr 4 13:22:57.983: INFO: Pod "pod-84b73fd4-6ad6-4ef3-a0e9-dbf4208ce2b4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.33334ms Apr 4 13:22:59.987: INFO: Pod "pod-84b73fd4-6ad6-4ef3-a0e9-dbf4208ce2b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006883325s Apr 4 13:23:01.991: INFO: Pod "pod-84b73fd4-6ad6-4ef3-a0e9-dbf4208ce2b4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010826293s STEP: Saw pod success Apr 4 13:23:01.991: INFO: Pod "pod-84b73fd4-6ad6-4ef3-a0e9-dbf4208ce2b4" satisfied condition "success or failure" Apr 4 13:23:01.994: INFO: Trying to get logs from node iruya-worker pod pod-84b73fd4-6ad6-4ef3-a0e9-dbf4208ce2b4 container test-container: STEP: delete the pod Apr 4 13:23:02.011: INFO: Waiting for pod pod-84b73fd4-6ad6-4ef3-a0e9-dbf4208ce2b4 to disappear Apr 4 13:23:02.022: INFO: Pod pod-84b73fd4-6ad6-4ef3-a0e9-dbf4208ce2b4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:23:02.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5388" for this suite. Apr 4 13:23:08.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:23:08.126: INFO: namespace emptydir-5388 deletion completed in 6.101288162s • [SLOW TEST:10.203 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:23:08.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in 
namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-34bfa5c2-0608-48d8-b913-190e40f13ea2 STEP: Creating a pod to test consume configMaps Apr 4 13:23:08.196: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cbfe66eb-71ac-4e17-8887-0b1b7532029d" in namespace "projected-6039" to be "success or failure" Apr 4 13:23:08.217: INFO: Pod "pod-projected-configmaps-cbfe66eb-71ac-4e17-8887-0b1b7532029d": Phase="Pending", Reason="", readiness=false. Elapsed: 20.696378ms Apr 4 13:23:10.220: INFO: Pod "pod-projected-configmaps-cbfe66eb-71ac-4e17-8887-0b1b7532029d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023618369s Apr 4 13:23:12.223: INFO: Pod "pod-projected-configmaps-cbfe66eb-71ac-4e17-8887-0b1b7532029d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026784309s STEP: Saw pod success Apr 4 13:23:12.223: INFO: Pod "pod-projected-configmaps-cbfe66eb-71ac-4e17-8887-0b1b7532029d" satisfied condition "success or failure" Apr 4 13:23:12.225: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-cbfe66eb-71ac-4e17-8887-0b1b7532029d container projected-configmap-volume-test: STEP: delete the pod Apr 4 13:23:12.250: INFO: Waiting for pod pod-projected-configmaps-cbfe66eb-71ac-4e17-8887-0b1b7532029d to disappear Apr 4 13:23:12.256: INFO: Pod pod-projected-configmaps-cbfe66eb-71ac-4e17-8887-0b1b7532029d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:23:12.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6039" for this suite. 
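The `defaultMode` being tested above is an ordinary octal file permission applied to every file projected into the volume. The permission semantics can be demonstrated locally without a cluster (the mode value here is an illustrative assumption; the log does not show which mode the test sets):

```shell
# Demonstrate the octal permission bits that defaultMode controls on a
# projected configMap volume, using a plain local file as a stand-in.
tmpfile=$(mktemp)
chmod 0644 "$tmpfile"        # analogous to `defaultMode: 0644` on the volume
stat -c '%a' "$tmpfile"      # prints: 644
rm -f "$tmpfile"
```

Inside the pod, the test's container simply stats the mounted file and the test asserts the mode matches, which is why the pod again runs to `Succeeded`.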
Apr 4 13:23:18.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:23:18.355: INFO: namespace projected-6039 deletion completed in 6.096272638s • [SLOW TEST:10.228 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:23:18.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 4 13:23:18.423: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b3f086c-926a-4dd2-b715-52d8ed11f894" in namespace "projected-9716" to be "success or failure" Apr 4 13:23:18.427: INFO: Pod "downwardapi-volume-4b3f086c-926a-4dd2-b715-52d8ed11f894": Phase="Pending", Reason="", 
readiness=false. Elapsed: 3.853011ms Apr 4 13:23:20.431: INFO: Pod "downwardapi-volume-4b3f086c-926a-4dd2-b715-52d8ed11f894": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008412924s Apr 4 13:23:22.436: INFO: Pod "downwardapi-volume-4b3f086c-926a-4dd2-b715-52d8ed11f894": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012688258s STEP: Saw pod success Apr 4 13:23:22.436: INFO: Pod "downwardapi-volume-4b3f086c-926a-4dd2-b715-52d8ed11f894" satisfied condition "success or failure" Apr 4 13:23:22.439: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4b3f086c-926a-4dd2-b715-52d8ed11f894 container client-container: STEP: delete the pod Apr 4 13:23:22.475: INFO: Waiting for pod downwardapi-volume-4b3f086c-926a-4dd2-b715-52d8ed11f894 to disappear Apr 4 13:23:22.498: INFO: Pod downwardapi-volume-4b3f086c-926a-4dd2-b715-52d8ed11f894 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:23:22.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9716" for this suite. 
Apr 4 13:23:28.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:23:28.598: INFO: namespace projected-9716 deletion completed in 6.095722151s • [SLOW TEST:10.242 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:23:28.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-fa2cc269-10fe-4910-8484-69f7a9f68203 STEP: Creating a pod to test consume configMaps Apr 4 13:23:28.669: INFO: Waiting up to 5m0s for pod "pod-configmaps-ba1aaebe-49b8-4024-86c5-0c943ee834d1" in namespace "configmap-8805" to be "success or failure" Apr 4 13:23:28.685: INFO: Pod "pod-configmaps-ba1aaebe-49b8-4024-86c5-0c943ee834d1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.264972ms Apr 4 13:23:30.689: INFO: Pod "pod-configmaps-ba1aaebe-49b8-4024-86c5-0c943ee834d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020103266s Apr 4 13:23:32.693: INFO: Pod "pod-configmaps-ba1aaebe-49b8-4024-86c5-0c943ee834d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024459766s STEP: Saw pod success Apr 4 13:23:32.693: INFO: Pod "pod-configmaps-ba1aaebe-49b8-4024-86c5-0c943ee834d1" satisfied condition "success or failure" Apr 4 13:23:32.696: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-ba1aaebe-49b8-4024-86c5-0c943ee834d1 container configmap-volume-test: STEP: delete the pod Apr 4 13:23:32.744: INFO: Waiting for pod pod-configmaps-ba1aaebe-49b8-4024-86c5-0c943ee834d1 to disappear Apr 4 13:23:32.747: INFO: Pod pod-configmaps-ba1aaebe-49b8-4024-86c5-0c943ee834d1 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:23:32.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8805" for this suite. 
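"Consumable in multiple volumes in the same pod" means two volume entries referencing the same ConfigMap, mounted at different paths in one container. A sketch with hypothetical names (the log shows only the generated ConfigMap and pod names):

```shell
# Sketch (hypothetical names) of one ConfigMap consumed via two volumes
# mounted at different paths in the same container.
cat > /tmp/configmap-multivol-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls /etc/configmap-volume-1 /etc/configmap-volume-2"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: shared-configmap
  - name: configmap-volume-2
    configMap:
      name: shared-configmap
EOF
# Both volumes point at the same ConfigMap:
grep -c 'name: shared-configmap' /tmp/configmap-multivol-pod.yaml   # prints: 2
```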
Apr 4 13:23:38.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:23:38.852: INFO: namespace configmap-8805 deletion completed in 6.101860792s • [SLOW TEST:10.255 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:23:38.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:23:42.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5367" for this suite. 
Apr 4 13:23:49.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:23:49.145: INFO: namespace emptydir-wrapper-5367 deletion completed in 6.10698464s • [SLOW TEST:10.293 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:23:49.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0404 13:24:29.370606 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 4 13:24:29.370: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:24:29.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2084" for this suite. 
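The garbage-collector test above deletes the ReplicationController with delete options that tell the server to orphan the dependents, then waits 30 seconds to confirm the pods survive. The options body is plain JSON; a local sketch (no cluster needed, and the RC name in the comment is hypothetical):

```shell
# Sketch of the DeleteOptions body that requests orphaning of dependents,
# as exercised by the "orphan pods created by rc" test above.
cat > /tmp/delete-options.json <<'EOF'
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Orphan"
}
EOF
# With a cluster, newer kubectl expresses the same intent as:
#   kubectl delete rc <rc-name> --cascade=orphan
# (older kubectl releases used --cascade=false)
grep -q '"propagationPolicy": "Orphan"' /tmp/delete-options.json && echo orphan-ok
```

With `Orphan` propagation the GC removes the pods' ownerReferences instead of deleting them, which is exactly the behavior the 30-second wait verifies.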
Apr 4 13:24:39.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:24:39.466: INFO: namespace gc-2084 deletion completed in 10.092037111s • [SLOW TEST:50.320 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:24:39.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-5136 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-5136 STEP: Deleting pre-stop pod Apr 4 13:24:52.570: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:24:52.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-5136" for this suite. Apr 4 13:25:34.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:25:34.690: INFO: namespace prestop-5136 deletion completed in 42.108550546s • [SLOW TEST:55.224 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:25:34.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7985.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@dns-test-service.dns-7985.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7985.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7985.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7985.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7985.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7985.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7985.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7985.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7985.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7985.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 183.217.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.217.183_udp@PTR;check="$$(dig +tcp +noall +answer +search 183.217.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.217.183_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7985.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7985.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7985.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7985.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7985.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7985.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7985.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7985.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7985.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7985.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7985.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 183.217.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.217.183_udp@PTR;check="$$(dig +tcp +noall +answer +search 183.217.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.217.183_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 4 13:25:40.860: INFO: Unable to read wheezy_udp@dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:40.864: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:40.867: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:40.871: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:40.888: INFO: Unable to read jessie_udp@dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:40.890: INFO: Unable to read jessie_tcp@dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:40.893: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local from pod 
dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:40.896: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:40.914: INFO: Lookups using dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0 failed for: [wheezy_udp@dns-test-service.dns-7985.svc.cluster.local wheezy_tcp@dns-test-service.dns-7985.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local jessie_udp@dns-test-service.dns-7985.svc.cluster.local jessie_tcp@dns-test-service.dns-7985.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local] Apr 4 13:25:45.918: INFO: Unable to read wheezy_udp@dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:45.922: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:45.924: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:45.928: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local from pod 
dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:45.951: INFO: Unable to read jessie_udp@dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:45.954: INFO: Unable to read jessie_tcp@dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:45.957: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:45.960: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:45.977: INFO: Lookups using dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0 failed for: [wheezy_udp@dns-test-service.dns-7985.svc.cluster.local wheezy_tcp@dns-test-service.dns-7985.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local jessie_udp@dns-test-service.dns-7985.svc.cluster.local jessie_tcp@dns-test-service.dns-7985.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local] Apr 4 13:25:50.919: INFO: Unable to read wheezy_udp@dns-test-service.dns-7985.svc.cluster.local from pod 
dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:50.923: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:50.926: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:50.930: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:50.951: INFO: Unable to read jessie_udp@dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:50.955: INFO: Unable to read jessie_tcp@dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:50.958: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:50.961: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the 
requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:50.980: INFO: Lookups using dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0 failed for: [wheezy_udp@dns-test-service.dns-7985.svc.cluster.local wheezy_tcp@dns-test-service.dns-7985.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local jessie_udp@dns-test-service.dns-7985.svc.cluster.local jessie_tcp@dns-test-service.dns-7985.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local] Apr 4 13:25:55.919: INFO: Unable to read wheezy_udp@dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:55.923: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:55.927: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:55.930: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:55.954: INFO: Unable to read jessie_udp@dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods 
dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:55.957: INFO: Unable to read jessie_tcp@dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:55.960: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:55.963: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:25:55.983: INFO: Lookups using dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0 failed for: [wheezy_udp@dns-test-service.dns-7985.svc.cluster.local wheezy_tcp@dns-test-service.dns-7985.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local jessie_udp@dns-test-service.dns-7985.svc.cluster.local jessie_tcp@dns-test-service.dns-7985.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local] Apr 4 13:26:00.919: INFO: Unable to read wheezy_udp@dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:26:00.922: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) 
Apr 4 13:26:00.926: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:26:00.930: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:26:00.951: INFO: Unable to read jessie_udp@dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:26:00.954: INFO: Unable to read jessie_tcp@dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:26:00.957: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:26:00.959: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:26:00.976: INFO: Lookups using dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0 failed for: [wheezy_udp@dns-test-service.dns-7985.svc.cluster.local wheezy_tcp@dns-test-service.dns-7985.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local 
jessie_udp@dns-test-service.dns-7985.svc.cluster.local jessie_tcp@dns-test-service.dns-7985.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local] Apr 4 13:26:05.918: INFO: Unable to read wheezy_udp@dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:26:05.922: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:26:05.924: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:26:05.928: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:26:05.952: INFO: Unable to read jessie_udp@dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:26:05.955: INFO: Unable to read jessie_tcp@dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:26:05.958: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local from pod 
dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:26:05.960: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local from pod dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0: the server could not find the requested resource (get pods dns-test-81031b72-1000-4668-bde8-7e793dcc61f0) Apr 4 13:26:05.978: INFO: Lookups using dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0 failed for: [wheezy_udp@dns-test-service.dns-7985.svc.cluster.local wheezy_tcp@dns-test-service.dns-7985.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local jessie_udp@dns-test-service.dns-7985.svc.cluster.local jessie_tcp@dns-test-service.dns-7985.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7985.svc.cluster.local] Apr 4 13:26:10.975: INFO: DNS probes using dns-7985/dns-test-81031b72-1000-4668-bde8-7e793dcc61f0 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:26:11.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7985" for this suite. 
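The dig probe commands shown earlier in this spec derive two DNS names arithmetically from an IP address: a dashed-IP pod A record (the `hostname -i | awk -F. ...` pipeline) and a reversed-octet PTR name under `in-addr.arpa` (e.g. `10.103.217.183` becomes `183.217.103.10.in-addr.arpa.`). A minimal Python sketch of that name construction, for readers tracing the log — the function names are mine, not part of the e2e framework:

```python
def pod_a_record(ip: str, namespace: str) -> str:
    """Dashed-IP pod A record, e.g. 10.244.1.5 -> 10-244-1-5.<ns>.pod.cluster.local."""
    return ip.replace(".", "-") + "." + namespace + ".pod.cluster.local"

def ptr_name(ip: str) -> str:
    """Reverse-lookup name: IPv4 octets reversed, rooted under in-addr.arpa."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa."

# Reproduces the names visible in the probe commands above:
print(pod_a_record("10.244.1.5", "dns-7985"))  # 10-244-1-5.dns-7985.pod.cluster.local
print(ptr_name("10.103.217.183"))              # 183.217.103.10.in-addr.arpa.
```

The test pod then runs `dig +notcp` (UDP) and `dig +tcp` against each such name and writes an `OK` marker file per record on success, which is what the "Unable to read ... / Lookups failed for" entries are polling for.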
Apr 4 13:26:17.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:26:17.621: INFO: namespace dns-7985 deletion completed in 6.192873029s • [SLOW TEST:42.930 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:26:17.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-3542 I0404 13:26:17.681676 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3542, replica count: 1 I0404 13:26:18.732109 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 13:26:19.732333 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 13:26:20.732572 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady Apr 4 13:26:20.866: INFO: Created: latency-svc-5sknd Apr 4 13:26:20.883: INFO: Got endpoints: latency-svc-5sknd [50.808257ms] Apr 4 13:26:20.956: INFO: Created: latency-svc-2c8fb Apr 4 13:26:20.971: INFO: Got endpoints: latency-svc-2c8fb [87.778719ms] Apr 4 13:26:20.998: INFO: Created: latency-svc-h28wh Apr 4 13:26:21.081: INFO: Got endpoints: latency-svc-h28wh [198.084345ms] Apr 4 13:26:21.084: INFO: Created: latency-svc-gscc7 Apr 4 13:26:21.091: INFO: Got endpoints: latency-svc-gscc7 [207.596358ms] Apr 4 13:26:21.118: INFO: Created: latency-svc-pncv8 Apr 4 13:26:21.165: INFO: Got endpoints: latency-svc-pncv8 [281.438262ms] Apr 4 13:26:21.237: INFO: Created: latency-svc-bnnt5 Apr 4 13:26:21.240: INFO: Got endpoints: latency-svc-bnnt5 [356.698647ms] Apr 4 13:26:21.261: INFO: Created: latency-svc-6k8sl Apr 4 13:26:21.274: INFO: Got endpoints: latency-svc-6k8sl [390.485339ms] Apr 4 13:26:21.291: INFO: Created: latency-svc-jcpsg Apr 4 13:26:21.304: INFO: Got endpoints: latency-svc-jcpsg [420.598759ms] Apr 4 13:26:21.321: INFO: Created: latency-svc-hv8x6 Apr 4 13:26:21.334: INFO: Got endpoints: latency-svc-hv8x6 [450.406931ms] Apr 4 13:26:21.387: INFO: Created: latency-svc-ncwdg Apr 4 13:26:21.393: INFO: Got endpoints: latency-svc-ncwdg [509.608602ms] Apr 4 13:26:21.430: INFO: Created: latency-svc-hrclt Apr 4 13:26:21.441: INFO: Got endpoints: latency-svc-hrclt [557.772218ms] Apr 4 13:26:21.459: INFO: Created: latency-svc-rzxt7 Apr 4 13:26:21.478: INFO: Got endpoints: latency-svc-rzxt7 [594.199312ms] Apr 4 13:26:21.543: INFO: Created: latency-svc-gd6kb Apr 4 13:26:21.556: INFO: Got endpoints: latency-svc-gd6kb [672.063782ms] Apr 4 13:26:21.574: INFO: Created: latency-svc-qvrn9 Apr 4 13:26:21.597: INFO: Got endpoints: latency-svc-qvrn9 [713.448679ms] Apr 4 13:26:21.621: INFO: Created: latency-svc-dx2q4 Apr 4 13:26:21.634: INFO: Got endpoints: latency-svc-dx2q4 [750.722846ms] Apr 4 13:26:21.681: INFO: Created: latency-svc-w5nh9 Apr 4 13:26:21.684: 
INFO: Got endpoints: latency-svc-w5nh9 [800.765576ms] Apr 4 13:26:21.723: INFO: Created: latency-svc-vc4nr Apr 4 13:26:21.753: INFO: Got endpoints: latency-svc-vc4nr [782.36277ms] Apr 4 13:26:21.771: INFO: Created: latency-svc-674wg Apr 4 13:26:21.848: INFO: Got endpoints: latency-svc-674wg [766.082102ms] Apr 4 13:26:21.861: INFO: Created: latency-svc-2pkkx Apr 4 13:26:21.888: INFO: Got endpoints: latency-svc-2pkkx [796.305446ms] Apr 4 13:26:21.915: INFO: Created: latency-svc-sr95f Apr 4 13:26:21.930: INFO: Got endpoints: latency-svc-sr95f [764.628913ms] Apr 4 13:26:21.992: INFO: Created: latency-svc-zq5cn Apr 4 13:26:21.999: INFO: Got endpoints: latency-svc-zq5cn [759.18493ms] Apr 4 13:26:22.029: INFO: Created: latency-svc-4cb5d Apr 4 13:26:22.044: INFO: Got endpoints: latency-svc-4cb5d [769.774786ms] Apr 4 13:26:22.065: INFO: Created: latency-svc-5vccw Apr 4 13:26:22.081: INFO: Got endpoints: latency-svc-5vccw [776.827508ms] Apr 4 13:26:22.124: INFO: Created: latency-svc-9jgkt Apr 4 13:26:22.149: INFO: Got endpoints: latency-svc-9jgkt [814.963054ms] Apr 4 13:26:22.150: INFO: Created: latency-svc-7md7q Apr 4 13:26:22.165: INFO: Got endpoints: latency-svc-7md7q [772.118116ms] Apr 4 13:26:22.185: INFO: Created: latency-svc-5f9mk Apr 4 13:26:22.195: INFO: Got endpoints: latency-svc-5f9mk [754.064286ms] Apr 4 13:26:22.215: INFO: Created: latency-svc-8w5q7 Apr 4 13:26:22.249: INFO: Got endpoints: latency-svc-8w5q7 [771.282216ms] Apr 4 13:26:22.263: INFO: Created: latency-svc-brphg Apr 4 13:26:22.274: INFO: Got endpoints: latency-svc-brphg [718.15347ms] Apr 4 13:26:22.293: INFO: Created: latency-svc-nzlgh Apr 4 13:26:22.304: INFO: Got endpoints: latency-svc-nzlgh [707.188416ms] Apr 4 13:26:22.323: INFO: Created: latency-svc-5h8kb Apr 4 13:26:22.347: INFO: Got endpoints: latency-svc-5h8kb [712.978577ms] Apr 4 13:26:22.399: INFO: Created: latency-svc-4szmn Apr 4 13:26:22.407: INFO: Got endpoints: latency-svc-4szmn [722.926295ms] Apr 4 13:26:22.425: INFO: Created: 
latency-svc-k2hfr Apr 4 13:26:22.437: INFO: Got endpoints: latency-svc-k2hfr [683.42004ms] Apr 4 13:26:22.456: INFO: Created: latency-svc-9szs4 Apr 4 13:26:22.467: INFO: Got endpoints: latency-svc-9szs4 [619.745902ms] Apr 4 13:26:22.486: INFO: Created: latency-svc-tlc9b Apr 4 13:26:22.498: INFO: Got endpoints: latency-svc-tlc9b [610.609818ms] Apr 4 13:26:22.549: INFO: Created: latency-svc-mgkfj Apr 4 13:26:22.563: INFO: Got endpoints: latency-svc-mgkfj [633.138309ms] Apr 4 13:26:22.587: INFO: Created: latency-svc-qrcpj Apr 4 13:26:22.601: INFO: Got endpoints: latency-svc-qrcpj [601.482864ms] Apr 4 13:26:22.624: INFO: Created: latency-svc-mnhj6 Apr 4 13:26:22.636: INFO: Got endpoints: latency-svc-mnhj6 [592.511769ms] Apr 4 13:26:22.699: INFO: Created: latency-svc-zvpmh Apr 4 13:26:22.702: INFO: Got endpoints: latency-svc-zvpmh [620.844379ms] Apr 4 13:26:22.731: INFO: Created: latency-svc-s4sdb Apr 4 13:26:22.745: INFO: Got endpoints: latency-svc-s4sdb [596.178976ms] Apr 4 13:26:22.767: INFO: Created: latency-svc-pzf6v Apr 4 13:26:22.785: INFO: Got endpoints: latency-svc-pzf6v [619.470782ms] Apr 4 13:26:22.851: INFO: Created: latency-svc-27sm6 Apr 4 13:26:22.897: INFO: Got endpoints: latency-svc-27sm6 [701.695082ms] Apr 4 13:26:22.917: INFO: Created: latency-svc-hhq4j Apr 4 13:26:22.955: INFO: Got endpoints: latency-svc-hhq4j [706.01411ms] Apr 4 13:26:22.983: INFO: Created: latency-svc-5wzdb Apr 4 13:26:22.998: INFO: Got endpoints: latency-svc-5wzdb [724.438349ms] Apr 4 13:26:23.019: INFO: Created: latency-svc-8lvq5 Apr 4 13:26:23.029: INFO: Got endpoints: latency-svc-8lvq5 [724.908495ms] Apr 4 13:26:23.055: INFO: Created: latency-svc-z7bzn Apr 4 13:26:23.106: INFO: Got endpoints: latency-svc-z7bzn [758.155693ms] Apr 4 13:26:23.139: INFO: Created: latency-svc-pg79r Apr 4 13:26:23.155: INFO: Got endpoints: latency-svc-pg79r [748.110386ms] Apr 4 13:26:23.175: INFO: Created: latency-svc-px6xr Apr 4 13:26:23.191: INFO: Got endpoints: latency-svc-px6xr [754.437823ms] Apr 
4 13:26:23.256: INFO: Created: latency-svc-d9bhr Apr 4 13:26:23.277: INFO: Created: latency-svc-b6bk6 Apr 4 13:26:23.277: INFO: Got endpoints: latency-svc-d9bhr [809.86063ms] Apr 4 13:26:23.288: INFO: Got endpoints: latency-svc-b6bk6 [789.509983ms] Apr 4 13:26:23.307: INFO: Created: latency-svc-wc7nh Apr 4 13:26:23.318: INFO: Got endpoints: latency-svc-wc7nh [755.604564ms] Apr 4 13:26:23.338: INFO: Created: latency-svc-glqgl Apr 4 13:26:23.349: INFO: Got endpoints: latency-svc-glqgl [747.612022ms] Apr 4 13:26:23.399: INFO: Created: latency-svc-t4cz7 Apr 4 13:26:23.402: INFO: Got endpoints: latency-svc-t4cz7 [765.194173ms] Apr 4 13:26:23.427: INFO: Created: latency-svc-s2mtz Apr 4 13:26:23.439: INFO: Got endpoints: latency-svc-s2mtz [737.297757ms] Apr 4 13:26:23.457: INFO: Created: latency-svc-ht5ph Apr 4 13:26:23.470: INFO: Got endpoints: latency-svc-ht5ph [724.129045ms] Apr 4 13:26:23.493: INFO: Created: latency-svc-f8vgm Apr 4 13:26:23.542: INFO: Got endpoints: latency-svc-f8vgm [757.449197ms] Apr 4 13:26:23.571: INFO: Created: latency-svc-wdc4g Apr 4 13:26:23.619: INFO: Got endpoints: latency-svc-wdc4g [721.74568ms] Apr 4 13:26:23.716: INFO: Created: latency-svc-5tnsn Apr 4 13:26:23.719: INFO: Got endpoints: latency-svc-5tnsn [763.991816ms] Apr 4 13:26:23.776: INFO: Created: latency-svc-6b5vj Apr 4 13:26:23.789: INFO: Got endpoints: latency-svc-6b5vj [790.888637ms] Apr 4 13:26:23.902: INFO: Created: latency-svc-9xmml Apr 4 13:26:23.906: INFO: Got endpoints: latency-svc-9xmml [876.117447ms] Apr 4 13:26:23.937: INFO: Created: latency-svc-hbfjv Apr 4 13:26:23.952: INFO: Got endpoints: latency-svc-hbfjv [846.013597ms] Apr 4 13:26:24.040: INFO: Created: latency-svc-9tgvt Apr 4 13:26:24.048: INFO: Got endpoints: latency-svc-9tgvt [893.151812ms] Apr 4 13:26:24.069: INFO: Created: latency-svc-tkkqs Apr 4 13:26:24.085: INFO: Got endpoints: latency-svc-tkkqs [893.50555ms] Apr 4 13:26:24.105: INFO: Created: latency-svc-5rblm Apr 4 13:26:24.133: INFO: Got endpoints: 
latency-svc-5rblm [855.444972ms] Apr 4 13:26:24.183: INFO: Created: latency-svc-p4dz2 Apr 4 13:26:24.187: INFO: Got endpoints: latency-svc-p4dz2 [898.808718ms] Apr 4 13:26:24.213: INFO: Created: latency-svc-tqfd6 Apr 4 13:26:24.231: INFO: Got endpoints: latency-svc-tqfd6 [912.143185ms] Apr 4 13:26:24.261: INFO: Created: latency-svc-lvv9b Apr 4 13:26:24.271: INFO: Got endpoints: latency-svc-lvv9b [922.52727ms] Apr 4 13:26:24.321: INFO: Created: latency-svc-vcgx9 Apr 4 13:26:24.326: INFO: Got endpoints: latency-svc-vcgx9 [923.953334ms] Apr 4 13:26:24.345: INFO: Created: latency-svc-c775f Apr 4 13:26:24.357: INFO: Got endpoints: latency-svc-c775f [918.233972ms] Apr 4 13:26:24.387: INFO: Created: latency-svc-nd2vs Apr 4 13:26:24.398: INFO: Got endpoints: latency-svc-nd2vs [928.486017ms] Apr 4 13:26:24.417: INFO: Created: latency-svc-bdh56 Apr 4 13:26:24.482: INFO: Got endpoints: latency-svc-bdh56 [939.955072ms] Apr 4 13:26:24.496: INFO: Created: latency-svc-cq2fn Apr 4 13:26:24.507: INFO: Got endpoints: latency-svc-cq2fn [887.570639ms] Apr 4 13:26:24.525: INFO: Created: latency-svc-rv5hh Apr 4 13:26:24.537: INFO: Got endpoints: latency-svc-rv5hh [818.002297ms] Apr 4 13:26:24.555: INFO: Created: latency-svc-m5tqj Apr 4 13:26:24.567: INFO: Got endpoints: latency-svc-m5tqj [778.004406ms] Apr 4 13:26:24.615: INFO: Created: latency-svc-kd9f8 Apr 4 13:26:24.626: INFO: Got endpoints: latency-svc-kd9f8 [720.850186ms] Apr 4 13:26:24.651: INFO: Created: latency-svc-bdv6g Apr 4 13:26:24.664: INFO: Got endpoints: latency-svc-bdv6g [712.09417ms] Apr 4 13:26:24.681: INFO: Created: latency-svc-dbltd Apr 4 13:26:24.694: INFO: Got endpoints: latency-svc-dbltd [645.853985ms] Apr 4 13:26:24.711: INFO: Created: latency-svc-9xtfl Apr 4 13:26:24.746: INFO: Got endpoints: latency-svc-9xtfl [661.212809ms] Apr 4 13:26:24.788: INFO: Created: latency-svc-8p9dz Apr 4 13:26:24.819: INFO: Created: latency-svc-pp52m Apr 4 13:26:24.819: INFO: Got endpoints: latency-svc-8p9dz [686.425085ms] Apr 4 
13:26:24.837: INFO: Got endpoints: latency-svc-pp52m [650.307382ms] Apr 4 13:26:24.902: INFO: Created: latency-svc-2n7hq Apr 4 13:26:24.905: INFO: Got endpoints: latency-svc-2n7hq [674.500768ms] Apr 4 13:26:24.933: INFO: Created: latency-svc-lm9rr Apr 4 13:26:24.960: INFO: Got endpoints: latency-svc-lm9rr [688.478479ms] Apr 4 13:26:24.981: INFO: Created: latency-svc-2gcc4 Apr 4 13:26:25.069: INFO: Got endpoints: latency-svc-2gcc4 [743.40798ms] Apr 4 13:26:25.077: INFO: Created: latency-svc-m629s Apr 4 13:26:25.104: INFO: Got endpoints: latency-svc-m629s [746.813409ms] Apr 4 13:26:25.161: INFO: Created: latency-svc-h2xgs Apr 4 13:26:25.219: INFO: Got endpoints: latency-svc-h2xgs [820.624362ms] Apr 4 13:26:25.223: INFO: Created: latency-svc-dbpgl Apr 4 13:26:25.230: INFO: Got endpoints: latency-svc-dbpgl [748.043732ms] Apr 4 13:26:25.257: INFO: Created: latency-svc-pk8rr Apr 4 13:26:25.273: INFO: Got endpoints: latency-svc-pk8rr [766.383578ms] Apr 4 13:26:25.293: INFO: Created: latency-svc-xq6vj Apr 4 13:26:25.309: INFO: Got endpoints: latency-svc-xq6vj [771.756674ms] Apr 4 13:26:25.357: INFO: Created: latency-svc-8wf6m Apr 4 13:26:25.360: INFO: Got endpoints: latency-svc-8wf6m [792.458071ms] Apr 4 13:26:25.383: INFO: Created: latency-svc-mz7sb Apr 4 13:26:25.394: INFO: Got endpoints: latency-svc-mz7sb [767.365594ms] Apr 4 13:26:25.413: INFO: Created: latency-svc-rnhvj Apr 4 13:26:25.424: INFO: Got endpoints: latency-svc-rnhvj [760.138007ms] Apr 4 13:26:25.443: INFO: Created: latency-svc-zjl5g Apr 4 13:26:25.494: INFO: Got endpoints: latency-svc-zjl5g [800.070735ms] Apr 4 13:26:25.515: INFO: Created: latency-svc-rjn6s Apr 4 13:26:25.546: INFO: Got endpoints: latency-svc-rjn6s [799.219591ms] Apr 4 13:26:25.587: INFO: Created: latency-svc-2glmt Apr 4 13:26:25.626: INFO: Got endpoints: latency-svc-2glmt [806.678034ms] Apr 4 13:26:25.640: INFO: Created: latency-svc-s6qnf Apr 4 13:26:25.653: INFO: Got endpoints: latency-svc-s6qnf [816.323654ms] Apr 4 13:26:25.671: INFO: 
Created: latency-svc-jw9bm Apr 4 13:26:25.683: INFO: Got endpoints: latency-svc-jw9bm [778.106126ms] Apr 4 13:26:25.774: INFO: Created: latency-svc-2qvfz Apr 4 13:26:25.776: INFO: Got endpoints: latency-svc-2qvfz [816.492671ms] Apr 4 13:26:25.798: INFO: Created: latency-svc-2s868 Apr 4 13:26:25.810: INFO: Got endpoints: latency-svc-2s868 [741.331613ms] Apr 4 13:26:25.839: INFO: Created: latency-svc-mfvt8 Apr 4 13:26:25.852: INFO: Got endpoints: latency-svc-mfvt8 [748.15179ms] Apr 4 13:26:25.929: INFO: Created: latency-svc-sxlnt Apr 4 13:26:25.942: INFO: Got endpoints: latency-svc-sxlnt [723.594986ms] Apr 4 13:26:25.977: INFO: Created: latency-svc-4c4hx Apr 4 13:26:26.013: INFO: Got endpoints: latency-svc-4c4hx [782.742304ms] Apr 4 13:26:26.069: INFO: Created: latency-svc-2g28z Apr 4 13:26:26.081: INFO: Got endpoints: latency-svc-2g28z [807.911604ms] Apr 4 13:26:26.103: INFO: Created: latency-svc-vtwhv Apr 4 13:26:26.117: INFO: Got endpoints: latency-svc-vtwhv [808.282464ms] Apr 4 13:26:26.139: INFO: Created: latency-svc-bgqz9 Apr 4 13:26:26.154: INFO: Got endpoints: latency-svc-bgqz9 [794.079355ms] Apr 4 13:26:26.214: INFO: Created: latency-svc-8m446 Apr 4 13:26:26.217: INFO: Got endpoints: latency-svc-8m446 [822.918969ms] Apr 4 13:26:26.272: INFO: Created: latency-svc-gbgpw Apr 4 13:26:26.286: INFO: Got endpoints: latency-svc-gbgpw [861.871364ms] Apr 4 13:26:26.307: INFO: Created: latency-svc-sg6dd Apr 4 13:26:26.375: INFO: Got endpoints: latency-svc-sg6dd [880.215145ms] Apr 4 13:26:26.377: INFO: Created: latency-svc-tdtwn Apr 4 13:26:26.383: INFO: Got endpoints: latency-svc-tdtwn [837.190662ms] Apr 4 13:26:26.409: INFO: Created: latency-svc-5qz7c Apr 4 13:26:26.427: INFO: Got endpoints: latency-svc-5qz7c [800.811527ms] Apr 4 13:26:26.457: INFO: Created: latency-svc-zp8mc Apr 4 13:26:26.473: INFO: Got endpoints: latency-svc-zp8mc [820.051695ms] Apr 4 13:26:26.525: INFO: Created: latency-svc-td8m8 Apr 4 13:26:26.528: INFO: Got endpoints: latency-svc-td8m8 
[844.604615ms]
Apr 4 13:26:26.554: INFO: Created: latency-svc-znz5p
Apr 4 13:26:26.564: INFO: Got endpoints: latency-svc-znz5p [787.428159ms]
Apr 4 13:26:26.583: INFO: Created: latency-svc-5h57w
Apr 4 13:26:26.607: INFO: Got endpoints: latency-svc-5h57w [796.242313ms]
Apr 4 13:26:26.668: INFO: Created: latency-svc-d8z4r
Apr 4 13:26:26.671: INFO: Got endpoints: latency-svc-d8z4r [818.770993ms]
Apr 4 13:26:26.691: INFO: Created: latency-svc-qjgsg
Apr 4 13:26:26.703: INFO: Got endpoints: latency-svc-qjgsg [760.202982ms]
Apr 4 13:26:26.721: INFO: Created: latency-svc-9xrsd
Apr 4 13:26:26.733: INFO: Got endpoints: latency-svc-9xrsd [719.696209ms]
Apr 4 13:26:26.751: INFO: Created: latency-svc-q7dhz
Apr 4 13:26:26.763: INFO: Got endpoints: latency-svc-q7dhz [682.195133ms]
Apr 4 13:26:26.806: INFO: Created: latency-svc-7b4vj
Apr 4 13:26:26.809: INFO: Got endpoints: latency-svc-7b4vj [691.416119ms]
Apr 4 13:26:26.829: INFO: Created: latency-svc-nfjps
Apr 4 13:26:26.842: INFO: Got endpoints: latency-svc-nfjps [687.747003ms]
Apr 4 13:26:26.872: INFO: Created: latency-svc-rmzxv
Apr 4 13:26:26.884: INFO: Got endpoints: latency-svc-rmzxv [667.149769ms]
Apr 4 13:26:26.901: INFO: Created: latency-svc-nnztz
Apr 4 13:26:26.968: INFO: Got endpoints: latency-svc-nnztz [681.642465ms]
Apr 4 13:26:26.998: INFO: Created: latency-svc-tp6zn
Apr 4 13:26:27.011: INFO: Got endpoints: latency-svc-tp6zn [636.101699ms]
Apr 4 13:26:27.027: INFO: Created: latency-svc-qptbm
Apr 4 13:26:27.041: INFO: Got endpoints: latency-svc-qptbm [658.137804ms]
Apr 4 13:26:27.063: INFO: Created: latency-svc-gjfg4
Apr 4 13:26:27.105: INFO: Got endpoints: latency-svc-gjfg4 [678.092981ms]
Apr 4 13:26:27.111: INFO: Created: latency-svc-4skxs
Apr 4 13:26:27.141: INFO: Got endpoints: latency-svc-4skxs [667.445379ms]
Apr 4 13:26:27.178: INFO: Created: latency-svc-hrj6b
Apr 4 13:26:27.198: INFO: Got endpoints: latency-svc-hrj6b [669.810099ms]
Apr 4 13:26:27.249: INFO: Created: latency-svc-zzf2l
Apr 4 13:26:27.258: INFO: Got endpoints: latency-svc-zzf2l [694.116222ms]
Apr 4 13:26:27.279: INFO: Created: latency-svc-26wsr
Apr 4 13:26:27.288: INFO: Got endpoints: latency-svc-26wsr [681.355239ms]
Apr 4 13:26:27.309: INFO: Created: latency-svc-9p852
Apr 4 13:26:27.318: INFO: Got endpoints: latency-svc-9p852 [647.034669ms]
Apr 4 13:26:27.345: INFO: Created: latency-svc-jfhbb
Apr 4 13:26:27.381: INFO: Got endpoints: latency-svc-jfhbb [677.983651ms]
Apr 4 13:26:27.405: INFO: Created: latency-svc-plt7z
Apr 4 13:26:27.427: INFO: Got endpoints: latency-svc-plt7z [694.249933ms]
Apr 4 13:26:27.447: INFO: Created: latency-svc-qbhsr
Apr 4 13:26:27.464: INFO: Got endpoints: latency-svc-qbhsr [700.614129ms]
Apr 4 13:26:27.557: INFO: Created: latency-svc-wfc99
Apr 4 13:26:27.572: INFO: Got endpoints: latency-svc-wfc99 [763.443352ms]
Apr 4 13:26:27.591: INFO: Created: latency-svc-kdfwk
Apr 4 13:26:27.602: INFO: Got endpoints: latency-svc-kdfwk [760.58332ms]
Apr 4 13:26:27.656: INFO: Created: latency-svc-z9kb6
Apr 4 13:26:27.659: INFO: Got endpoints: latency-svc-z9kb6 [774.61949ms]
Apr 4 13:26:27.687: INFO: Created: latency-svc-ddwqg
Apr 4 13:26:27.699: INFO: Got endpoints: latency-svc-ddwqg [731.262848ms]
Apr 4 13:26:27.717: INFO: Created: latency-svc-2rnz7
Apr 4 13:26:27.747: INFO: Got endpoints: latency-svc-2rnz7 [736.133133ms]
Apr 4 13:26:27.800: INFO: Created: latency-svc-lrkz9
Apr 4 13:26:27.808: INFO: Got endpoints: latency-svc-lrkz9 [766.55787ms]
Apr 4 13:26:27.837: INFO: Created: latency-svc-rftdx
Apr 4 13:26:27.861: INFO: Got endpoints: latency-svc-rftdx [756.039706ms]
Apr 4 13:26:27.896: INFO: Created: latency-svc-f9cxt
Apr 4 13:26:27.934: INFO: Got endpoints: latency-svc-f9cxt [793.28065ms]
Apr 4 13:26:27.957: INFO: Created: latency-svc-zmm92
Apr 4 13:26:27.970: INFO: Got endpoints: latency-svc-zmm92 [772.541997ms]
Apr 4 13:26:27.993: INFO: Created: latency-svc-d9kn8
Apr 4 13:26:28.007: INFO: Got endpoints: latency-svc-d9kn8 [748.43191ms]
Apr 4 13:26:28.088: INFO: Created: latency-svc-jwrcz
Apr 4 13:26:28.091: INFO: Got endpoints: latency-svc-jwrcz [802.55896ms]
Apr 4 13:26:28.119: INFO: Created: latency-svc-b5wng
Apr 4 13:26:28.133: INFO: Got endpoints: latency-svc-b5wng [814.901717ms]
Apr 4 13:26:28.155: INFO: Created: latency-svc-glwg6
Apr 4 13:26:28.170: INFO: Got endpoints: latency-svc-glwg6 [788.858726ms]
Apr 4 13:26:28.250: INFO: Created: latency-svc-jw2r2
Apr 4 13:26:28.260: INFO: Got endpoints: latency-svc-jw2r2 [832.326268ms]
Apr 4 13:26:28.281: INFO: Created: latency-svc-lvk5j
Apr 4 13:26:28.290: INFO: Got endpoints: latency-svc-lvk5j [826.070676ms]
Apr 4 13:26:28.311: INFO: Created: latency-svc-cfc6j
Apr 4 13:26:28.320: INFO: Got endpoints: latency-svc-cfc6j [747.655062ms]
Apr 4 13:26:28.341: INFO: Created: latency-svc-bssmx
Apr 4 13:26:28.380: INFO: Got endpoints: latency-svc-bssmx [777.753938ms]
Apr 4 13:26:28.395: INFO: Created: latency-svc-gxmlz
Apr 4 13:26:28.425: INFO: Got endpoints: latency-svc-gxmlz [766.073394ms]
Apr 4 13:26:28.455: INFO: Created: latency-svc-wx4p5
Apr 4 13:26:28.465: INFO: Got endpoints: latency-svc-wx4p5 [765.921961ms]
Apr 4 13:26:28.531: INFO: Created: latency-svc-xdzbr
Apr 4 13:26:28.534: INFO: Got endpoints: latency-svc-xdzbr [786.822925ms]
Apr 4 13:26:28.581: INFO: Created: latency-svc-ctv94
Apr 4 13:26:28.598: INFO: Got endpoints: latency-svc-ctv94 [790.364005ms]
Apr 4 13:26:28.623: INFO: Created: latency-svc-c67kv
Apr 4 13:26:28.680: INFO: Got endpoints: latency-svc-c67kv [818.617036ms]
Apr 4 13:26:28.682: INFO: Created: latency-svc-5d4tt
Apr 4 13:26:28.688: INFO: Got endpoints: latency-svc-5d4tt [753.649845ms]
Apr 4 13:26:28.707: INFO: Created: latency-svc-8bkzc
Apr 4 13:26:28.719: INFO: Got endpoints: latency-svc-8bkzc [748.155948ms]
Apr 4 13:26:28.737: INFO: Created: latency-svc-lf2tx
Apr 4 13:26:28.749: INFO: Got endpoints: latency-svc-lf2tx [742.181107ms]
Apr 4 13:26:28.767: INFO: Created: latency-svc-82tn6
Apr 4 13:26:28.836: INFO: Got endpoints: latency-svc-82tn6 [744.963947ms]
Apr 4 13:26:28.838: INFO: Created: latency-svc-7xx25
Apr 4 13:26:28.858: INFO: Got endpoints: latency-svc-7xx25 [724.53549ms]
Apr 4 13:26:28.881: INFO: Created: latency-svc-xjmmv
Apr 4 13:26:28.906: INFO: Got endpoints: latency-svc-xjmmv [736.528101ms]
Apr 4 13:26:28.935: INFO: Created: latency-svc-7gbnt
Apr 4 13:26:28.998: INFO: Got endpoints: latency-svc-7gbnt [737.938809ms]
Apr 4 13:26:29.037: INFO: Created: latency-svc-szx5s
Apr 4 13:26:29.050: INFO: Got endpoints: latency-svc-szx5s [760.168167ms]
Apr 4 13:26:29.073: INFO: Created: latency-svc-z68gm
Apr 4 13:26:29.087: INFO: Got endpoints: latency-svc-z68gm [766.50151ms]
Apr 4 13:26:29.129: INFO: Created: latency-svc-mjc6r
Apr 4 13:26:29.132: INFO: Got endpoints: latency-svc-mjc6r [751.83813ms]
Apr 4 13:26:29.169: INFO: Created: latency-svc-v44nj
Apr 4 13:26:29.189: INFO: Got endpoints: latency-svc-v44nj [764.392378ms]
Apr 4 13:26:29.211: INFO: Created: latency-svc-gkv9m
Apr 4 13:26:29.226: INFO: Got endpoints: latency-svc-gkv9m [760.41219ms]
Apr 4 13:26:29.285: INFO: Created: latency-svc-kjq98
Apr 4 13:26:29.292: INFO: Got endpoints: latency-svc-kjq98 [757.738718ms]
Apr 4 13:26:29.313: INFO: Created: latency-svc-prt29
Apr 4 13:26:29.322: INFO: Got endpoints: latency-svc-prt29 [723.757904ms]
Apr 4 13:26:29.343: INFO: Created: latency-svc-zwwnd
Apr 4 13:26:29.358: INFO: Got endpoints: latency-svc-zwwnd [678.436054ms]
Apr 4 13:26:29.380: INFO: Created: latency-svc-d96kc
Apr 4 13:26:29.428: INFO: Got endpoints: latency-svc-d96kc [740.379071ms]
Apr 4 13:26:29.451: INFO: Created: latency-svc-85922
Apr 4 13:26:29.467: INFO: Got endpoints: latency-svc-85922 [748.259433ms]
Apr 4 13:26:29.487: INFO: Created: latency-svc-dc6wg
Apr 4 13:26:29.497: INFO: Got endpoints: latency-svc-dc6wg [748.292695ms]
Apr 4 13:26:29.523: INFO: Created: latency-svc-67r7v
Apr 4 13:26:29.578: INFO: Got endpoints: latency-svc-67r7v [742.368127ms]
Apr 4 13:26:29.580: INFO: Created: latency-svc-57p7v
Apr 4 13:26:29.588: INFO: Got endpoints: latency-svc-57p7v [729.776595ms]
Apr 4 13:26:29.620: INFO: Created: latency-svc-fr5ck
Apr 4 13:26:29.636: INFO: Got endpoints: latency-svc-fr5ck [729.906141ms]
Apr 4 13:26:29.655: INFO: Created: latency-svc-57wxr
Apr 4 13:26:29.672: INFO: Got endpoints: latency-svc-57wxr [674.81411ms]
Apr 4 13:26:29.740: INFO: Created: latency-svc-lhrr4
Apr 4 13:26:29.743: INFO: Got endpoints: latency-svc-lhrr4 [692.759591ms]
Apr 4 13:26:29.769: INFO: Created: latency-svc-dl7k5
Apr 4 13:26:29.781: INFO: Got endpoints: latency-svc-dl7k5 [694.268909ms]
Apr 4 13:26:29.799: INFO: Created: latency-svc-zd69n
Apr 4 13:26:29.823: INFO: Got endpoints: latency-svc-zd69n [691.167094ms]
Apr 4 13:26:29.886: INFO: Created: latency-svc-95f5p
Apr 4 13:26:29.890: INFO: Got endpoints: latency-svc-95f5p [700.983697ms]
Apr 4 13:26:29.913: INFO: Created: latency-svc-86xfz
Apr 4 13:26:29.938: INFO: Got endpoints: latency-svc-86xfz [712.417196ms]
Apr 4 13:26:29.955: INFO: Created: latency-svc-b5qfz
Apr 4 13:26:29.980: INFO: Got endpoints: latency-svc-b5qfz [688.483563ms]
Apr 4 13:26:30.027: INFO: Created: latency-svc-ms4lj
Apr 4 13:26:30.046: INFO: Got endpoints: latency-svc-ms4lj [724.604058ms]
Apr 4 13:26:30.069: INFO: Created: latency-svc-b8vdg
Apr 4 13:26:30.083: INFO: Got endpoints: latency-svc-b8vdg [724.201533ms]
Apr 4 13:26:30.135: INFO: Created: latency-svc-f8grw
Apr 4 13:26:30.138: INFO: Got endpoints: latency-svc-f8grw [709.258636ms]
Apr 4 13:26:30.171: INFO: Created: latency-svc-ggtxg
Apr 4 13:26:30.186: INFO: Got endpoints: latency-svc-ggtxg [718.484827ms]
Apr 4 13:26:30.207: INFO: Created: latency-svc-m7dtc
Apr 4 13:26:30.221: INFO: Got endpoints: latency-svc-m7dtc [724.254542ms]
Apr 4 13:26:30.274: INFO: Created: latency-svc-s6mcq
Apr 4 13:26:30.297: INFO: Got endpoints: latency-svc-s6mcq [718.883259ms]
Apr 4 13:26:30.339: INFO: Created: latency-svc-dbc86
Apr 4 13:26:30.348: INFO: Got endpoints: latency-svc-dbc86 [759.99136ms]
Apr 4 13:26:30.369: INFO: Created: latency-svc-pvfwb
Apr 4 13:26:30.404: INFO: Got endpoints: latency-svc-pvfwb [768.05902ms]
Apr 4 13:26:30.417: INFO: Created: latency-svc-87rx9
Apr 4 13:26:30.432: INFO: Got endpoints: latency-svc-87rx9 [760.029901ms]
Apr 4 13:26:30.453: INFO: Created: latency-svc-5246t
Apr 4 13:26:30.483: INFO: Got endpoints: latency-svc-5246t [739.83616ms]
Apr 4 13:26:30.537: INFO: Created: latency-svc-sstq2
Apr 4 13:26:30.541: INFO: Got endpoints: latency-svc-sstq2 [760.120622ms]
Apr 4 13:26:30.562: INFO: Created: latency-svc-ppbt5
Apr 4 13:26:30.571: INFO: Got endpoints: latency-svc-ppbt5 [748.102582ms]
Apr 4 13:26:30.591: INFO: Created: latency-svc-wmvfq
Apr 4 13:26:30.608: INFO: Got endpoints: latency-svc-wmvfq [717.414648ms]
Apr 4 13:26:30.627: INFO: Created: latency-svc-9tmxz
Apr 4 13:26:30.686: INFO: Got endpoints: latency-svc-9tmxz [747.995304ms]
Apr 4 13:26:30.705: INFO: Created: latency-svc-s7llv
Apr 4 13:26:30.723: INFO: Got endpoints: latency-svc-s7llv [742.537137ms]
Apr 4 13:26:30.741: INFO: Created: latency-svc-fwjbj
Apr 4 13:26:30.752: INFO: Got endpoints: latency-svc-fwjbj [705.905636ms]
Apr 4 13:26:30.771: INFO: Created: latency-svc-n6cc8
Apr 4 13:26:30.784: INFO: Got endpoints: latency-svc-n6cc8 [700.707083ms]
Apr 4 13:26:30.837: INFO: Created: latency-svc-z9544
Apr 4 13:26:30.855: INFO: Got endpoints: latency-svc-z9544 [716.824933ms]
Apr 4 13:26:30.879: INFO: Created: latency-svc-c7zdk
Apr 4 13:26:30.891: INFO: Got endpoints: latency-svc-c7zdk [705.664145ms]
Apr 4 13:26:30.909: INFO: Created: latency-svc-89lpw
Apr 4 13:26:30.922: INFO: Got endpoints: latency-svc-89lpw [700.196761ms]
Apr 4 13:26:30.922: INFO: Latencies: [87.778719ms 198.084345ms 207.596358ms 281.438262ms 356.698647ms 390.485339ms 420.598759ms 450.406931ms 509.608602ms 557.772218ms 592.511769ms 594.199312ms 596.178976ms 601.482864ms 610.609818ms 619.470782ms 619.745902ms 620.844379ms 633.138309ms 636.101699ms 645.853985ms 647.034669ms 650.307382ms 658.137804ms 661.212809ms 667.149769ms 667.445379ms 669.810099ms 672.063782ms 674.500768ms 674.81411ms 677.983651ms 678.092981ms 678.436054ms 681.355239ms 681.642465ms 682.195133ms 683.42004ms 686.425085ms 687.747003ms 688.478479ms 688.483563ms 691.167094ms 691.416119ms 692.759591ms 694.116222ms 694.249933ms 694.268909ms 700.196761ms 700.614129ms 700.707083ms 700.983697ms 701.695082ms 705.664145ms 705.905636ms 706.01411ms 707.188416ms 709.258636ms 712.09417ms 712.417196ms 712.978577ms 713.448679ms 716.824933ms 717.414648ms 718.15347ms 718.484827ms 718.883259ms 719.696209ms 720.850186ms 721.74568ms 722.926295ms 723.594986ms 723.757904ms 724.129045ms 724.201533ms 724.254542ms 724.438349ms 724.53549ms 724.604058ms 724.908495ms 729.776595ms 729.906141ms 731.262848ms 736.133133ms 736.528101ms 737.297757ms 737.938809ms 739.83616ms 740.379071ms 741.331613ms 742.181107ms 742.368127ms 742.537137ms 743.40798ms 744.963947ms 746.813409ms 747.612022ms 747.655062ms 747.995304ms 748.043732ms 748.102582ms 748.110386ms 748.15179ms 748.155948ms 748.259433ms 748.292695ms 748.43191ms 750.722846ms 751.83813ms 753.649845ms 754.064286ms 754.437823ms 755.604564ms 756.039706ms 757.449197ms 757.738718ms 758.155693ms 759.18493ms 759.99136ms 760.029901ms 760.120622ms 760.138007ms 760.168167ms 760.202982ms 760.41219ms 760.58332ms 763.443352ms 763.991816ms 764.392378ms 764.628913ms 765.194173ms 765.921961ms 766.073394ms 766.082102ms 766.383578ms 766.50151ms 766.55787ms 767.365594ms 768.05902ms 769.774786ms 771.282216ms 771.756674ms 772.118116ms 772.541997ms 774.61949ms 776.827508ms 777.753938ms 778.004406ms 778.106126ms 782.36277ms 782.742304ms 786.822925ms 787.428159ms 788.858726ms 789.509983ms 790.364005ms 790.888637ms 792.458071ms 793.28065ms 794.079355ms 796.242313ms 796.305446ms 799.219591ms 800.070735ms 800.765576ms 800.811527ms 802.55896ms 806.678034ms 807.911604ms 808.282464ms 809.86063ms 814.901717ms 814.963054ms 816.323654ms 816.492671ms 818.002297ms 818.617036ms 818.770993ms 820.051695ms 820.624362ms 822.918969ms 826.070676ms 832.326268ms 837.190662ms 844.604615ms 846.013597ms 855.444972ms 861.871364ms 876.117447ms 880.215145ms 887.570639ms 893.151812ms 893.50555ms 898.808718ms 912.143185ms 918.233972ms 922.52727ms 923.953334ms 928.486017ms 939.955072ms]
Apr 4 13:26:30.922: INFO: 50 %ile: 748.102582ms
Apr 4 13:26:30.922: INFO: 90 %ile: 822.918969ms
Apr 4 13:26:30.922: INFO: 99 %ile: 928.486017ms
Apr 4 13:26:30.922: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:26:30.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-3542" for this suite.
Apr 4 13:26:50.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:26:51.082: INFO: namespace svc-latency-3542 deletion completed in 20.152013362s
• [SLOW TEST:33.461 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:26:51.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-7b3b699b-edeb-42d9-81bf-cc98f76e447a
STEP: Creating a pod to test consume configMaps
Apr 4 13:26:51.145: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e0a785c0-722f-43df-84e7-aec00c6ccb1c" in namespace "projected-8549" to be "success or failure"
Apr 4 13:26:51.149: INFO: Pod "pod-projected-configmaps-e0a785c0-722f-43df-84e7-aec00c6ccb1c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.212889ms
Apr 4 13:26:53.153: INFO: Pod "pod-projected-configmaps-e0a785c0-722f-43df-84e7-aec00c6ccb1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008678689s
Apr 4 13:26:55.157: INFO: Pod "pod-projected-configmaps-e0a785c0-722f-43df-84e7-aec00c6ccb1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012574727s
STEP: Saw pod success
Apr 4 13:26:55.157: INFO: Pod "pod-projected-configmaps-e0a785c0-722f-43df-84e7-aec00c6ccb1c" satisfied condition "success or failure"
Apr 4 13:26:55.160: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-e0a785c0-722f-43df-84e7-aec00c6ccb1c container projected-configmap-volume-test:
STEP: delete the pod
Apr 4 13:26:55.209: INFO: Waiting for pod pod-projected-configmaps-e0a785c0-722f-43df-84e7-aec00c6ccb1c to disappear
Apr 4 13:26:55.214: INFO: Pod pod-projected-configmaps-e0a785c0-722f-43df-84e7-aec00c6ccb1c no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:26:55.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8549" for this suite.
Apr 4 13:27:01.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:27:01.299: INFO: namespace projected-8549 deletion completed in 6.081200937s
• [SLOW TEST:10.217 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:27:01.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-942883cd-9925-4311-9ce1-7131f57840c5
STEP: Creating a pod to test consume configMaps
Apr 4 13:27:01.384: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-75bf6239-4b18-4c5c-88be-e292a05379ef" in namespace "projected-1847" to be "success or failure"
Apr 4 13:27:01.410: INFO: Pod "pod-projected-configmaps-75bf6239-4b18-4c5c-88be-e292a05379ef": Phase="Pending", Reason="", readiness=false. Elapsed: 26.01759ms
Apr 4 13:27:03.424: INFO: Pod "pod-projected-configmaps-75bf6239-4b18-4c5c-88be-e292a05379ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040714025s
Apr 4 13:27:05.429: INFO: Pod "pod-projected-configmaps-75bf6239-4b18-4c5c-88be-e292a05379ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045228341s
STEP: Saw pod success
Apr 4 13:27:05.429: INFO: Pod "pod-projected-configmaps-75bf6239-4b18-4c5c-88be-e292a05379ef" satisfied condition "success or failure"
Apr 4 13:27:05.432: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-75bf6239-4b18-4c5c-88be-e292a05379ef container projected-configmap-volume-test:
STEP: delete the pod
Apr 4 13:27:05.450: INFO: Waiting for pod pod-projected-configmaps-75bf6239-4b18-4c5c-88be-e292a05379ef to disappear
Apr 4 13:27:05.454: INFO: Pod pod-projected-configmaps-75bf6239-4b18-4c5c-88be-e292a05379ef no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:27:05.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1847" for this suite.
Apr 4 13:27:11.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:27:11.566: INFO: namespace projected-1847 deletion completed in 6.107562832s
• [SLOW TEST:10.266 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:27:11.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-a7247051-fc6d-4026-8ee2-38f611137a71 in namespace container-probe-957
Apr 4 13:27:15.663: INFO: Started pod busybox-a7247051-fc6d-4026-8ee2-38f611137a71 in namespace container-probe-957
STEP: checking the pod's current state and verifying that restartCount is present
Apr 4 13:27:15.667: INFO: Initial restart count of pod busybox-a7247051-fc6d-4026-8ee2-38f611137a71 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:31:16.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-957" for this suite.
Apr 4 13:31:22.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:31:22.525: INFO: namespace container-probe-957 deletion completed in 6.09150288s
• [SLOW TEST:250.959 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:31:22.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4605
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Apr 4 13:31:22.595: INFO: Found 0 stateful pods, waiting for 3
Apr 4 13:31:32.601: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 4 13:31:32.601: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 4 13:31:32.601: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Apr 4 13:31:42.600: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 4 13:31:42.600: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 4 13:31:42.600: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Apr 4 13:31:42.629: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Apr 4 13:31:52.670: INFO: Updating stateful set ss2
Apr 4 13:31:52.727: INFO: Waiting for Pod statefulset-4605/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Apr 4 13:32:02.883: INFO: Found 2 stateful pods, waiting for 3
Apr 4 13:32:12.888: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 4 13:32:12.888: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 4 13:32:12.888: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Apr 4 13:32:12.912: INFO: Updating stateful set ss2
Apr 4 13:32:12.924: INFO: Waiting for Pod statefulset-4605/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 4 13:32:22.950: INFO: Updating stateful set ss2
Apr 4 13:32:22.990: INFO: Waiting for StatefulSet statefulset-4605/ss2 to complete update
Apr 4 13:32:22.990: INFO: Waiting for Pod statefulset-4605/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 4 13:32:32.998: INFO: Waiting for StatefulSet statefulset-4605/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 4 13:32:43.003: INFO: Deleting all statefulset in ns statefulset-4605
Apr 4 13:32:43.005: INFO: Scaling statefulset ss2 to 0
Apr 4 13:33:13.019: INFO: Waiting for statefulset status.replicas updated to 0
Apr 4 13:33:13.023: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:33:13.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4605" for this suite.
Apr 4 13:33:19.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:33:19.149: INFO: namespace statefulset-4605 deletion completed in 6.110515394s
• [SLOW TEST:116.623 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:33:19.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 4 13:33:19.233: INFO: Waiting up to 5m0s for pod "downward-api-4329c3eb-2b48-4d2e-afb0-3b801ae36164" in namespace "downward-api-4875" to be "success or failure"
Apr 4 13:33:19.273: INFO: Pod "downward-api-4329c3eb-2b48-4d2e-afb0-3b801ae36164": Phase="Pending", Reason="", readiness=false. Elapsed: 39.366246ms
Apr 4 13:33:21.277: INFO: Pod "downward-api-4329c3eb-2b48-4d2e-afb0-3b801ae36164": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043302484s
Apr 4 13:33:23.281: INFO: Pod "downward-api-4329c3eb-2b48-4d2e-afb0-3b801ae36164": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047973467s
STEP: Saw pod success
Apr 4 13:33:23.282: INFO: Pod "downward-api-4329c3eb-2b48-4d2e-afb0-3b801ae36164" satisfied condition "success or failure"
Apr 4 13:33:23.285: INFO: Trying to get logs from node iruya-worker pod downward-api-4329c3eb-2b48-4d2e-afb0-3b801ae36164 container dapi-container:
STEP: delete the pod
Apr 4 13:33:23.305: INFO: Waiting for pod downward-api-4329c3eb-2b48-4d2e-afb0-3b801ae36164 to disappear
Apr 4 13:33:23.325: INFO: Pod downward-api-4329c3eb-2b48-4d2e-afb0-3b801ae36164 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:33:23.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4875" for this suite.
Apr 4 13:33:29.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:33:29.420: INFO: namespace downward-api-4875 deletion completed in 6.091275827s • [SLOW TEST:10.271 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:33:29.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Apr 4 13:33:29.461: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2431" to be "success or failure" Apr 4 13:33:29.494: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 33.203271ms Apr 4 13:33:31.499: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.037503929s Apr 4 13:33:33.503: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041403152s STEP: Saw pod success Apr 4 13:33:33.503: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Apr 4 13:33:33.505: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Apr 4 13:33:33.541: INFO: Waiting for pod pod-host-path-test to disappear Apr 4 13:33:33.566: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:33:33.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-2431" for this suite. Apr 4 13:33:39.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:33:39.767: INFO: namespace hostpath-2431 deletion completed in 6.196177147s • [SLOW TEST:10.347 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:33:39.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 4 13:33:39.878: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 4 13:33:44.883: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 4 13:33:44.883: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 4 13:33:44.926: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-4850,SelfLink:/apis/apps/v1/namespaces/deployment-4850/deployments/test-cleanup-deployment,UID:a9b8d6a7-2ac1-4faf-8f29-195116029417,ResourceVersion:3590068,Generation:1,CreationTimestamp:2020-04-04 13:33:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Apr 4 13:33:44.956: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-4850,SelfLink:/apis/apps/v1/namespaces/deployment-4850/replicasets/test-cleanup-deployment-55bbcbc84c,UID:9035b2ca-af3c-4af7-a8cc-82dd2f13c8a7,ResourceVersion:3590070,Generation:1,CreationTimestamp:2020-04-04 13:33:44 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment a9b8d6a7-2ac1-4faf-8f29-195116029417 0xc0022ef537 0xc0022ef538}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 4 13:33:44.956: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 4 13:33:44.956: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-4850,SelfLink:/apis/apps/v1/namespaces/deployment-4850/replicasets/test-cleanup-controller,UID:d9f159d7-84d0-4a91-81ab-bb7ba5d4faeb,ResourceVersion:3590069,Generation:1,CreationTimestamp:2020-04-04 13:33:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment a9b8d6a7-2ac1-4faf-8f29-195116029417 0xc0022ef467 0xc0022ef468}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 4 13:33:44.987: INFO: Pod "test-cleanup-controller-dvg2f" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-dvg2f,GenerateName:test-cleanup-controller-,Namespace:deployment-4850,SelfLink:/api/v1/namespaces/deployment-4850/pods/test-cleanup-controller-dvg2f,UID:7717794b-f861-4b58-849c-2a0a3b6f8b52,ResourceVersion:3590061,Generation:0,CreationTimestamp:2020-04-04 13:33:39 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller d9f159d7-84d0-4a91-81ab-bb7ba5d4faeb 0xc00268c4e7 0xc00268c4e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-b6dqq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-b6dqq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-b6dqq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00268c560} {node.kubernetes.io/unreachable Exists NoExecute 0xc00268c580}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 13:33:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 13:33:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 13:33:42 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-04-04 13:33:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.228,StartTime:2020-04-04 13:33:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-04 13:33:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0466ed345d138d23ca6726641781d432a286e32de803f4c2d2d4c104ff61e694}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 13:33:44.987: INFO: Pod "test-cleanup-deployment-55bbcbc84c-cnwsq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-cnwsq,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-4850,SelfLink:/api/v1/namespaces/deployment-4850/pods/test-cleanup-deployment-55bbcbc84c-cnwsq,UID:a3bfc0b2-6276-4a3b-9384-ad86e0a3df76,ResourceVersion:3590076,Generation:0,CreationTimestamp:2020-04-04 13:33:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 9035b2ca-af3c-4af7-a8cc-82dd2f13c8a7 0xc00268c667 0xc00268c668}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-b6dqq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-b6dqq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-b6dqq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00268c6e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00268c700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 13:33:44 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:33:44.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4850" for this suite. 
Apr 4 13:33:51.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:33:51.181: INFO: namespace deployment-4850 deletion completed in 6.161736326s • [SLOW TEST:11.413 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:33:51.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0404 13:34:01.860203 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 4 13:34:01.860: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:34:01.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3212" for this suite. 
Apr 4 13:34:09.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:34:09.953: INFO: namespace gc-3212 deletion completed in 8.089379137s • [SLOW TEST:18.771 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:34:09.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Apr 4 13:34:10.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1678' Apr 4 13:34:12.524: INFO: stderr: "" Apr 4 13:34:12.524: INFO: stdout: "pod/pause created\n" Apr 4 13:34:12.524: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 4 13:34:12.524: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1678" to be "running and ready" Apr 4 13:34:12.603: INFO: Pod "pause": Phase="Pending", Reason="", 
readiness=false. Elapsed: 79.315501ms Apr 4 13:34:14.607: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083343436s Apr 4 13:34:16.612: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.087800862s Apr 4 13:34:16.612: INFO: Pod "pause" satisfied condition "running and ready" Apr 4 13:34:16.612: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Apr 4 13:34:16.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1678' Apr 4 13:34:16.722: INFO: stderr: "" Apr 4 13:34:16.722: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 4 13:34:16.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1678' Apr 4 13:34:16.817: INFO: stderr: "" Apr 4 13:34:16.817: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 4 13:34:16.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1678' Apr 4 13:34:16.915: INFO: stderr: "" Apr 4 13:34:16.915: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 4 13:34:16.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1678' Apr 4 13:34:17.018: INFO: stderr: "" Apr 4 13:34:17.018: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] [k8s.io] Kubectl label 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Apr 4 13:34:17.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1678' Apr 4 13:34:17.164: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 4 13:34:17.164: INFO: stdout: "pod \"pause\" force deleted\n" Apr 4 13:34:17.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1678' Apr 4 13:34:17.256: INFO: stderr: "No resources found.\n" Apr 4 13:34:17.256: INFO: stdout: "" Apr 4 13:34:17.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1678 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 4 13:34:17.345: INFO: stderr: "" Apr 4 13:34:17.345: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:34:17.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1678" for this suite. 
Apr 4 13:34:23.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:34:23.447: INFO: namespace kubectl-1678 deletion completed in 6.098622418s • [SLOW TEST:13.494 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:34:23.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 4 13:34:23.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Apr 4 13:34:23.660: INFO: stderr: "" Apr 4 13:34:23.660: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.10\", GitCommit:\"1bea6c00a7055edef03f1d4bb58b773fa8917f11\", GitTreeState:\"clean\", BuildDate:\"2020-03-18T15:12:55Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", 
GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:34:23.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3263" for this suite. Apr 4 13:34:29.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:34:29.787: INFO: namespace kubectl-3263 deletion completed in 6.122450166s • [SLOW TEST:6.340 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:34:29.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 4 13:34:29.895: INFO: Create a RollingUpdate DaemonSet Apr 4 13:34:29.899: INFO: Check that daemon pods launch on every node of the cluster Apr 4 13:34:29.904: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:34:29.909: INFO: Number of nodes with available pods: 0 Apr 4 13:34:29.909: INFO: Node iruya-worker is running more than one daemon pod Apr 4 13:34:30.939: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:34:30.942: INFO: Number of nodes with available pods: 0 Apr 4 13:34:30.942: INFO: Node iruya-worker is running more than one daemon pod Apr 4 13:34:31.915: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:34:31.919: INFO: Number of nodes with available pods: 0 Apr 4 13:34:31.919: INFO: Node iruya-worker is running more than one daemon pod Apr 4 13:34:32.917: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:34:32.938: INFO: Number of nodes with available pods: 0 Apr 4 13:34:32.938: INFO: Node iruya-worker is running more than one daemon pod Apr 4 13:34:33.917: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:34:33.920: INFO: Number of nodes with available pods: 1 Apr 4 13:34:33.920: INFO: Node iruya-worker2 is running more than one daemon pod Apr 4 13:34:34.914: INFO: DaemonSet pods can't tolerate node 
iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:34:34.918: INFO: Number of nodes with available pods: 2 Apr 4 13:34:34.918: INFO: Number of running nodes: 2, number of available pods: 2 Apr 4 13:34:34.918: INFO: Update the DaemonSet to trigger a rollout Apr 4 13:34:34.944: INFO: Updating DaemonSet daemon-set Apr 4 13:34:42.965: INFO: Roll back the DaemonSet before rollout is complete Apr 4 13:34:42.971: INFO: Updating DaemonSet daemon-set Apr 4 13:34:42.971: INFO: Make sure DaemonSet rollback is complete Apr 4 13:34:42.976: INFO: Wrong image for pod: daemon-set-868np. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Apr 4 13:34:42.976: INFO: Pod daemon-set-868np is not available Apr 4 13:34:42.983: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:34:43.987: INFO: Wrong image for pod: daemon-set-868np. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Apr 4 13:34:43.987: INFO: Pod daemon-set-868np is not available Apr 4 13:34:43.991: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:34:44.987: INFO: Wrong image for pod: daemon-set-868np. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
Apr 4 13:34:44.987: INFO: Pod daemon-set-868np is not available Apr 4 13:34:44.991: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:34:45.988: INFO: Pod daemon-set-55ht9 is not available Apr 4 13:34:45.990: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3435, will wait for the garbage collector to delete the pods Apr 4 13:34:46.054: INFO: Deleting DaemonSet.extensions daemon-set took: 6.266489ms Apr 4 13:34:46.354: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.29679ms Apr 4 13:34:51.958: INFO: Number of nodes with available pods: 0 Apr 4 13:34:51.958: INFO: Number of running nodes: 0, number of available pods: 0 Apr 4 13:34:51.963: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3435/daemonsets","resourceVersion":"3590535"},"items":null} Apr 4 13:34:51.966: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3435/pods","resourceVersion":"3590535"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:34:51.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3435" for this suite. 
Apr 4 13:34:57.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:34:58.074: INFO: namespace daemonsets-3435 deletion completed in 6.095860598s • [SLOW TEST:28.287 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:34:58.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 4 13:34:58.114: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 5.335495ms) Apr 4 13:34:58.117: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.129839ms) Apr 4 13:34:58.120: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.7744ms) Apr 4 13:34:58.123: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.006128ms) Apr 4 13:34:58.143: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 19.652683ms) Apr 4 13:34:58.147: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.867277ms) Apr 4 13:34:58.151: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.075854ms) Apr 4 13:34:58.155: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.674337ms) Apr 4 13:34:58.158: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.424845ms) Apr 4 13:34:58.161: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.150586ms) Apr 4 13:34:58.164: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.723932ms) Apr 4 13:34:58.167: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.173356ms) Apr 4 13:34:58.170: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.822067ms) Apr 4 13:34:58.173: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.959787ms) Apr 4 13:34:58.177: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.585955ms) Apr 4 13:34:58.180: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.043958ms) Apr 4 13:34:58.183: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.278187ms) Apr 4 13:34:58.187: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.617354ms) Apr 4 13:34:58.190: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.26513ms) Apr 4 13:34:58.194: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.426029ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:34:58.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-314" for this suite. Apr 4 13:35:04.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:35:04.333: INFO: namespace proxy-314 deletion completed in 6.1362502s • [SLOW TEST:6.258 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:35:04.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 4 13:35:04.383: INFO: PodSpec: initContainers in spec.initContainers 
[AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:35:11.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2616" for this suite. Apr 4 13:35:33.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:35:33.259: INFO: namespace init-container-2616 deletion completed in 22.12295712s • [SLOW TEST:28.925 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:35:33.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 4 13:35:33.316: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3fa8132b-faa6-4df6-91ab-739a9b23ba24" 
in namespace "projected-1736" to be "success or failure" Apr 4 13:35:33.318: INFO: Pod "downwardapi-volume-3fa8132b-faa6-4df6-91ab-739a9b23ba24": Phase="Pending", Reason="", readiness=false. Elapsed: 1.920253ms Apr 4 13:35:35.322: INFO: Pod "downwardapi-volume-3fa8132b-faa6-4df6-91ab-739a9b23ba24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005732547s Apr 4 13:35:37.327: INFO: Pod "downwardapi-volume-3fa8132b-faa6-4df6-91ab-739a9b23ba24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010089614s STEP: Saw pod success Apr 4 13:35:37.327: INFO: Pod "downwardapi-volume-3fa8132b-faa6-4df6-91ab-739a9b23ba24" satisfied condition "success or failure" Apr 4 13:35:37.330: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-3fa8132b-faa6-4df6-91ab-739a9b23ba24 container client-container: STEP: delete the pod Apr 4 13:35:37.360: INFO: Waiting for pod downwardapi-volume-3fa8132b-faa6-4df6-91ab-739a9b23ba24 to disappear Apr 4 13:35:37.366: INFO: Pod downwardapi-volume-3fa8132b-faa6-4df6-91ab-739a9b23ba24 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:35:37.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1736" for this suite. 
Apr 4 13:35:43.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:35:43.454: INFO: namespace projected-1736 deletion completed in 6.08349951s • [SLOW TEST:10.194 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:35:43.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 4 13:35:43.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5041' Apr 4 13:35:43.605: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future 
version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 4 13:35:43.605: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Apr 4 13:35:45.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-5041' Apr 4 13:35:45.751: INFO: stderr: "" Apr 4 13:35:45.752: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:35:45.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5041" for this suite. Apr 4 13:37:47.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:37:47.863: INFO: namespace kubectl-5041 deletion completed in 2m2.108558135s • [SLOW TEST:124.409 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Apr 4 13:37:47.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-30d69395-fa71-429b-9210-8ce7a1ee4d02 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:37:47.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2061" for this suite. Apr 4 13:37:53.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:37:54.010: INFO: namespace secrets-2061 deletion completed in 6.089896021s • [SLOW TEST:6.146 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:37:54.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Apr 4 13:37:54.112: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:37:54.123: INFO: Number of nodes with available pods: 0 Apr 4 13:37:54.123: INFO: Node iruya-worker is running more than one daemon pod Apr 4 13:37:55.128: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:37:55.131: INFO: Number of nodes with available pods: 0 Apr 4 13:37:55.131: INFO: Node iruya-worker is running more than one daemon pod Apr 4 13:37:56.128: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:37:56.131: INFO: Number of nodes with available pods: 0 Apr 4 13:37:56.131: INFO: Node iruya-worker is running more than one daemon pod Apr 4 13:37:57.193: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:37:57.196: INFO: Number of nodes with available pods: 0 Apr 4 13:37:57.197: INFO: Node iruya-worker is running more than one daemon pod Apr 4 13:37:58.129: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:37:58.132: INFO: Number of nodes with available pods: 2 Apr 4 13:37:58.132: INFO: Number 
of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Apr 4 13:37:58.175: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:37:58.178: INFO: Number of nodes with available pods: 1 Apr 4 13:37:58.178: INFO: Node iruya-worker2 is running more than one daemon pod Apr 4 13:37:59.184: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:37:59.187: INFO: Number of nodes with available pods: 1 Apr 4 13:37:59.187: INFO: Node iruya-worker2 is running more than one daemon pod Apr 4 13:38:00.183: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:38:00.187: INFO: Number of nodes with available pods: 1 Apr 4 13:38:00.187: INFO: Node iruya-worker2 is running more than one daemon pod Apr 4 13:38:01.183: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:38:01.186: INFO: Number of nodes with available pods: 1 Apr 4 13:38:01.186: INFO: Node iruya-worker2 is running more than one daemon pod Apr 4 13:38:02.183: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:38:02.187: INFO: Number of nodes with available pods: 1 Apr 4 13:38:02.187: INFO: Node iruya-worker2 is running more than one daemon pod Apr 4 13:38:03.183: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node 
Apr 4 13:38:03.187: INFO: Number of nodes with available pods: 1 Apr 4 13:38:03.187: INFO: Node iruya-worker2 is running more than one daemon pod Apr 4 13:38:04.184: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:38:04.187: INFO: Number of nodes with available pods: 1 Apr 4 13:38:04.187: INFO: Node iruya-worker2 is running more than one daemon pod Apr 4 13:38:05.183: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:38:05.186: INFO: Number of nodes with available pods: 1 Apr 4 13:38:05.186: INFO: Node iruya-worker2 is running more than one daemon pod Apr 4 13:38:06.184: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:38:06.188: INFO: Number of nodes with available pods: 1 Apr 4 13:38:06.188: INFO: Node iruya-worker2 is running more than one daemon pod Apr 4 13:38:07.183: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:38:07.187: INFO: Number of nodes with available pods: 1 Apr 4 13:38:07.187: INFO: Node iruya-worker2 is running more than one daemon pod Apr 4 13:38:08.184: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:38:08.188: INFO: Number of nodes with available pods: 1 Apr 4 13:38:08.188: INFO: Node iruya-worker2 is running more than one daemon pod Apr 4 13:38:09.183: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node Apr 4 13:38:09.186: INFO: Number of nodes with available pods: 1 Apr 4 13:38:09.186: INFO: Node iruya-worker2 is running more than one daemon pod Apr 4 13:38:10.183: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:38:10.187: INFO: Number of nodes with available pods: 1 Apr 4 13:38:10.187: INFO: Node iruya-worker2 is running more than one daemon pod Apr 4 13:38:11.183: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:38:11.186: INFO: Number of nodes with available pods: 1 Apr 4 13:38:11.186: INFO: Node iruya-worker2 is running more than one daemon pod Apr 4 13:38:12.184: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:38:12.187: INFO: Number of nodes with available pods: 1 Apr 4 13:38:12.187: INFO: Node iruya-worker2 is running more than one daemon pod Apr 4 13:38:13.183: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:38:13.187: INFO: Number of nodes with available pods: 1 Apr 4 13:38:13.187: INFO: Node iruya-worker2 is running more than one daemon pod Apr 4 13:38:14.183: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:38:14.187: INFO: Number of nodes with available pods: 1 Apr 4 13:38:14.187: INFO: Node iruya-worker2 is running more than one daemon pod Apr 4 13:38:15.184: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:38:15.187: INFO: Number of nodes with available pods: 1 Apr 4 13:38:15.187: INFO: Node iruya-worker2 is running more than one daemon pod Apr 4 13:38:16.183: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 13:38:16.187: INFO: Number of nodes with available pods: 2 Apr 4 13:38:16.187: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2152, will wait for the garbage collector to delete the pods Apr 4 13:38:16.251: INFO: Deleting DaemonSet.extensions daemon-set took: 7.481303ms Apr 4 13:38:16.551: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.302789ms Apr 4 13:38:22.255: INFO: Number of nodes with available pods: 0 Apr 4 13:38:22.255: INFO: Number of running nodes: 0, number of available pods: 0 Apr 4 13:38:22.258: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2152/daemonsets","resourceVersion":"3591142"},"items":null} Apr 4 13:38:22.261: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2152/pods","resourceVersion":"3591142"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:38:22.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2152" for this suite. 
Apr 4 13:38:28.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:38:28.367: INFO: namespace daemonsets-2152 deletion completed in 6.092712983s • [SLOW TEST:34.357 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:38:28.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-7671, will wait for the garbage collector to delete the pods Apr 4 13:38:32.489: INFO: Deleting Job.batch foo took: 6.669689ms Apr 4 13:38:32.590: INFO: Terminating Job.batch foo pods took: 100.270296ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:39:12.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7671" for this suite. 
Apr 4 13:39:18.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:39:18.396: INFO: namespace job-7671 deletion completed in 6.098437868s • [SLOW TEST:50.027 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:39:18.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-bf8bbcb7-c1a9-455b-9f45-0d65629216c0 STEP: Creating a pod to test consume configMaps Apr 4 13:39:18.456: INFO: Waiting up to 5m0s for pod "pod-configmaps-387168af-89c7-4f86-977a-914722ccb07d" in namespace "configmap-8934" to be "success or failure" Apr 4 13:39:18.465: INFO: Pod "pod-configmaps-387168af-89c7-4f86-977a-914722ccb07d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.688864ms Apr 4 13:39:20.468: INFO: Pod "pod-configmaps-387168af-89c7-4f86-977a-914722ccb07d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.011944513s Apr 4 13:39:22.472: INFO: Pod "pod-configmaps-387168af-89c7-4f86-977a-914722ccb07d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016242106s STEP: Saw pod success Apr 4 13:39:22.472: INFO: Pod "pod-configmaps-387168af-89c7-4f86-977a-914722ccb07d" satisfied condition "success or failure" Apr 4 13:39:22.475: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-387168af-89c7-4f86-977a-914722ccb07d container configmap-volume-test: STEP: delete the pod Apr 4 13:39:22.511: INFO: Waiting for pod pod-configmaps-387168af-89c7-4f86-977a-914722ccb07d to disappear Apr 4 13:39:22.521: INFO: Pod pod-configmaps-387168af-89c7-4f86-977a-914722ccb07d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:39:22.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8934" for this suite. Apr 4 13:39:28.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:39:28.687: INFO: namespace configmap-8934 deletion completed in 6.162478758s • [SLOW TEST:10.290 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 
13:39:28.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-5334 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 4 13:39:28.743: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 4 13:39:46.851: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.244 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5334 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 13:39:46.851: INFO: >>> kubeConfig: /root/.kube/config I0404 13:39:46.883107 6 log.go:172] (0xc002f069a0) (0xc0010e5f40) Create stream I0404 13:39:46.883134 6 log.go:172] (0xc002f069a0) (0xc0010e5f40) Stream added, broadcasting: 1 I0404 13:39:46.885283 6 log.go:172] (0xc002f069a0) Reply frame received for 1 I0404 13:39:46.885328 6 log.go:172] (0xc002f069a0) (0xc00149e640) Create stream I0404 13:39:46.885343 6 log.go:172] (0xc002f069a0) (0xc00149e640) Stream added, broadcasting: 3 I0404 13:39:46.886322 6 log.go:172] (0xc002f069a0) Reply frame received for 3 I0404 13:39:46.886364 6 log.go:172] (0xc002f069a0) (0xc00228e640) Create stream I0404 13:39:46.886382 6 log.go:172] (0xc002f069a0) (0xc00228e640) Stream added, broadcasting: 5 I0404 13:39:46.887206 6 log.go:172] (0xc002f069a0) Reply frame received for 5 I0404 13:39:47.974574 6 log.go:172] (0xc002f069a0) Data frame received for 3 I0404 13:39:47.974703 6 log.go:172] (0xc00149e640) (3) Data frame handling I0404 13:39:47.974740 6 log.go:172] (0xc002f069a0) Data frame received for 5 I0404 
13:39:47.974790 6 log.go:172] (0xc00228e640) (5) Data frame handling I0404 13:39:47.974820 6 log.go:172] (0xc00149e640) (3) Data frame sent I0404 13:39:47.974836 6 log.go:172] (0xc002f069a0) Data frame received for 3 I0404 13:39:47.974854 6 log.go:172] (0xc00149e640) (3) Data frame handling I0404 13:39:47.976831 6 log.go:172] (0xc002f069a0) Data frame received for 1 I0404 13:39:47.976854 6 log.go:172] (0xc0010e5f40) (1) Data frame handling I0404 13:39:47.976867 6 log.go:172] (0xc0010e5f40) (1) Data frame sent I0404 13:39:47.976899 6 log.go:172] (0xc002f069a0) (0xc0010e5f40) Stream removed, broadcasting: 1 I0404 13:39:47.976918 6 log.go:172] (0xc002f069a0) Go away received I0404 13:39:47.977074 6 log.go:172] (0xc002f069a0) (0xc0010e5f40) Stream removed, broadcasting: 1 I0404 13:39:47.977132 6 log.go:172] (0xc002f069a0) (0xc00149e640) Stream removed, broadcasting: 3 I0404 13:39:47.977153 6 log.go:172] (0xc002f069a0) (0xc00228e640) Stream removed, broadcasting: 5 Apr 4 13:39:47.977: INFO: Found all expected endpoints: [netserver-0] Apr 4 13:39:47.982: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.123 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5334 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 13:39:47.982: INFO: >>> kubeConfig: /root/.kube/config I0404 13:39:48.006549 6 log.go:172] (0xc00226ebb0) (0xc00149ef00) Create stream I0404 13:39:48.006573 6 log.go:172] (0xc00226ebb0) (0xc00149ef00) Stream added, broadcasting: 1 I0404 13:39:48.010619 6 log.go:172] (0xc00226ebb0) Reply frame received for 1 I0404 13:39:48.010709 6 log.go:172] (0xc00226ebb0) (0xc00149efa0) Create stream I0404 13:39:48.010752 6 log.go:172] (0xc00226ebb0) (0xc00149efa0) Stream added, broadcasting: 3 I0404 13:39:48.012918 6 log.go:172] (0xc00226ebb0) Reply frame received for 3 I0404 13:39:48.012981 6 log.go:172] (0xc00226ebb0) (0xc00228e780) Create stream I0404 
13:39:48.012995 6 log.go:172] (0xc00226ebb0) (0xc00228e780) Stream added, broadcasting: 5 I0404 13:39:48.015232 6 log.go:172] (0xc00226ebb0) Reply frame received for 5 I0404 13:39:49.076036 6 log.go:172] (0xc00226ebb0) Data frame received for 5 I0404 13:39:49.076077 6 log.go:172] (0xc00228e780) (5) Data frame handling I0404 13:39:49.076116 6 log.go:172] (0xc00226ebb0) Data frame received for 3 I0404 13:39:49.076144 6 log.go:172] (0xc00149efa0) (3) Data frame handling I0404 13:39:49.076181 6 log.go:172] (0xc00149efa0) (3) Data frame sent I0404 13:39:49.076196 6 log.go:172] (0xc00226ebb0) Data frame received for 3 I0404 13:39:49.076211 6 log.go:172] (0xc00149efa0) (3) Data frame handling I0404 13:39:49.078233 6 log.go:172] (0xc00226ebb0) Data frame received for 1 I0404 13:39:49.078244 6 log.go:172] (0xc00149ef00) (1) Data frame handling I0404 13:39:49.078250 6 log.go:172] (0xc00149ef00) (1) Data frame sent I0404 13:39:49.078587 6 log.go:172] (0xc00226ebb0) (0xc00149ef00) Stream removed, broadcasting: 1 I0404 13:39:49.078625 6 log.go:172] (0xc00226ebb0) Go away received I0404 13:39:49.078699 6 log.go:172] (0xc00226ebb0) (0xc00149ef00) Stream removed, broadcasting: 1 I0404 13:39:49.078743 6 log.go:172] (0xc00226ebb0) (0xc00149efa0) Stream removed, broadcasting: 3 I0404 13:39:49.078765 6 log.go:172] (0xc00226ebb0) (0xc00228e780) Stream removed, broadcasting: 5 Apr 4 13:39:49.078: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:39:49.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5334" for this suite. 
Apr 4 13:40:13.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:40:13.180: INFO: namespace pod-network-test-5334 deletion completed in 24.09668408s • [SLOW TEST:44.493 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:40:13.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 4 13:40:13.900: INFO: Pod name wrapped-volume-race-e7ceac5c-3acb-4049-8624-b381efd144a4: Found 0 pods out of 5 Apr 4 13:40:18.908: INFO: Pod name wrapped-volume-race-e7ceac5c-3acb-4049-8624-b381efd144a4: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-e7ceac5c-3acb-4049-8624-b381efd144a4 in namespace emptydir-wrapper-7897, will 
wait for the garbage collector to delete the pods Apr 4 13:40:30.996: INFO: Deleting ReplicationController wrapped-volume-race-e7ceac5c-3acb-4049-8624-b381efd144a4 took: 7.448754ms Apr 4 13:40:31.297: INFO: Terminating ReplicationController wrapped-volume-race-e7ceac5c-3acb-4049-8624-b381efd144a4 pods took: 300.2771ms STEP: Creating RC which spawns configmap-volume pods Apr 4 13:41:13.255: INFO: Pod name wrapped-volume-race-ad7463f8-c3c5-44ec-aeb7-4e8e80414e1d: Found 0 pods out of 5 Apr 4 13:41:18.263: INFO: Pod name wrapped-volume-race-ad7463f8-c3c5-44ec-aeb7-4e8e80414e1d: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ad7463f8-c3c5-44ec-aeb7-4e8e80414e1d in namespace emptydir-wrapper-7897, will wait for the garbage collector to delete the pods Apr 4 13:41:32.343: INFO: Deleting ReplicationController wrapped-volume-race-ad7463f8-c3c5-44ec-aeb7-4e8e80414e1d took: 5.208003ms Apr 4 13:41:32.643: INFO: Terminating ReplicationController wrapped-volume-race-ad7463f8-c3c5-44ec-aeb7-4e8e80414e1d pods took: 300.258987ms STEP: Creating RC which spawns configmap-volume pods Apr 4 13:42:13.272: INFO: Pod name wrapped-volume-race-6ff4cf55-5883-40ea-aaa9-945b95d7b319: Found 0 pods out of 5 Apr 4 13:42:18.281: INFO: Pod name wrapped-volume-race-6ff4cf55-5883-40ea-aaa9-945b95d7b319: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-6ff4cf55-5883-40ea-aaa9-945b95d7b319 in namespace emptydir-wrapper-7897, will wait for the garbage collector to delete the pods Apr 4 13:42:32.363: INFO: Deleting ReplicationController wrapped-volume-race-6ff4cf55-5883-40ea-aaa9-945b95d7b319 took: 7.640994ms Apr 4 13:42:32.663: INFO: Terminating ReplicationController wrapped-volume-race-6ff4cf55-5883-40ea-aaa9-945b95d7b319 pods took: 300.309904ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:43:13.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7897" for this suite. Apr 4 13:43:21.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:43:21.868: INFO: namespace emptydir-wrapper-7897 deletion completed in 8.097070732s • [SLOW TEST:188.688 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:43:21.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 4 13:43:21.921: INFO: PodSpec: initContainers in spec.initContainers Apr 4 
13:44:12.142: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-39b5d05d-9693-47bb-acdb-18df4e0227cc", GenerateName:"", Namespace:"init-container-428", SelfLink:"/api/v1/namespaces/init-container-428/pods/pod-init-39b5d05d-9693-47bb-acdb-18df4e0227cc", UID:"9b8d6ea7-b85e-40bc-91b1-161093771a22", ResourceVersion:"3592859", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721604601, loc:(*time.Location)(0x7ea78c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"921948671"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-r2vc8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00204c8c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r2vc8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r2vc8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r2vc8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00268cfe8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc002aae0c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00268d080)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00268d0a0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00268d0a8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00268d0ac), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721604602, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721604602, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721604602, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721604601, loc:(*time.Location)(0x7ea78c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"10.244.1.124", StartTime:(*v1.Time)(0xc002521100), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002521140), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001c9b340)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://2f2af11db97bfac35678beb436a4cb65360eabd52a90cfbc9214727dfd6c76e9"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002521160), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002521120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:44:12.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-428" for this suite. Apr 4 13:44:34.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:44:34.254: INFO: namespace init-container-428 deletion completed in 22.094773337s • [SLOW TEST:72.385 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:44:34.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-7686 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7686 to expose endpoints map[] Apr 4 13:44:34.357: INFO: Get endpoints failed (9.173229ms elapsed, ignoring for 5s): 
endpoints "multi-endpoint-test" not found Apr 4 13:44:35.361: INFO: successfully validated that service multi-endpoint-test in namespace services-7686 exposes endpoints map[] (1.012682501s elapsed) STEP: Creating pod pod1 in namespace services-7686 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7686 to expose endpoints map[pod1:[100]] Apr 4 13:44:38.402: INFO: successfully validated that service multi-endpoint-test in namespace services-7686 exposes endpoints map[pod1:[100]] (3.034604168s elapsed) STEP: Creating pod pod2 in namespace services-7686 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7686 to expose endpoints map[pod1:[100] pod2:[101]] Apr 4 13:44:41.497: INFO: successfully validated that service multi-endpoint-test in namespace services-7686 exposes endpoints map[pod1:[100] pod2:[101]] (3.091631795s elapsed) STEP: Deleting pod pod1 in namespace services-7686 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7686 to expose endpoints map[pod2:[101]] Apr 4 13:44:42.553: INFO: successfully validated that service multi-endpoint-test in namespace services-7686 exposes endpoints map[pod2:[101]] (1.050354672s elapsed) STEP: Deleting pod pod2 in namespace services-7686 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7686 to expose endpoints map[] Apr 4 13:44:43.627: INFO: successfully validated that service multi-endpoint-test in namespace services-7686 exposes endpoints map[] (1.068816886s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:44:43.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7686" for this suite. 
Apr 4 13:45:05.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:45:05.786: INFO: namespace services-7686 deletion completed in 22.111248056s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:31.531 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:45:05.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:45:05.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubelet-test-5623" for this suite. Apr 4 13:45:11.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:45:12.002: INFO: namespace kubelet-test-5623 deletion completed in 6.090819304s • [SLOW TEST:6.215 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:45:12.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:45:17.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6528" for this suite. 
Apr 4 13:45:23.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:45:23.708: INFO: namespace watch-6528 deletion completed in 6.200018093s • [SLOW TEST:11.706 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:45:23.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Apr 4 13:45:23.788: INFO: Waiting up to 5m0s for pod "client-containers-2b34909f-8a58-44af-8470-6f5a8cbeb502" in namespace "containers-5285" to be "success or failure" Apr 4 13:45:23.795: INFO: Pod "client-containers-2b34909f-8a58-44af-8470-6f5a8cbeb502": Phase="Pending", Reason="", readiness=false. Elapsed: 7.285738ms Apr 4 13:45:25.851: INFO: Pod "client-containers-2b34909f-8a58-44af-8470-6f5a8cbeb502": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.063373084s Apr 4 13:45:27.855: INFO: Pod "client-containers-2b34909f-8a58-44af-8470-6f5a8cbeb502": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067406216s STEP: Saw pod success Apr 4 13:45:27.855: INFO: Pod "client-containers-2b34909f-8a58-44af-8470-6f5a8cbeb502" satisfied condition "success or failure" Apr 4 13:45:27.858: INFO: Trying to get logs from node iruya-worker2 pod client-containers-2b34909f-8a58-44af-8470-6f5a8cbeb502 container test-container: STEP: delete the pod Apr 4 13:45:27.875: INFO: Waiting for pod client-containers-2b34909f-8a58-44af-8470-6f5a8cbeb502 to disappear Apr 4 13:45:27.879: INFO: Pod client-containers-2b34909f-8a58-44af-8470-6f5a8cbeb502 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:45:27.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5285" for this suite. Apr 4 13:45:33.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:45:33.987: INFO: namespace containers-5285 deletion completed in 6.104326021s • [SLOW TEST:10.278 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: 
Creating a kubernetes client Apr 4 13:45:33.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Apr 4 13:45:34.051: INFO: Waiting up to 5m0s for pod "pod-271aae56-ed14-4743-8cbf-122bc9e9d40c" in namespace "emptydir-6281" to be "success or failure" Apr 4 13:45:34.054: INFO: Pod "pod-271aae56-ed14-4743-8cbf-122bc9e9d40c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.92158ms Apr 4 13:45:36.058: INFO: Pod "pod-271aae56-ed14-4743-8cbf-122bc9e9d40c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006888142s Apr 4 13:45:38.062: INFO: Pod "pod-271aae56-ed14-4743-8cbf-122bc9e9d40c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011186362s STEP: Saw pod success Apr 4 13:45:38.063: INFO: Pod "pod-271aae56-ed14-4743-8cbf-122bc9e9d40c" satisfied condition "success or failure" Apr 4 13:45:38.066: INFO: Trying to get logs from node iruya-worker pod pod-271aae56-ed14-4743-8cbf-122bc9e9d40c container test-container: STEP: delete the pod Apr 4 13:45:38.101: INFO: Waiting for pod pod-271aae56-ed14-4743-8cbf-122bc9e9d40c to disappear Apr 4 13:45:38.121: INFO: Pod pod-271aae56-ed14-4743-8cbf-122bc9e9d40c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:45:38.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6281" for this suite. 
Apr 4 13:45:44.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:45:44.211: INFO: namespace emptydir-6281 deletion completed in 6.087116076s
• [SLOW TEST:10.224 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:45:44.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-7a9ba860-e33e-4e6b-a3d7-e7f3a2363146
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-7a9ba860-e33e-4e6b-a3d7-e7f3a2363146
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:45:50.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7222" for this suite.
Apr 4 13:46:12.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:46:12.445: INFO: namespace configmap-7222 deletion completed in 22.075001115s
• [SLOW TEST:28.233 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:46:12.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 4 13:46:12.522: INFO: Waiting up to 5m0s for pod "pod-efebd924-8b1a-4854-8203-0bb9a562bb93" in namespace "emptydir-676" to be "success or failure"
Apr 4 13:46:12.527: INFO: Pod "pod-efebd924-8b1a-4854-8203-0bb9a562bb93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.489912ms
Apr 4 13:46:14.547: INFO: Pod "pod-efebd924-8b1a-4854-8203-0bb9a562bb93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024513259s
Apr 4 13:46:16.551: INFO: Pod "pod-efebd924-8b1a-4854-8203-0bb9a562bb93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028310951s
Apr 4 13:46:18.555: INFO: Pod "pod-efebd924-8b1a-4854-8203-0bb9a562bb93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032490052s
STEP: Saw pod success
Apr 4 13:46:18.555: INFO: Pod "pod-efebd924-8b1a-4854-8203-0bb9a562bb93" satisfied condition "success or failure"
Apr 4 13:46:18.558: INFO: Trying to get logs from node iruya-worker pod pod-efebd924-8b1a-4854-8203-0bb9a562bb93 container test-container:
STEP: delete the pod
Apr 4 13:46:19.438: INFO: Waiting for pod pod-efebd924-8b1a-4854-8203-0bb9a562bb93 to disappear
Apr 4 13:46:19.677: INFO: Pod pod-efebd924-8b1a-4854-8203-0bb9a562bb93 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:46:19.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-676" for this suite.
Apr 4 13:46:25.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:46:26.038: INFO: namespace emptydir-676 deletion completed in 6.208896127s
• [SLOW TEST:13.592 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected secret
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:46:26.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-f7ed08b0-d5d2-4d40-b846-46ac8ec14bf4
STEP: Creating a pod to test consume secrets
Apr 4 13:46:26.531: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a2ef5cac-fc70-43fe-8dc5-5f590aba4aee" in namespace "projected-6717" to be "success or failure"
Apr 4 13:46:26.598: INFO: Pod "pod-projected-secrets-a2ef5cac-fc70-43fe-8dc5-5f590aba4aee": Phase="Pending", Reason="", readiness=false. Elapsed: 66.846048ms
Apr 4 13:46:28.602: INFO: Pod "pod-projected-secrets-a2ef5cac-fc70-43fe-8dc5-5f590aba4aee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070767521s
Apr 4 13:46:30.605: INFO: Pod "pod-projected-secrets-a2ef5cac-fc70-43fe-8dc5-5f590aba4aee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073905525s
STEP: Saw pod success
Apr 4 13:46:30.605: INFO: Pod "pod-projected-secrets-a2ef5cac-fc70-43fe-8dc5-5f590aba4aee" satisfied condition "success or failure"
Apr 4 13:46:30.607: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-a2ef5cac-fc70-43fe-8dc5-5f590aba4aee container projected-secret-volume-test:
STEP: delete the pod
Apr 4 13:46:30.759: INFO: Waiting for pod pod-projected-secrets-a2ef5cac-fc70-43fe-8dc5-5f590aba4aee to disappear
Apr 4 13:46:30.768: INFO: Pod pod-projected-secrets-a2ef5cac-fc70-43fe-8dc5-5f590aba4aee no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:46:30.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6717" for this suite.
Apr 4 13:46:36.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:46:36.860: INFO: namespace projected-6717 deletion completed in 6.089615919s
• [SLOW TEST:10.822 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:46:36.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-rtvs
STEP: Creating a pod to test atomic-volume-subpath
Apr 4 13:46:36.938: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-rtvs" in namespace "subpath-3177" to be "success or failure"
Apr 4 13:46:36.942: INFO: Pod "pod-subpath-test-secret-rtvs": Phase="Pending", Reason="", readiness=false. Elapsed: 3.402904ms
Apr 4 13:46:38.946: INFO: Pod "pod-subpath-test-secret-rtvs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007515771s
Apr 4 13:46:40.950: INFO: Pod "pod-subpath-test-secret-rtvs": Phase="Running", Reason="", readiness=true. Elapsed: 4.011463682s
Apr 4 13:46:42.954: INFO: Pod "pod-subpath-test-secret-rtvs": Phase="Running", Reason="", readiness=true. Elapsed: 6.015625515s
Apr 4 13:46:44.958: INFO: Pod "pod-subpath-test-secret-rtvs": Phase="Running", Reason="", readiness=true. Elapsed: 8.019985828s
Apr 4 13:46:46.962: INFO: Pod "pod-subpath-test-secret-rtvs": Phase="Running", Reason="", readiness=true. Elapsed: 10.024036183s
Apr 4 13:46:48.966: INFO: Pod "pod-subpath-test-secret-rtvs": Phase="Running", Reason="", readiness=true. Elapsed: 12.028276598s
Apr 4 13:46:50.970: INFO: Pod "pod-subpath-test-secret-rtvs": Phase="Running", Reason="", readiness=true. Elapsed: 14.03225199s
Apr 4 13:46:52.975: INFO: Pod "pod-subpath-test-secret-rtvs": Phase="Running", Reason="", readiness=true. Elapsed: 16.036449275s
Apr 4 13:46:54.979: INFO: Pod "pod-subpath-test-secret-rtvs": Phase="Running", Reason="", readiness=true. Elapsed: 18.040537904s
Apr 4 13:46:56.983: INFO: Pod "pod-subpath-test-secret-rtvs": Phase="Running", Reason="", readiness=true. Elapsed: 20.044949338s
Apr 4 13:46:58.987: INFO: Pod "pod-subpath-test-secret-rtvs": Phase="Running", Reason="", readiness=true. Elapsed: 22.049218146s
Apr 4 13:47:00.991: INFO: Pod "pod-subpath-test-secret-rtvs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.053292419s
STEP: Saw pod success
Apr 4 13:47:00.991: INFO: Pod "pod-subpath-test-secret-rtvs" satisfied condition "success or failure"
Apr 4 13:47:00.994: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-rtvs container test-container-subpath-secret-rtvs:
STEP: delete the pod
Apr 4 13:47:01.014: INFO: Waiting for pod pod-subpath-test-secret-rtvs to disappear
Apr 4 13:47:01.031: INFO: Pod pod-subpath-test-secret-rtvs no longer exists
STEP: Deleting pod pod-subpath-test-secret-rtvs
Apr 4 13:47:01.031: INFO: Deleting pod "pod-subpath-test-secret-rtvs" in namespace "subpath-3177"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:47:01.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3177" for this suite.
Apr 4 13:47:07.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:47:07.131: INFO: namespace subpath-3177 deletion completed in 6.09510275s
• [SLOW TEST:30.271 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:47:07.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:47:07.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7063" for this suite.
Apr 4 13:47:29.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:47:29.331: INFO: namespace pods-7063 deletion completed in 22.108721912s
• [SLOW TEST:22.200 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:47:29.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:47:35.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4209" for this suite.
Apr 4 13:47:41.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:47:41.678: INFO: namespace namespaces-4209 deletion completed in 6.120946436s
STEP: Destroying namespace "nsdeletetest-5657" for this suite.
Apr 4 13:47:41.680: INFO: Namespace nsdeletetest-5657 was already deleted
STEP: Destroying namespace "nsdeletetest-7719" for this suite.
Apr 4 13:47:47.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:47:47.774: INFO: namespace nsdeletetest-7719 deletion completed in 6.094126941s
• [SLOW TEST:18.443 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected downwardAPI
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:47:47.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 4 13:47:47.832: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d60de4d-2e15-48b7-9adf-dd14859ecb1d" in namespace "projected-325" to be "success or failure"
Apr 4 13:47:47.835: INFO: Pod "downwardapi-volume-9d60de4d-2e15-48b7-9adf-dd14859ecb1d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.130614ms
Apr 4 13:47:49.839: INFO: Pod "downwardapi-volume-9d60de4d-2e15-48b7-9adf-dd14859ecb1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007211828s
Apr 4 13:47:51.844: INFO: Pod "downwardapi-volume-9d60de4d-2e15-48b7-9adf-dd14859ecb1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0116063s
STEP: Saw pod success
Apr 4 13:47:51.844: INFO: Pod "downwardapi-volume-9d60de4d-2e15-48b7-9adf-dd14859ecb1d" satisfied condition "success or failure"
Apr 4 13:47:51.846: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-9d60de4d-2e15-48b7-9adf-dd14859ecb1d container client-container:
STEP: delete the pod
Apr 4 13:47:51.860: INFO: Waiting for pod downwardapi-volume-9d60de4d-2e15-48b7-9adf-dd14859ecb1d to disappear
Apr 4 13:47:51.870: INFO: Pod downwardapi-volume-9d60de4d-2e15-48b7-9adf-dd14859ecb1d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:47:51.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-325" for this suite.
Apr 4 13:47:57.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:47:57.965: INFO: namespace projected-325 deletion completed in 6.090956299s
• [SLOW TEST:10.190 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial]
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:47:57.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 4 13:47:58.032: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Apr 4 13:47:58.101: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:47:58.111: INFO: Number of nodes with available pods: 0
Apr 4 13:47:58.111: INFO: Node iruya-worker is running more than one daemon pod
Apr 4 13:47:59.115: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:47:59.119: INFO: Number of nodes with available pods: 0
Apr 4 13:47:59.119: INFO: Node iruya-worker is running more than one daemon pod
Apr 4 13:48:00.208: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:48:00.211: INFO: Number of nodes with available pods: 0
Apr 4 13:48:00.211: INFO: Node iruya-worker is running more than one daemon pod
Apr 4 13:48:01.121: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:48:01.124: INFO: Number of nodes with available pods: 0
Apr 4 13:48:01.124: INFO: Node iruya-worker is running more than one daemon pod
Apr 4 13:48:02.116: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:48:02.122: INFO: Number of nodes with available pods: 2
Apr 4 13:48:02.122: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Apr 4 13:48:02.189: INFO: Wrong image for pod: daemon-set-7xd4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:02.189: INFO: Wrong image for pod: daemon-set-j7whm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:02.201: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:48:03.206: INFO: Wrong image for pod: daemon-set-7xd4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:03.206: INFO: Wrong image for pod: daemon-set-j7whm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:03.211: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:48:04.205: INFO: Wrong image for pod: daemon-set-7xd4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:04.205: INFO: Wrong image for pod: daemon-set-j7whm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:04.209: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:48:05.206: INFO: Wrong image for pod: daemon-set-7xd4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:05.206: INFO: Pod daemon-set-7xd4f is not available
Apr 4 13:48:05.206: INFO: Wrong image for pod: daemon-set-j7whm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:05.211: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:48:06.244: INFO: Wrong image for pod: daemon-set-7xd4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:06.244: INFO: Pod daemon-set-7xd4f is not available
Apr 4 13:48:06.244: INFO: Wrong image for pod: daemon-set-j7whm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:06.255: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:48:07.206: INFO: Wrong image for pod: daemon-set-7xd4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:07.206: INFO: Pod daemon-set-7xd4f is not available
Apr 4 13:48:07.206: INFO: Wrong image for pod: daemon-set-j7whm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:07.210: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:48:08.206: INFO: Wrong image for pod: daemon-set-7xd4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:08.206: INFO: Pod daemon-set-7xd4f is not available
Apr 4 13:48:08.206: INFO: Wrong image for pod: daemon-set-j7whm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:08.210: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:48:09.206: INFO: Wrong image for pod: daemon-set-7xd4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:09.206: INFO: Pod daemon-set-7xd4f is not available
Apr 4 13:48:09.206: INFO: Wrong image for pod: daemon-set-j7whm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:09.209: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:48:10.206: INFO: Wrong image for pod: daemon-set-7xd4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:10.206: INFO: Pod daemon-set-7xd4f is not available
Apr 4 13:48:10.206: INFO: Wrong image for pod: daemon-set-j7whm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:10.209: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:48:11.206: INFO: Wrong image for pod: daemon-set-7xd4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:11.206: INFO: Pod daemon-set-7xd4f is not available
Apr 4 13:48:11.206: INFO: Wrong image for pod: daemon-set-j7whm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:11.211: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:48:12.205: INFO: Pod daemon-set-79v4z is not available
Apr 4 13:48:12.205: INFO: Wrong image for pod: daemon-set-j7whm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:12.208: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:48:13.213: INFO: Pod daemon-set-79v4z is not available
Apr 4 13:48:13.213: INFO: Wrong image for pod: daemon-set-j7whm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:13.217: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:48:14.205: INFO: Pod daemon-set-79v4z is not available
Apr 4 13:48:14.205: INFO: Wrong image for pod: daemon-set-j7whm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:14.210: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:48:15.219: INFO: Wrong image for pod: daemon-set-j7whm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:15.223: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:48:16.205: INFO: Wrong image for pod: daemon-set-j7whm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 4 13:48:16.206: INFO: Pod daemon-set-j7whm is not available
Apr 4 13:48:16.209: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:48:17.206: INFO: Pod daemon-set-q2d9w is not available
Apr 4 13:48:17.211: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Apr 4 13:48:17.215: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:48:17.218: INFO: Number of nodes with available pods: 1
Apr 4 13:48:17.218: INFO: Node iruya-worker is running more than one daemon pod
Apr 4 13:48:18.239: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:48:18.242: INFO: Number of nodes with available pods: 1
Apr 4 13:48:18.242: INFO: Node iruya-worker is running more than one daemon pod
Apr 4 13:48:19.223: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:48:19.227: INFO: Number of nodes with available pods: 1
Apr 4 13:48:19.227: INFO: Node iruya-worker is running more than one daemon pod
Apr 4 13:48:20.222: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:48:20.225: INFO: Number of nodes with available pods: 2
Apr 4 13:48:20.225: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7737, will wait for the garbage collector to delete the pods
Apr 4 13:48:20.334: INFO: Deleting DaemonSet.extensions daemon-set took: 19.091479ms
Apr 4 13:48:20.634: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.27219ms
Apr 4 13:48:32.279: INFO: Number of nodes with available pods: 0
Apr 4 13:48:32.279: INFO: Number of running nodes: 0, number of available pods: 0
Apr 4 13:48:32.282: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7737/daemonsets","resourceVersion":"3593869"},"items":null}
Apr 4 13:48:32.284: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7737/pods","resourceVersion":"3593869"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:48:32.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7737" for this suite.
Apr 4 13:48:38.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:48:38.389: INFO: namespace daemonsets-7737 deletion completed in 6.091568922s
• [SLOW TEST:40.425 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:48:38.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 4 13:48:42.513: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:48:42.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-966" for this suite.
Apr 4 13:48:48.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:48:48.648: INFO: namespace container-runtime-966 deletion completed in 6.101745218s
• [SLOW TEST:10.258 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:48:48.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-fa6edd56-2d9c-468d-840a-a1e69ee785c5
STEP: Creating a pod to test consume configMaps
Apr 4 13:48:48.742: INFO: Waiting up to 5m0s for pod "pod-configmaps-1c7b4c4a-c863-4de0-8dad-07b435ef2eb8" in namespace "configmap-247" to be "success or failure"
Apr 4 13:48:48.746: INFO: Pod "pod-configmaps-1c7b4c4a-c863-4de0-8dad-07b435ef2eb8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.673436ms
Apr 4 13:48:50.750: INFO: Pod "pod-configmaps-1c7b4c4a-c863-4de0-8dad-07b435ef2eb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007589605s
Apr 4 13:48:52.754: INFO: Pod "pod-configmaps-1c7b4c4a-c863-4de0-8dad-07b435ef2eb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011791671s
STEP: Saw pod success
Apr 4 13:48:52.754: INFO: Pod "pod-configmaps-1c7b4c4a-c863-4de0-8dad-07b435ef2eb8" satisfied condition "success or failure"
Apr 4 13:48:52.757: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-1c7b4c4a-c863-4de0-8dad-07b435ef2eb8 container configmap-volume-test:
STEP: delete the pod
Apr 4 13:48:52.786: INFO: Waiting for pod pod-configmaps-1c7b4c4a-c863-4de0-8dad-07b435ef2eb8 to disappear
Apr 4 13:48:52.800: INFO: Pod pod-configmaps-1c7b4c4a-c863-4de0-8dad-07b435ef2eb8 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:48:52.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-247" for this suite.
Apr 4 13:48:58.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:48:58.905: INFO: namespace configmap-247 deletion completed in 6.101967752s
• [SLOW TEST:10.257 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:48:58.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 4 13:49:02.026: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:49:02.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9213" for this suite.
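The two termination-message specs above both exercise the container-level `terminationMessagePath` and `terminationMessagePolicy` fields. A minimal sketch of the kind of pod such a test creates (the name, image, and command here are illustrative, not the suite's actual manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # Writes "OK" to the default termination message file and exits 0.
    command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    # FallbackToLogsOnError only substitutes the tail of the container log
    # when the container exits with an error AND the message file is empty;
    # on a successful exit with an empty file, the message stays empty.
    terminationMessagePolicy: FallbackToLogsOnError
```

That is why the first spec sees `OK` while the second, whose container writes nothing and succeeds, asserts an empty termination message.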
Apr 4 13:49:08.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:49:08.178: INFO: namespace container-runtime-9213 deletion completed in 6.086961701s
• [SLOW TEST:9.272 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:49:08.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:49:12.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5535" for this suite.
Apr 4 13:49:54.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:49:54.384: INFO: namespace kubelet-test-5535 deletion completed in 42.109728456s
• [SLOW TEST:46.206 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:49:54.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Apr 4 13:49:54.461: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Apr 4 13:49:55.174: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Apr 4 13:49:57.322: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721604995, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721604995, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721604995, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721604995, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 4 13:49:59.958: INFO: Waited 626.994426ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:50:00.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-2493" for this suite.
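"Registering the sample API server" above means creating an APIService object that tells the aggregation layer to proxy an API group/version to an in-cluster Service. A rough sketch of such a registration (the group, version, and service names here are illustrative, not the exact objects this test creates):

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  # The name must be <version>.<group>.
  name: v1alpha1.wardle.example.com   # illustrative group/version
spec:
  group: wardle.example.com
  version: v1alpha1
  service:
    name: sample-api        # Service fronting the sample-apiserver Deployment
    namespace: default
  insecureSkipTLSVerify: true   # a caBundle would be used outside of tests
  groupPriorityMinimum: 1000
  versionPriority: 15
```

Once the backing Deployment reports Available (the "MinimumReplicasUnavailable" condition above clearing), kube-apiserver starts routing requests for that group to the aggregated server.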
Apr 4 13:50:06.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:50:06.604: INFO: namespace aggregator-2493 deletion completed in 6.211241527s
• [SLOW TEST:12.219 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:50:06.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 4 13:50:14.707: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 4 13:50:14.728: INFO: Pod pod-with-prestop-http-hook still exists
Apr 4 13:50:16.728: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 4 13:50:16.732: INFO: Pod pod-with-prestop-http-hook still exists
Apr 4 13:50:18.728: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 4 13:50:18.732: INFO: Pod pod-with-prestop-http-hook still exists
Apr 4 13:50:20.728: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 4 13:50:20.732: INFO: Pod pod-with-prestop-http-hook still exists
Apr 4 13:50:22.728: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 4 13:50:22.732: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:50:22.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-673" for this suite.
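The pod-with-prestop-http-hook pod above carries a `preStop` lifecycle hook of the `httpGet` kind: on deletion, the kubelet issues the GET against the handler pod before sending SIGTERM, which is why the pod lingers for several polling rounds before disappearing. A minimal sketch of such a spec (the image, handler path, and port are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine   # illustrative image
    lifecycle:
      preStop:
        httpGet:
          path: /echo     # illustrative handler path on the hook-target pod
          port: 8080
```

The "check prestop hook" step then verifies that the handler container actually received the request.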
Apr 4 13:50:44.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:50:44.825: INFO: namespace container-lifecycle-hook-673 deletion completed in 22.083838923s
• [SLOW TEST:38.221 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:50:44.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Apr 4 13:50:44.982: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:50:44.987: INFO: Number of nodes with available pods: 0
Apr 4 13:50:44.987: INFO: Node iruya-worker is running more than one daemon pod
Apr 4 13:50:45.992: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:50:45.996: INFO: Number of nodes with available pods: 0
Apr 4 13:50:45.996: INFO: Node iruya-worker is running more than one daemon pod
Apr 4 13:50:46.992: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:50:46.996: INFO: Number of nodes with available pods: 0
Apr 4 13:50:46.996: INFO: Node iruya-worker is running more than one daemon pod
Apr 4 13:50:47.992: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:50:47.996: INFO: Number of nodes with available pods: 1
Apr 4 13:50:47.996: INFO: Node iruya-worker is running more than one daemon pod
Apr 4 13:50:48.992: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:50:48.996: INFO: Number of nodes with available pods: 2
Apr 4 13:50:48.996: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Apr 4 13:50:49.013: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 13:50:49.018: INFO: Number of nodes with available pods: 2
Apr 4 13:50:49.018: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8783, will wait for the garbage collector to delete the pods
Apr 4 13:50:50.116: INFO: Deleting DaemonSet.extensions daemon-set took: 13.778242ms
Apr 4 13:50:50.416: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.248428ms
Apr 4 13:51:02.219: INFO: Number of nodes with available pods: 0
Apr 4 13:51:02.219: INFO: Number of running nodes: 0, number of available pods: 0
Apr 4 13:51:02.222: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8783/daemonsets","resourceVersion":"3594457"},"items":null}
Apr 4 13:51:02.224: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8783/pods","resourceVersion":"3594457"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:51:02.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8783" for this suite.
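The "simple DaemonSet" above follows the usual shape; a sketch of such a manifest (labels are illustrative, the image matches the one the suite logs elsewhere):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set    # illustrative label
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
      # No toleration for the node-role.kubernetes.io/master:NoSchedule
      # taint is declared, so the controller never places a pod on
      # iruya-control-plane; that is the source of the repeated
      # "skip checking this node" lines in the log.
```

When the test forces a daemon pod's phase to Failed, the DaemonSet controller deletes the failed pod and creates a replacement, which is the "revived" behavior being asserted.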
Apr 4 13:51:08.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:51:08.325: INFO: namespace daemonsets-8783 deletion completed in 6.088269383s
• [SLOW TEST:23.501 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:51:08.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8919.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8919.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8919.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8919.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 4 13:51:14.463: INFO: DNS probes using dns-test-52aee624-73ca-491b-8ef8-22b630bfcdfc succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8919.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8919.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8919.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8919.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 4 13:51:20.565: INFO: File wheezy_udp@dns-test-service-3.dns-8919.svc.cluster.local from pod dns-8919/dns-test-45be3845-8085-4261-8f86-3a474d8016e0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 13:51:20.568: INFO: File jessie_udp@dns-test-service-3.dns-8919.svc.cluster.local from pod dns-8919/dns-test-45be3845-8085-4261-8f86-3a474d8016e0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 13:51:20.568: INFO: Lookups using dns-8919/dns-test-45be3845-8085-4261-8f86-3a474d8016e0 failed for: [wheezy_udp@dns-test-service-3.dns-8919.svc.cluster.local jessie_udp@dns-test-service-3.dns-8919.svc.cluster.local]
Apr 4 13:51:25.588: INFO: File wheezy_udp@dns-test-service-3.dns-8919.svc.cluster.local from pod dns-8919/dns-test-45be3845-8085-4261-8f86-3a474d8016e0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 13:51:25.592: INFO: File jessie_udp@dns-test-service-3.dns-8919.svc.cluster.local from pod dns-8919/dns-test-45be3845-8085-4261-8f86-3a474d8016e0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 13:51:25.592: INFO: Lookups using dns-8919/dns-test-45be3845-8085-4261-8f86-3a474d8016e0 failed for: [wheezy_udp@dns-test-service-3.dns-8919.svc.cluster.local jessie_udp@dns-test-service-3.dns-8919.svc.cluster.local]
Apr 4 13:51:30.574: INFO: File wheezy_udp@dns-test-service-3.dns-8919.svc.cluster.local from pod dns-8919/dns-test-45be3845-8085-4261-8f86-3a474d8016e0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 13:51:30.578: INFO: File jessie_udp@dns-test-service-3.dns-8919.svc.cluster.local from pod dns-8919/dns-test-45be3845-8085-4261-8f86-3a474d8016e0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 13:51:30.578: INFO: Lookups using dns-8919/dns-test-45be3845-8085-4261-8f86-3a474d8016e0 failed for: [wheezy_udp@dns-test-service-3.dns-8919.svc.cluster.local jessie_udp@dns-test-service-3.dns-8919.svc.cluster.local]
Apr 4 13:51:35.572: INFO: File wheezy_udp@dns-test-service-3.dns-8919.svc.cluster.local from pod dns-8919/dns-test-45be3845-8085-4261-8f86-3a474d8016e0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 13:51:35.575: INFO: File jessie_udp@dns-test-service-3.dns-8919.svc.cluster.local from pod dns-8919/dns-test-45be3845-8085-4261-8f86-3a474d8016e0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 13:51:35.575: INFO: Lookups using dns-8919/dns-test-45be3845-8085-4261-8f86-3a474d8016e0 failed for: [wheezy_udp@dns-test-service-3.dns-8919.svc.cluster.local jessie_udp@dns-test-service-3.dns-8919.svc.cluster.local]
Apr 4 13:51:40.574: INFO: File wheezy_udp@dns-test-service-3.dns-8919.svc.cluster.local from pod dns-8919/dns-test-45be3845-8085-4261-8f86-3a474d8016e0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 13:51:40.578: INFO: File jessie_udp@dns-test-service-3.dns-8919.svc.cluster.local from pod dns-8919/dns-test-45be3845-8085-4261-8f86-3a474d8016e0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 13:51:40.578: INFO: Lookups using dns-8919/dns-test-45be3845-8085-4261-8f86-3a474d8016e0 failed for: [wheezy_udp@dns-test-service-3.dns-8919.svc.cluster.local jessie_udp@dns-test-service-3.dns-8919.svc.cluster.local]
Apr 4 13:51:45.576: INFO: DNS probes using dns-test-45be3845-8085-4261-8f86-3a474d8016e0 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8919.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8919.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8919.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8919.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 4 13:51:52.117: INFO: DNS probes using dns-test-3b08018e-2118-4337-ac11-0f3f764bd8e4 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:51:52.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8919" for this suite.
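The service under test is an ExternalName service, which cluster DNS resolves to a CNAME rather than a cluster IP. A minimal sketch of such a service as the probes above see it (the externalName values follow the log; the rest is the standard shape, not the test's literal object):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-8919
spec:
  type: ExternalName
  # Cluster DNS answers dns-test-service-3.dns-8919.svc.cluster.local
  # with a CNAME to this target. The test then patches it to
  # bar.example.com and finally converts the service to type: ClusterIP,
  # after which the same name resolves to an A record instead.
  externalName: foo.example.com
```

The repeated "contains 'foo.example.com.' instead of 'bar.example.com.'" lines are the probe pods observing DNS caches catch up to the patched CNAME target before the lookups eventually succeed.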
Apr 4 13:51:58.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:51:58.302: INFO: namespace dns-8919 deletion completed in 6.084364158s
• [SLOW TEST:49.977 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:51:58.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Apr 4 13:51:58.933: INFO: created pod pod-service-account-defaultsa
Apr 4 13:51:58.933: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Apr 4 13:51:59.001: INFO: created pod pod-service-account-mountsa
Apr 4 13:51:59.001: INFO: pod pod-service-account-mountsa service account token volume mount: true
Apr 4 13:51:59.008: INFO: created pod pod-service-account-nomountsa
Apr 4 13:51:59.008: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Apr 4 13:51:59.032: INFO: created pod pod-service-account-defaultsa-mountspec
Apr 4 13:51:59.032: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Apr 4 13:51:59.044: INFO: created pod pod-service-account-mountsa-mountspec
Apr 4 13:51:59.044: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Apr 4 13:51:59.063: INFO: created pod pod-service-account-nomountsa-mountspec
Apr 4 13:51:59.063: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Apr 4 13:51:59.099: INFO: created pod pod-service-account-defaultsa-nomountspec
Apr 4 13:51:59.099: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Apr 4 13:51:59.139: INFO: created pod pod-service-account-mountsa-nomountspec
Apr 4 13:51:59.139: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Apr 4 13:51:59.141: INFO: created pod pod-service-account-nomountsa-nomountspec
Apr 4 13:51:59.141: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:51:59.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9612" for this suite.
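The pod-name matrix above encodes the precedence rule the test asserts: a pod-level `automountServiceAccountToken` setting ("mountspec"/"nomountspec") overrides the ServiceAccount's own setting ("mountsa"/"nomountsa"), and with both unset the token is mounted. A sketch of one opt-out variant (names here are illustrative, not the test's exact objects):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-nomountspec-demo   # illustrative
spec:
  serviceAccountName: default
  # Pod-level setting wins over the ServiceAccount's
  # automountServiceAccountToken; leaving both unset defaults to mounting.
  automountServiceAccountToken: false
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```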
Apr 4 13:52:25.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:52:25.361: INFO: namespace svcaccounts-9612 deletion completed in 26.152576422s
• [SLOW TEST:27.058 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:52:25.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 4 13:52:29.942: INFO: Successfully updated pod "annotationupdate1ec0f3f6-e9d1-4198-8893-ce4314823f6c"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:52:31.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-916" for this suite.
Apr 4 13:52:53.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:52:54.070: INFO: namespace downward-api-916 deletion completed in 22.086920935s
• [SLOW TEST:28.709 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:52:54.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-45691417-25c2-4b74-8c96-e65b50bbc84d in namespace container-probe-3580
Apr 4 13:52:58.133: INFO: Started pod liveness-45691417-25c2-4b74-8c96-e65b50bbc84d in namespace container-probe-3580
STEP: checking the pod's current state and verifying that restartCount is present
Apr 4 13:52:58.136: INFO: Initial restart count of pod liveness-45691417-25c2-4b74-8c96-e65b50bbc84d is 0
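The restart observed below is driven by an HTTP liveness probe against `/healthz`; when the probe fails, the kubelet kills and restarts the container, incrementing `restartCount`. A minimal sketch of such a pod (values are illustrative, not the test's actual manifest):

```yaml
# Illustrative sketch of an HTTP liveness probe -- not the test's manifest.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # serves /healthz, then starts failing it
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
```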
Apr 4 13:53:14.172: INFO: Restart count of pod container-probe-3580/liveness-45691417-25c2-4b74-8c96-e65b50bbc84d is now 1 (16.03579773s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:53:14.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3580" for this suite.
Apr 4 13:53:20.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:53:20.384: INFO: namespace container-probe-3580 deletion completed in 6.195014868s
• [SLOW TEST:26.313 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:53:20.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 4 13:53:20.423: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e1b01116-2d8e-4d17-8795-7c1b40a6fb76" in namespace "downward-api-3601" to be "success or failure"
Apr 4 13:53:20.442: INFO: Pod "downwardapi-volume-e1b01116-2d8e-4d17-8795-7c1b40a6fb76": Phase="Pending", Reason="", readiness=false. Elapsed: 18.966026ms
Apr 4 13:53:22.446: INFO: Pod "downwardapi-volume-e1b01116-2d8e-4d17-8795-7c1b40a6fb76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022858898s
Apr 4 13:53:24.450: INFO: Pod "downwardapi-volume-e1b01116-2d8e-4d17-8795-7c1b40a6fb76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026562288s
STEP: Saw pod success
Apr 4 13:53:24.450: INFO: Pod "downwardapi-volume-e1b01116-2d8e-4d17-8795-7c1b40a6fb76" satisfied condition "success or failure"
Apr 4 13:53:24.453: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e1b01116-2d8e-4d17-8795-7c1b40a6fb76 container client-container:
STEP: delete the pod
Apr 4 13:53:24.492: INFO: Waiting for pod downwardapi-volume-e1b01116-2d8e-4d17-8795-7c1b40a6fb76 to disappear
Apr 4 13:53:24.510: INFO: Pod downwardapi-volume-e1b01116-2d8e-4d17-8795-7c1b40a6fb76 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:53:24.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3601" for this suite.
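The "podname only" downward API test above boils down to projecting `metadata.name` into a file that the container then cats. A minimal sketch of that pattern (names, image, and paths are illustrative, not the test's fixture):

```yaml
# Illustrative sketch of a downward API volume exposing the pod name.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # written into /etc/podinfo/podname
```

The framework then reads the container's logs (as in the "Trying to get logs" step) and asserts they contain the pod's name.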
Apr 4 13:53:30.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:53:30.609: INFO: namespace downward-api-3601 deletion completed in 6.094772181s
• [SLOW TEST:10.225 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:53:30.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-3778a734-3798-4649-a160-696e3ca00b8a
STEP: Creating a pod to test consume secrets
Apr 4 13:53:30.690: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8ccefa41-9e49-404b-9a78-36657061cc68" in namespace "projected-6811" to be "success or failure"
Apr 4 13:53:30.693: INFO: Pod "pod-projected-secrets-8ccefa41-9e49-404b-9a78-36657061cc68": Phase="Pending", Reason="", readiness=false. Elapsed: 3.666874ms
Apr 4 13:53:32.698: INFO: Pod "pod-projected-secrets-8ccefa41-9e49-404b-9a78-36657061cc68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00778851s
Apr 4 13:53:34.702: INFO: Pod "pod-projected-secrets-8ccefa41-9e49-404b-9a78-36657061cc68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011953932s
STEP: Saw pod success
Apr 4 13:53:34.702: INFO: Pod "pod-projected-secrets-8ccefa41-9e49-404b-9a78-36657061cc68" satisfied condition "success or failure"
Apr 4 13:53:34.705: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-8ccefa41-9e49-404b-9a78-36657061cc68 container projected-secret-volume-test:
STEP: delete the pod
Apr 4 13:53:34.739: INFO: Waiting for pod pod-projected-secrets-8ccefa41-9e49-404b-9a78-36657061cc68 to disappear
Apr 4 13:53:34.754: INFO: Pod pod-projected-secrets-8ccefa41-9e49-404b-9a78-36657061cc68 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:53:34.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6811" for this suite.
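The projected-secret test above mounts a Secret through a `projected` volume and checks the file permissions implied by `defaultMode`. A sketch of the shape of such a spec (secret name, image, and mode are assumptions, not the test's values):

```yaml
# Illustrative sketch of a projected secret volume with defaultMode.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      defaultMode: 0400    # applied to every projected file unless an item overrides it
      sources:
      - secret:
          name: my-secret  # assumed to exist in the namespace
```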
Apr 4 13:53:40.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:53:40.874: INFO: namespace projected-6811 deletion completed in 6.116125884s
• [SLOW TEST:10.265 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:53:40.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 4 13:53:40.972: INFO: Waiting up to 5m0s for pod "pod-2777188c-82c0-480c-9b81-eeac66930f01" in namespace "emptydir-2878" to be "success or failure"
Apr 4 13:53:40.981: INFO: Pod "pod-2777188c-82c0-480c-9b81-eeac66930f01": Phase="Pending", Reason="", readiness=false. Elapsed: 8.969948ms
Apr 4 13:53:43.002: INFO: Pod "pod-2777188c-82c0-480c-9b81-eeac66930f01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030244965s
Apr 4 13:53:45.007: INFO: Pod "pod-2777188c-82c0-480c-9b81-eeac66930f01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034501838s
STEP: Saw pod success
Apr 4 13:53:45.007: INFO: Pod "pod-2777188c-82c0-480c-9b81-eeac66930f01" satisfied condition "success or failure"
Apr 4 13:53:45.010: INFO: Trying to get logs from node iruya-worker pod pod-2777188c-82c0-480c-9b81-eeac66930f01 container test-container:
STEP: delete the pod
Apr 4 13:53:45.024: INFO: Waiting for pod pod-2777188c-82c0-480c-9b81-eeac66930f01 to disappear
Apr 4 13:53:45.045: INFO: Pod pod-2777188c-82c0-480c-9b81-eeac66930f01 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:53:45.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2878" for this suite.
Apr 4 13:53:51.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:53:51.147: INFO: namespace emptydir-2878 deletion completed in 6.098357365s
• [SLOW TEST:10.273 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:53:51.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 4 13:53:51.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:53:55.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2876" for this suite.
Apr 4 13:54:33.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:54:33.369: INFO: namespace pods-2876 deletion completed in 38.092894502s
• [SLOW TEST:42.222 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:54:33.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0404 13:54:36.155097       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 4 13:54:36.155: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:54:36.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5829" for this suite.
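The garbage collector deletes the ReplicaSet and its pods above because each dependent carries an `ownerReferences` entry pointing back at the Deployment; when the owner is deleted without orphaning, dependents are collected. A sketch of what that metadata looks like on the ReplicaSet (names and UID here are invented for illustration):

```yaml
# What the garbage collector keys on: the dependent's ownerReference
# back to its owner (names and UID are invented).
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-deployment-5c689d88bb
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
    uid: 9d6d3a1e-0000-0000-0000-000000000000
    controller: true
    blockOwnerDeletion: true
```

The "expected 0 rs, got 1 rs" lines are the test polling until the collector catches up, which is why several iterations still see the old objects.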
Apr 4 13:54:42.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:54:42.276: INFO: namespace gc-5829 deletion completed in 6.118409086s
• [SLOW TEST:8.907 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:54:42.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
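The lifecycle-hook test that follows creates a pod whose container runs a command via a `postStart` exec hook right after the container starts. A minimal sketch of that shape (image and hook command are assumptions; the actual test's hook calls back to the handler pod created above):

```yaml
# Illustrative sketch of a postStart exec hook -- not the test's manifest.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # Runs inside the container immediately after it starts.
          command: ["sh", "-c", "echo started > /tmp/poststart"]
```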
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 4 13:54:50.414: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 4 13:54:50.446: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 4 13:54:52.446: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 4 13:54:52.452: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 4 13:54:54.446: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 4 13:54:54.464: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 4 13:54:56.446: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 4 13:54:56.450: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 4 13:54:58.446: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 4 13:54:58.449: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 4 13:55:00.446: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 4 13:55:00.450: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 4 13:55:02.446: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 4 13:55:02.450: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 4 13:55:04.446: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 4 13:55:04.450: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 4 13:55:06.446: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 4 13:55:06.450: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 4 13:55:08.446: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 4 13:55:08.451: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:55:08.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6365" for this suite.
Apr 4 13:55:30.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:55:30.565: INFO: namespace container-lifecycle-hook-6365 deletion completed in 22.109033114s
• [SLOW TEST:48.288 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:55:30.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 4 13:55:54.667: INFO: Container started at 2020-04-04 13:55:32 +0000 UTC, pod became ready at 2020-04-04 13:55:54 +0000 UTC
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:55:54.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7536" for this suite.
Apr 4 13:56:16.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:56:16.783: INFO: namespace container-probe-7536 deletion completed in 22.112761096s
• [SLOW TEST:46.218 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:56:16.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 4 13:56:16.872: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23ec9f6a-1efd-4995-a958-39b8fbaf867d" in namespace "downward-api-1837" to be "success or failure"
Apr 4 13:56:16.888: INFO: Pod "downwardapi-volume-23ec9f6a-1efd-4995-a958-39b8fbaf867d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.792193ms
Apr 4 13:56:18.986: INFO: Pod "downwardapi-volume-23ec9f6a-1efd-4995-a958-39b8fbaf867d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114142024s
Apr 4 13:56:20.991: INFO: Pod "downwardapi-volume-23ec9f6a-1efd-4995-a958-39b8fbaf867d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.118608581s
STEP: Saw pod success
Apr 4 13:56:20.991: INFO: Pod "downwardapi-volume-23ec9f6a-1efd-4995-a958-39b8fbaf867d" satisfied condition "success or failure"
Apr 4 13:56:20.994: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-23ec9f6a-1efd-4995-a958-39b8fbaf867d container client-container:
STEP: delete the pod
Apr 4 13:56:21.009: INFO: Waiting for pod downwardapi-volume-23ec9f6a-1efd-4995-a958-39b8fbaf867d to disappear
Apr 4 13:56:21.014: INFO: Pod downwardapi-volume-23ec9f6a-1efd-4995-a958-39b8fbaf867d no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:56:21.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1837" for this suite.
Apr 4 13:56:27.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:56:27.131: INFO: namespace downward-api-1837 deletion completed in 6.113563038s
• [SLOW TEST:10.347 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:56:27.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-r2vp
STEP: Creating a pod to test atomic-volume-subpath
Apr 4 13:56:27.220: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-r2vp" in namespace "subpath-1113" to be "success or failure"
Apr 4 13:56:27.224: INFO: Pod "pod-subpath-test-configmap-r2vp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189243ms
Apr 4 13:56:29.228: INFO: Pod "pod-subpath-test-configmap-r2vp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008040284s
Apr 4 13:56:31.232: INFO: Pod "pod-subpath-test-configmap-r2vp": Phase="Running", Reason="", readiness=true. Elapsed: 4.012224515s
Apr 4 13:56:33.236: INFO: Pod "pod-subpath-test-configmap-r2vp": Phase="Running", Reason="", readiness=true. Elapsed: 6.016313475s
Apr 4 13:56:35.240: INFO: Pod "pod-subpath-test-configmap-r2vp": Phase="Running", Reason="", readiness=true. Elapsed: 8.0200546s
Apr 4 13:56:37.244: INFO: Pod "pod-subpath-test-configmap-r2vp": Phase="Running", Reason="", readiness=true. Elapsed: 10.024487829s
Apr 4 13:56:39.249: INFO: Pod "pod-subpath-test-configmap-r2vp": Phase="Running", Reason="", readiness=true. Elapsed: 12.029273053s
Apr 4 13:56:41.254: INFO: Pod "pod-subpath-test-configmap-r2vp": Phase="Running", Reason="", readiness=true. Elapsed: 14.033863222s
Apr 4 13:56:43.258: INFO: Pod "pod-subpath-test-configmap-r2vp": Phase="Running", Reason="", readiness=true. Elapsed: 16.038462044s
Apr 4 13:56:45.263: INFO: Pod "pod-subpath-test-configmap-r2vp": Phase="Running", Reason="", readiness=true. Elapsed: 18.042894219s
Apr 4 13:56:47.267: INFO: Pod "pod-subpath-test-configmap-r2vp": Phase="Running", Reason="", readiness=true. Elapsed: 20.047150532s
Apr 4 13:56:49.272: INFO: Pod "pod-subpath-test-configmap-r2vp": Phase="Running", Reason="", readiness=true. Elapsed: 22.051689682s
Apr 4 13:56:51.275: INFO: Pod "pod-subpath-test-configmap-r2vp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.054915152s
STEP: Saw pod success
Apr 4 13:56:51.275: INFO: Pod "pod-subpath-test-configmap-r2vp" satisfied condition "success or failure"
Apr 4 13:56:51.277: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-r2vp container test-container-subpath-configmap-r2vp:
STEP: delete the pod
Apr 4 13:56:51.309: INFO: Waiting for pod pod-subpath-test-configmap-r2vp to disappear
Apr 4 13:56:51.323: INFO: Pod pod-subpath-test-configmap-r2vp no longer exists
STEP: Deleting pod pod-subpath-test-configmap-r2vp
Apr 4 13:56:51.323: INFO: Deleting pod "pod-subpath-test-configmap-r2vp" in namespace "subpath-1113"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:56:51.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1113" for this suite.
Apr 4 13:56:57.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:56:57.452: INFO: namespace subpath-1113 deletion completed in 6.122125015s
• [SLOW TEST:30.321 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:56:57.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Apr 4 13:56:57.486: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 13:56:57.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9342" for this suite.
Apr 4 13:57:03.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 13:57:03.651: INFO: namespace kubectl-9342 deletion completed in 6.083490682s
• [SLOW TEST:6.198 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 13:57:03.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Apr 4 13:57:03.755: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-8867,SelfLink:/api/v1/namespaces/watch-8867/configmaps/e2e-watch-test-resource-version,UID:76ab4e58-6f78-4f12-b6fa-901c2f5589c5,ResourceVersion:3595728,Generation:0,CreationTimestamp:2020-04-04 13:57:03 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 4 13:57:03.755: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-8867,SelfLink:/api/v1/namespaces/watch-8867/configmaps/e2e-watch-test-resource-version,UID:76ab4e58-6f78-4f12-b6fa-901c2f5589c5,ResourceVersion:3595729,Generation:0,CreationTimestamp:2020-04-04 13:57:03 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:57:03.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8867" for this suite. Apr 4 13:57:09.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:57:09.846: INFO: namespace watch-8867 deletion completed in 6.088163409s • [SLOW TEST:6.194 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:57:09.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 4 13:57:09.885: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d7f72f66-43be-47f2-8303-5e94a1b9ce9d" in namespace "projected-9402" to be "success or failure" Apr 4 13:57:09.905: INFO: Pod "downwardapi-volume-d7f72f66-43be-47f2-8303-5e94a1b9ce9d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.330495ms Apr 4 13:57:11.909: INFO: Pod "downwardapi-volume-d7f72f66-43be-47f2-8303-5e94a1b9ce9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023642711s Apr 4 13:57:13.913: INFO: Pod "downwardapi-volume-d7f72f66-43be-47f2-8303-5e94a1b9ce9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027913669s STEP: Saw pod success Apr 4 13:57:13.913: INFO: Pod "downwardapi-volume-d7f72f66-43be-47f2-8303-5e94a1b9ce9d" satisfied condition "success or failure" Apr 4 13:57:13.916: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-d7f72f66-43be-47f2-8303-5e94a1b9ce9d container client-container: STEP: delete the pod Apr 4 13:57:13.980: INFO: Waiting for pod downwardapi-volume-d7f72f66-43be-47f2-8303-5e94a1b9ce9d to disappear Apr 4 13:57:13.985: INFO: Pod downwardapi-volume-d7f72f66-43be-47f2-8303-5e94a1b9ce9d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:57:13.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9402" for this suite. 
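The "should set DefaultMode on files" run above exercises a projected downward API volume. A minimal manifest approximating what this test creates might look like the following sketch; the pod name, image, and file path are illustrative assumptions, not values taken from the log:

```yaml
# Hypothetical pod approximating the e2e test: a projected downwardAPI
# volume whose files get a non-default mode via defaultMode.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400        # the DefaultMode behavior under test
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```

The test itself asserts on the mode bits the kubelet applies to the projected files, which is why the pod only needs to run to completion ("success or failure") rather than stay up.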
Apr 4 13:57:20.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:57:20.068: INFO: namespace projected-9402 deletion completed in 6.07945412s • [SLOW TEST:10.222 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:57:20.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 4 13:57:20.189: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"4da60a99-84c7-4bdd-9b9b-2977d42907f1", Controller:(*bool)(0xc002611dca), BlockOwnerDeletion:(*bool)(0xc002611dcb)}} Apr 4 13:57:20.196: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"75ab208d-d4e9-4e0b-ab68-e7cbccb099ad", Controller:(*bool)(0xc0022ee612), BlockOwnerDeletion:(*bool)(0xc0022ee613)}} Apr 4 13:57:20.201: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", 
Name:"pod2", UID:"c9d3bd1f-0025-4e3c-9fa5-6ab1896594a5", Controller:(*bool)(0xc002611f7a), BlockOwnerDeletion:(*bool)(0xc002611f7b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:57:25.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8796" for this suite. Apr 4 13:57:31.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:57:31.325: INFO: namespace gc-8796 deletion completed in 6.092443425s • [SLOW TEST:11.256 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:57:31.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Apr 4 13:57:31.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - 
--namespace=kubectl-7140' Apr 4 13:57:34.131: INFO: stderr: "" Apr 4 13:57:34.131: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Apr 4 13:57:35.137: INFO: Selector matched 1 pods for map[app:redis] Apr 4 13:57:35.137: INFO: Found 0 / 1 Apr 4 13:57:36.136: INFO: Selector matched 1 pods for map[app:redis] Apr 4 13:57:36.136: INFO: Found 0 / 1 Apr 4 13:57:37.136: INFO: Selector matched 1 pods for map[app:redis] Apr 4 13:57:37.136: INFO: Found 0 / 1 Apr 4 13:57:38.136: INFO: Selector matched 1 pods for map[app:redis] Apr 4 13:57:38.136: INFO: Found 1 / 1 Apr 4 13:57:38.136: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Apr 4 13:57:38.139: INFO: Selector matched 1 pods for map[app:redis] Apr 4 13:57:38.139: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 4 13:57:38.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-jgkrl --namespace=kubectl-7140 -p {"metadata":{"annotations":{"x":"y"}}}' Apr 4 13:57:38.245: INFO: stderr: "" Apr 4 13:57:38.245: INFO: stdout: "pod/redis-master-jgkrl patched\n" STEP: checking annotations Apr 4 13:57:38.248: INFO: Selector matched 1 pods for map[app:redis] Apr 4 13:57:38.248: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:57:38.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7140" for this suite. 
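The `kubectl patch` invocation logged above passes its patch inline with `-p`. The patch document it applies (visible verbatim in the log) is a strategic merge patch that adds a single annotation:

```json
{
  "metadata": {
    "annotations": {
      "x": "y"
    }
  }
}
```

Because annotations are merged by key, this adds `x: y` without disturbing any annotations already present on the pod, which is what the subsequent "checking annotations" step verifies.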
Apr 4 13:58:00.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:58:00.351: INFO: namespace kubectl-7140 deletion completed in 22.099692015s • [SLOW TEST:29.025 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:58:00.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-6716 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 4 13:58:00.418: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 4 13:58:24.519: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.37:8080/dial?request=hostName&protocol=udp&host=10.244.2.36&port=8081&tries=1'] Namespace:pod-network-test-6716 
PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 13:58:24.519: INFO: >>> kubeConfig: /root/.kube/config I0404 13:58:24.555402 6 log.go:172] (0xc0009d0b00) (0xc000fe4a00) Create stream I0404 13:58:24.555427 6 log.go:172] (0xc0009d0b00) (0xc000fe4a00) Stream added, broadcasting: 1 I0404 13:58:24.558421 6 log.go:172] (0xc0009d0b00) Reply frame received for 1 I0404 13:58:24.558474 6 log.go:172] (0xc0009d0b00) (0xc002f64c80) Create stream I0404 13:58:24.558490 6 log.go:172] (0xc0009d0b00) (0xc002f64c80) Stream added, broadcasting: 3 I0404 13:58:24.559663 6 log.go:172] (0xc0009d0b00) Reply frame received for 3 I0404 13:58:24.559708 6 log.go:172] (0xc0009d0b00) (0xc002f64d20) Create stream I0404 13:58:24.559725 6 log.go:172] (0xc0009d0b00) (0xc002f64d20) Stream added, broadcasting: 5 I0404 13:58:24.560825 6 log.go:172] (0xc0009d0b00) Reply frame received for 5 I0404 13:58:24.661650 6 log.go:172] (0xc0009d0b00) Data frame received for 3 I0404 13:58:24.661681 6 log.go:172] (0xc002f64c80) (3) Data frame handling I0404 13:58:24.661700 6 log.go:172] (0xc002f64c80) (3) Data frame sent I0404 13:58:24.662062 6 log.go:172] (0xc0009d0b00) Data frame received for 3 I0404 13:58:24.662085 6 log.go:172] (0xc002f64c80) (3) Data frame handling I0404 13:58:24.662379 6 log.go:172] (0xc0009d0b00) Data frame received for 5 I0404 13:58:24.662393 6 log.go:172] (0xc002f64d20) (5) Data frame handling I0404 13:58:24.664198 6 log.go:172] (0xc0009d0b00) Data frame received for 1 I0404 13:58:24.664252 6 log.go:172] (0xc000fe4a00) (1) Data frame handling I0404 13:58:24.664287 6 log.go:172] (0xc000fe4a00) (1) Data frame sent I0404 13:58:24.664393 6 log.go:172] (0xc0009d0b00) (0xc000fe4a00) Stream removed, broadcasting: 1 I0404 13:58:24.664514 6 log.go:172] (0xc0009d0b00) Go away received I0404 13:58:24.664540 6 log.go:172] (0xc0009d0b00) (0xc000fe4a00) Stream removed, broadcasting: 1 I0404 13:58:24.664554 6 log.go:172] 
(0xc0009d0b00) (0xc002f64c80) Stream removed, broadcasting: 3 I0404 13:58:24.664563 6 log.go:172] (0xc0009d0b00) (0xc002f64d20) Stream removed, broadcasting: 5 Apr 4 13:58:24.664: INFO: Waiting for endpoints: map[] Apr 4 13:58:24.668: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.37:8080/dial?request=hostName&protocol=udp&host=10.244.1.151&port=8081&tries=1'] Namespace:pod-network-test-6716 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 13:58:24.668: INFO: >>> kubeConfig: /root/.kube/config I0404 13:58:24.695493 6 log.go:172] (0xc000b99a20) (0xc0020521e0) Create stream I0404 13:58:24.695515 6 log.go:172] (0xc000b99a20) (0xc0020521e0) Stream added, broadcasting: 1 I0404 13:58:24.702110 6 log.go:172] (0xc000b99a20) Reply frame received for 1 I0404 13:58:24.702176 6 log.go:172] (0xc000b99a20) (0xc0017c8320) Create stream I0404 13:58:24.702200 6 log.go:172] (0xc000b99a20) (0xc0017c8320) Stream added, broadcasting: 3 I0404 13:58:24.703968 6 log.go:172] (0xc000b99a20) Reply frame received for 3 I0404 13:58:24.704036 6 log.go:172] (0xc000b99a20) (0xc002052280) Create stream I0404 13:58:24.704080 6 log.go:172] (0xc000b99a20) (0xc002052280) Stream added, broadcasting: 5 I0404 13:58:24.705865 6 log.go:172] (0xc000b99a20) Reply frame received for 5 I0404 13:58:24.777513 6 log.go:172] (0xc000b99a20) Data frame received for 3 I0404 13:58:24.777533 6 log.go:172] (0xc0017c8320) (3) Data frame handling I0404 13:58:24.777541 6 log.go:172] (0xc0017c8320) (3) Data frame sent I0404 13:58:24.778094 6 log.go:172] (0xc000b99a20) Data frame received for 3 I0404 13:58:24.778104 6 log.go:172] (0xc0017c8320) (3) Data frame handling I0404 13:58:24.778335 6 log.go:172] (0xc000b99a20) Data frame received for 5 I0404 13:58:24.778390 6 log.go:172] (0xc002052280) (5) Data frame handling I0404 13:58:24.779815 6 log.go:172] (0xc000b99a20) Data frame received for 1 I0404 13:58:24.779833 6 
log.go:172] (0xc0020521e0) (1) Data frame handling I0404 13:58:24.779856 6 log.go:172] (0xc0020521e0) (1) Data frame sent I0404 13:58:24.779871 6 log.go:172] (0xc000b99a20) (0xc0020521e0) Stream removed, broadcasting: 1 I0404 13:58:24.779983 6 log.go:172] (0xc000b99a20) (0xc0020521e0) Stream removed, broadcasting: 1 I0404 13:58:24.779999 6 log.go:172] (0xc000b99a20) (0xc0017c8320) Stream removed, broadcasting: 3 I0404 13:58:24.780014 6 log.go:172] (0xc000b99a20) Go away received I0404 13:58:24.780074 6 log.go:172] (0xc000b99a20) (0xc002052280) Stream removed, broadcasting: 5 Apr 4 13:58:24.780: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:58:24.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6716" for this suite. Apr 4 13:58:46.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:58:46.899: INFO: namespace pod-network-test-6716 deletion completed in 22.114791571s • [SLOW TEST:46.547 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 
STEP: Creating a kubernetes client Apr 4 13:58:46.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-b8d814e2-2317-472b-b429-184ec7c74fff STEP: Creating configMap with name cm-test-opt-upd-7293927d-38f5-43b1-ae5e-68989a73b0a1 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-b8d814e2-2317-472b-b429-184ec7c74fff STEP: Updating configmap cm-test-opt-upd-7293927d-38f5-43b1-ae5e-68989a73b0a1 STEP: Creating configMap with name cm-test-opt-create-61815aab-337e-4e1c-ae0d-01382bb3ff15 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:58:57.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3305" for this suite. 
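The ConfigMap test above relies on *optional* configMap volume sources: one referenced ConfigMap is deleted, one is updated, and one is created only after the pod starts, and the volume contents must converge. A sketch of the relevant volume stanza, with invented names, is:

```yaml
# Illustrative fragment only; names are hypothetical, not from this run.
volumes:
- name: cm-volume-del
  configMap:
    name: cm-test-opt-del      # deleted after pod creation
    optional: true             # pod keeps Running even when the ConfigMap is absent
- name: cm-volume-create
  configMap:
    name: cm-test-opt-create   # created only after the pod starts
    optional: true             # mount starts empty, then populates
```

With `optional: true`, a missing ConfigMap is not a mount error; the kubelet's sync loop later reflects creations, updates, and deletions into the mounted files, which is the "waiting to observe update in volume" step in the log.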
Apr 4 13:59:19.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:59:19.238: INFO: namespace configmap-3305 deletion completed in 22.095476941s • [SLOW TEST:32.339 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:59:19.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 4 13:59:19.308: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:59:31.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8322" for this suite. 
Apr 4 13:59:37.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:59:37.978: INFO: namespace pods-8322 deletion completed in 6.094814373s • [SLOW TEST:18.740 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:59:37.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 4 13:59:38.057: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7798,SelfLink:/api/v1/namespaces/watch-7798/configmaps/e2e-watch-test-watch-closed,UID:c9ae92d6-15d5-418f-b35b-30d922efaf3e,ResourceVersion:3596283,Generation:0,CreationTimestamp:2020-04-04 13:59:38 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 4 13:59:38.057: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7798,SelfLink:/api/v1/namespaces/watch-7798/configmaps/e2e-watch-test-watch-closed,UID:c9ae92d6-15d5-418f-b35b-30d922efaf3e,ResourceVersion:3596284,Generation:0,CreationTimestamp:2020-04-04 13:59:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 4 13:59:38.104: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7798,SelfLink:/api/v1/namespaces/watch-7798/configmaps/e2e-watch-test-watch-closed,UID:c9ae92d6-15d5-418f-b35b-30d922efaf3e,ResourceVersion:3596286,Generation:0,CreationTimestamp:2020-04-04 13:59:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 4 
13:59:38.104: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7798,SelfLink:/api/v1/namespaces/watch-7798/configmaps/e2e-watch-test-watch-closed,UID:c9ae92d6-15d5-418f-b35b-30d922efaf3e,ResourceVersion:3596287,Generation:0,CreationTimestamp:2020-04-04 13:59:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:59:38.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7798" for this suite. Apr 4 13:59:44.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:59:44.231: INFO: namespace watch-7798 deletion completed in 6.122656601s • [SLOW TEST:6.252 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes 
client Apr 4 13:59:44.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 4 13:59:44.303: INFO: Waiting up to 5m0s for pod "pod-279a379a-787e-49b3-a50a-9508368dca96" in namespace "emptydir-7812" to be "success or failure" Apr 4 13:59:44.306: INFO: Pod "pod-279a379a-787e-49b3-a50a-9508368dca96": Phase="Pending", Reason="", readiness=false. Elapsed: 3.095612ms Apr 4 13:59:46.310: INFO: Pod "pod-279a379a-787e-49b3-a50a-9508368dca96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006928464s Apr 4 13:59:48.314: INFO: Pod "pod-279a379a-787e-49b3-a50a-9508368dca96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011255846s STEP: Saw pod success Apr 4 13:59:48.314: INFO: Pod "pod-279a379a-787e-49b3-a50a-9508368dca96" satisfied condition "success or failure" Apr 4 13:59:48.317: INFO: Trying to get logs from node iruya-worker pod pod-279a379a-787e-49b3-a50a-9508368dca96 container test-container: STEP: delete the pod Apr 4 13:59:48.350: INFO: Waiting for pod pod-279a379a-787e-49b3-a50a-9508368dca96 to disappear Apr 4 13:59:48.361: INFO: Pod pod-279a379a-787e-49b3-a50a-9508368dca96 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:59:48.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7812" for this suite. 
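The "(root,0777,tmpfs)" EmptyDir case above runs a pod whose emptyDir is backed by tmpfs and checks the permissions of a file the test container writes. A hypothetical equivalent manifest (image, names, and command are assumptions for illustration) is:

```yaml
# Sketch of the test pod: emptyDir with medium: Memory mounts a tmpfs,
# and the container verifies 0777 permissions on a file it creates.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f && mount | grep test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory     # tmpfs-backed emptyDir, per the [LinuxOnly] tag
```

`medium: Memory` is why the case is `[LinuxOnly]`: the tmpfs backing is a Linux mount semantic.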
Apr 4 13:59:54.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 13:59:54.457: INFO: namespace emptydir-7812 deletion completed in 6.092722207s • [SLOW TEST:10.225 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 13:59:54.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-c3526784-3f1b-48ca-8192-b7b7042d7456 STEP: Creating a pod to test consume secrets Apr 4 13:59:54.550: INFO: Waiting up to 5m0s for pod "pod-secrets-dba73e45-a256-49d3-8f26-945725bc4796" in namespace "secrets-9559" to be "success or failure" Apr 4 13:59:54.559: INFO: Pod "pod-secrets-dba73e45-a256-49d3-8f26-945725bc4796": Phase="Pending", Reason="", readiness=false. Elapsed: 8.700387ms Apr 4 13:59:56.563: INFO: Pod "pod-secrets-dba73e45-a256-49d3-8f26-945725bc4796": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012054674s Apr 4 13:59:58.566: INFO: Pod "pod-secrets-dba73e45-a256-49d3-8f26-945725bc4796": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015687543s STEP: Saw pod success Apr 4 13:59:58.566: INFO: Pod "pod-secrets-dba73e45-a256-49d3-8f26-945725bc4796" satisfied condition "success or failure" Apr 4 13:59:58.569: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-dba73e45-a256-49d3-8f26-945725bc4796 container secret-volume-test: STEP: delete the pod Apr 4 13:59:58.598: INFO: Waiting for pod pod-secrets-dba73e45-a256-49d3-8f26-945725bc4796 to disappear Apr 4 13:59:58.613: INFO: Pod pod-secrets-dba73e45-a256-49d3-8f26-945725bc4796 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 13:59:58.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9559" for this suite. Apr 4 14:00:04.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:00:04.700: INFO: namespace secrets-9559 deletion completed in 6.083398512s • [SLOW TEST:10.243 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:00:04.701: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Apr 4 14:00:04.771: INFO: Waiting up to 5m0s for pod "client-containers-56cacfeb-14dc-47f3-8778-b61203c5a5bf" in namespace "containers-844" to be "success or failure" Apr 4 14:00:04.784: INFO: Pod "client-containers-56cacfeb-14dc-47f3-8778-b61203c5a5bf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.588124ms Apr 4 14:00:06.798: INFO: Pod "client-containers-56cacfeb-14dc-47f3-8778-b61203c5a5bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02651453s Apr 4 14:00:08.802: INFO: Pod "client-containers-56cacfeb-14dc-47f3-8778-b61203c5a5bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030704366s STEP: Saw pod success Apr 4 14:00:08.802: INFO: Pod "client-containers-56cacfeb-14dc-47f3-8778-b61203c5a5bf" satisfied condition "success or failure" Apr 4 14:00:08.805: INFO: Trying to get logs from node iruya-worker pod client-containers-56cacfeb-14dc-47f3-8778-b61203c5a5bf container test-container: STEP: delete the pod Apr 4 14:00:08.877: INFO: Waiting for pod client-containers-56cacfeb-14dc-47f3-8778-b61203c5a5bf to disappear Apr 4 14:00:08.881: INFO: Pod client-containers-56cacfeb-14dc-47f3-8778-b61203c5a5bf no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:00:08.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-844" for this suite. 
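The Docker Containers test above ("override all") checks that setting both `command` and `args` on a container replaces the image's ENTRYPOINT and CMD. A sketch of the shape of that pod spec, with an assumed image and argument values:

```python
# Hypothetical sketch: overriding an image's default entrypoint and
# arguments, as exercised by the "override all" test above.
def override_pod(name: str, command: list, args: list) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "test-container",
                "image": "busybox",   # assumed image
                "command": command,   # replaces the image ENTRYPOINT
                "args": args,         # replaces the image CMD
            }],
        },
    }

pod = override_pod("client-containers-demo",
                   ["/bin/echo"], ["override", "arguments"])
```

Only the fields that are set are overridden: a pod that sets `args` alone keeps the image's ENTRYPOINT and substitutes only the CMD.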
Apr 4 14:00:14.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:00:15.014: INFO: namespace containers-844 deletion completed in 6.129342892s • [SLOW TEST:10.313 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:00:15.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 4 14:00:15.068: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:00:20.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "init-container-8693" for this suite. Apr 4 14:00:26.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:00:26.663: INFO: namespace init-container-8693 deletion completed in 6.102807794s • [SLOW TEST:11.648 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:00:26.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 4 14:00:31.819: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:00:32.847: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1524" for this suite. Apr 4 14:00:54.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:00:54.979: INFO: namespace replicaset-1524 deletion completed in 22.128607532s • [SLOW TEST:28.315 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:00:54.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Apr 4 14:00:59.563: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3288 pod-service-account-11b97652-4f60-4c19-ace6-c4896f5eccf5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 4 14:00:59.803: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3288 pod-service-account-11b97652-4f60-4c19-ace6-c4896f5eccf5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a 
file in the container Apr 4 14:01:00.011: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3288 pod-service-account-11b97652-4f60-4c19-ace6-c4896f5eccf5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:01:00.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3288" for this suite. Apr 4 14:01:06.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:01:06.330: INFO: namespace svcaccounts-3288 deletion completed in 6.092610154s • [SLOW TEST:11.350 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:01:06.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection 
with secret that has name projected-secret-test-3961f4b7-9bd3-4126-b3a5-0608cd62225a STEP: Creating a pod to test consume secrets Apr 4 14:01:06.398: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fa39f0c2-0878-4ef1-b322-b77c3f9e756f" in namespace "projected-6289" to be "success or failure" Apr 4 14:01:06.404: INFO: Pod "pod-projected-secrets-fa39f0c2-0878-4ef1-b322-b77c3f9e756f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.77518ms Apr 4 14:01:08.409: INFO: Pod "pod-projected-secrets-fa39f0c2-0878-4ef1-b322-b77c3f9e756f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011254518s Apr 4 14:01:10.413: INFO: Pod "pod-projected-secrets-fa39f0c2-0878-4ef1-b322-b77c3f9e756f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015580933s STEP: Saw pod success Apr 4 14:01:10.413: INFO: Pod "pod-projected-secrets-fa39f0c2-0878-4ef1-b322-b77c3f9e756f" satisfied condition "success or failure" Apr 4 14:01:10.416: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-fa39f0c2-0878-4ef1-b322-b77c3f9e756f container projected-secret-volume-test: STEP: delete the pod Apr 4 14:01:10.436: INFO: Waiting for pod pod-projected-secrets-fa39f0c2-0878-4ef1-b322-b77c3f9e756f to disappear Apr 4 14:01:10.571: INFO: Pod pod-projected-secrets-fa39f0c2-0878-4ef1-b322-b77c3f9e756f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:01:10.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6289" for this suite. 
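The projected-secret test above runs the consuming container as a non-root user and relies on `fsGroup` plus the projected volume's `defaultMode` to make the secret files readable. A sketch of that combination — the UID, GID, mode, and mount path below are assumed illustrative values, not read from the log:

```python
# Hypothetical sketch: projected secret volume consumed as non-root,
# with defaultMode and fsGroup set (all numeric values assumed).
def projected_secret_pod(secret_name: str) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-projected-secrets-demo"},
        "spec": {
            "restartPolicy": "Never",
            # Non-root UID; fsGroup makes the projected files group-readable
            # by that UID.
            "securityContext": {"runAsUser": 1000, "fsGroup": 1001},
            "containers": [{
                "name": "projected-secret-volume-test",
                "image": "busybox",  # assumed image
                "command": ["cat", "/etc/projected-secret-volume/data-1"],
                "volumeMounts": [{
                    "name": "projected-secret-volume",
                    "mountPath": "/etc/projected-secret-volume",
                    "readOnly": True,
                }],
            }],
            "volumes": [{
                "name": "projected-secret-volume",
                "projected": {
                    "defaultMode": 0o400,  # file mode applied to projected keys
                    "sources": [{"secret": {"name": secret_name}}],
                },
            }],
        },
    }
```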
Apr 4 14:01:16.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:01:16.758: INFO: namespace projected-6289 deletion completed in 6.182479669s • [SLOW TEST:10.427 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:01:16.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:01:42.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9623" for this suite. Apr 4 14:01:48.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:01:49.061: INFO: namespace namespaces-9623 deletion completed in 6.085230376s STEP: Destroying namespace "nsdeletetest-2996" for this suite. Apr 4 14:01:49.063: INFO: Namespace nsdeletetest-2996 was already deleted STEP: Destroying namespace "nsdeletetest-4428" for this suite. Apr 4 14:01:55.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:01:55.195: INFO: namespace nsdeletetest-4428 deletion completed in 6.132212905s • [SLOW TEST:38.437 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:01:55.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in 
namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 4 14:01:55.269: INFO: Waiting up to 5m0s for pod "pod-a6a1b5af-67cf-4792-99fb-c2d4d290b61d" in namespace "emptydir-6960" to be "success or failure" Apr 4 14:01:55.289: INFO: Pod "pod-a6a1b5af-67cf-4792-99fb-c2d4d290b61d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.996854ms Apr 4 14:01:57.293: INFO: Pod "pod-a6a1b5af-67cf-4792-99fb-c2d4d290b61d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023598534s Apr 4 14:01:59.297: INFO: Pod "pod-a6a1b5af-67cf-4792-99fb-c2d4d290b61d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028016273s STEP: Saw pod success Apr 4 14:01:59.297: INFO: Pod "pod-a6a1b5af-67cf-4792-99fb-c2d4d290b61d" satisfied condition "success or failure" Apr 4 14:01:59.300: INFO: Trying to get logs from node iruya-worker2 pod pod-a6a1b5af-67cf-4792-99fb-c2d4d290b61d container test-container: STEP: delete the pod Apr 4 14:01:59.333: INFO: Waiting for pod pod-a6a1b5af-67cf-4792-99fb-c2d4d290b61d to disappear Apr 4 14:01:59.345: INFO: Pod pod-a6a1b5af-67cf-4792-99fb-c2d4d290b61d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:01:59.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6960" for this suite. 
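The non-root emptydir variant above differs from the root variants only in the pod-level security context: the container runs under a non-zero UID, and the 0777 mode on the tmpfs mount is what lets that UID write into the volume. A sketch, with the UID and image assumed:

```python
# Hypothetical sketch: the (non-root,0777,tmpfs) emptyDir case —
# same volume as the root variant, but run under an assumed non-root UID.
def nonroot_emptydir_pod(uid: int = 1000) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-emptydir-nonroot-demo"},
        "spec": {
            "restartPolicy": "Never",
            "securityContext": {"runAsUser": uid},  # non-root UID (assumed)
            "containers": [{
                "name": "test-container",
                "image": "busybox",  # assumed image
                "command": ["sh", "-c", "touch /test-volume/f && ls -ld /test-volume"],
                "volumeMounts": [{
                    "name": "test-volume",
                    "mountPath": "/test-volume",
                }],
            }],
            "volumes": [{
                "name": "test-volume",
                "emptyDir": {"medium": "Memory"},
            }],
        },
    }
```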
Apr 4 14:02:05.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:02:05.458: INFO: namespace emptydir-6960 deletion completed in 6.109681303s • [SLOW TEST:10.263 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:02:05.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-45c571cf-88be-461d-9410-a4048623745d STEP: Creating a pod to test consume secrets Apr 4 14:02:05.522: INFO: Waiting up to 5m0s for pod "pod-secrets-35ff4ee2-c18e-4cf3-9e15-305239559d97" in namespace "secrets-9051" to be "success or failure" Apr 4 14:02:05.525: INFO: Pod "pod-secrets-35ff4ee2-c18e-4cf3-9e15-305239559d97": Phase="Pending", Reason="", readiness=false. Elapsed: 3.632884ms Apr 4 14:02:07.529: INFO: Pod "pod-secrets-35ff4ee2-c18e-4cf3-9e15-305239559d97": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007407093s Apr 4 14:02:09.534: INFO: Pod "pod-secrets-35ff4ee2-c18e-4cf3-9e15-305239559d97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011837234s STEP: Saw pod success Apr 4 14:02:09.534: INFO: Pod "pod-secrets-35ff4ee2-c18e-4cf3-9e15-305239559d97" satisfied condition "success or failure" Apr 4 14:02:09.537: INFO: Trying to get logs from node iruya-worker pod pod-secrets-35ff4ee2-c18e-4cf3-9e15-305239559d97 container secret-volume-test: STEP: delete the pod Apr 4 14:02:09.572: INFO: Waiting for pod pod-secrets-35ff4ee2-c18e-4cf3-9e15-305239559d97 to disappear Apr 4 14:02:09.591: INFO: Pod pod-secrets-35ff4ee2-c18e-4cf3-9e15-305239559d97 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:02:09.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9051" for this suite. Apr 4 14:02:15.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:02:15.686: INFO: namespace secrets-9051 deletion completed in 6.090914741s • [SLOW TEST:10.227 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:02:15.686: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 4 14:02:15.766: INFO: Waiting up to 5m0s for pod "pod-3f644913-91fd-413e-82c7-e603617649ec" in namespace "emptydir-5270" to be "success or failure" Apr 4 14:02:15.770: INFO: Pod "pod-3f644913-91fd-413e-82c7-e603617649ec": Phase="Pending", Reason="", readiness=false. Elapsed: 3.857983ms Apr 4 14:02:17.774: INFO: Pod "pod-3f644913-91fd-413e-82c7-e603617649ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008231563s Apr 4 14:02:19.779: INFO: Pod "pod-3f644913-91fd-413e-82c7-e603617649ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012852113s STEP: Saw pod success Apr 4 14:02:19.779: INFO: Pod "pod-3f644913-91fd-413e-82c7-e603617649ec" satisfied condition "success or failure" Apr 4 14:02:19.782: INFO: Trying to get logs from node iruya-worker2 pod pod-3f644913-91fd-413e-82c7-e603617649ec container test-container: STEP: delete the pod Apr 4 14:02:19.843: INFO: Waiting for pod pod-3f644913-91fd-413e-82c7-e603617649ec to disappear Apr 4 14:02:19.848: INFO: Pod pod-3f644913-91fd-413e-82c7-e603617649ec no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:02:19.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5270" for this suite. 
Apr 4 14:02:25.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:02:25.949: INFO: namespace emptydir-5270 deletion completed in 6.09831447s • [SLOW TEST:10.263 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:02:25.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-bc521747-caae-4f95-be13-437539ef6f73 STEP: Creating secret with name s-test-opt-upd-7322ce11-0ed4-4378-9907-1283c7c34480 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-bc521747-caae-4f95-be13-437539ef6f73 STEP: Updating secret s-test-opt-upd-7322ce11-0ed4-4378-9907-1283c7c34480 STEP: Creating secret with name s-test-opt-create-b468d085-225f-498b-926c-e2dcdf3da1ab STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:03:42.458: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1752" for this suite. Apr 4 14:04:04.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:04:04.551: INFO: namespace secrets-1752 deletion completed in 22.089700647s • [SLOW TEST:98.602 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:04:04.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:04:08.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5453" for this suite. 
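The Kubelet test above schedules a busybox container with a read-only root filesystem and verifies that writes to it fail. The key field is the container-level `securityContext.readOnlyRootFilesystem`; a sketch with an assumed write-probe command:

```python
# Hypothetical sketch: read-only root filesystem, as in the kubelet
# "should not write to root filesystem" test; the probe command is assumed.
def readonly_rootfs_pod() -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "busybox-readonly-demo"},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "busybox",
                "image": "busybox",
                # Attempt a write to the root filesystem; with
                # readOnlyRootFilesystem the shell redirect should fail
                # with a read-only filesystem error.
                "command": ["sh", "-c", "echo test > /file; sleep 240"],
                "securityContext": {"readOnlyRootFilesystem": True},
            }],
        },
    }
```

Mounted volumes (emptyDir, secrets, and so on) remain writable as their own modes allow; only the container's root filesystem is locked down.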
Apr 4 14:04:46.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:04:46.721: INFO: namespace kubelet-test-5453 deletion completed in 38.090349178s • [SLOW TEST:42.169 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:04:46.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-3172/secret-test-4ceaf6b7-b274-4cee-aaa0-b86696f63c5f STEP: Creating a pod to test consume secrets Apr 4 14:04:46.823: INFO: Waiting up to 5m0s for pod "pod-configmaps-cd225dc8-af4e-4c78-9f24-add90fc4d443" in namespace "secrets-3172" to be "success or failure" Apr 4 14:04:46.842: INFO: Pod "pod-configmaps-cd225dc8-af4e-4c78-9f24-add90fc4d443": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.260942ms Apr 4 14:04:48.885: INFO: Pod "pod-configmaps-cd225dc8-af4e-4c78-9f24-add90fc4d443": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062279958s Apr 4 14:04:50.889: INFO: Pod "pod-configmaps-cd225dc8-af4e-4c78-9f24-add90fc4d443": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06645462s STEP: Saw pod success Apr 4 14:04:50.889: INFO: Pod "pod-configmaps-cd225dc8-af4e-4c78-9f24-add90fc4d443" satisfied condition "success or failure" Apr 4 14:04:50.892: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-cd225dc8-af4e-4c78-9f24-add90fc4d443 container env-test: STEP: delete the pod Apr 4 14:04:50.912: INFO: Waiting for pod pod-configmaps-cd225dc8-af4e-4c78-9f24-add90fc4d443 to disappear Apr 4 14:04:50.917: INFO: Pod pod-configmaps-cd225dc8-af4e-4c78-9f24-add90fc4d443 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:04:50.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3172" for this suite. 
Apr 4 14:04:56.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:04:57.048: INFO: namespace secrets-3172 deletion completed in 6.128380475s • [SLOW TEST:10.327 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:04:57.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 4 14:04:57.112: INFO: Waiting up to 5m0s for pod "pod-286c7350-29a9-418d-bebd-f43d8bd8e2c7" in namespace "emptydir-7757" to be "success or failure" Apr 4 14:04:57.115: INFO: Pod "pod-286c7350-29a9-418d-bebd-f43d8bd8e2c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.943634ms Apr 4 14:04:59.118: INFO: Pod "pod-286c7350-29a9-418d-bebd-f43d8bd8e2c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006337256s Apr 4 14:05:01.122: INFO: Pod "pod-286c7350-29a9-418d-bebd-f43d8bd8e2c7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010818941s STEP: Saw pod success Apr 4 14:05:01.123: INFO: Pod "pod-286c7350-29a9-418d-bebd-f43d8bd8e2c7" satisfied condition "success or failure" Apr 4 14:05:01.126: INFO: Trying to get logs from node iruya-worker2 pod pod-286c7350-29a9-418d-bebd-f43d8bd8e2c7 container test-container: STEP: delete the pod Apr 4 14:05:01.173: INFO: Waiting for pod pod-286c7350-29a9-418d-bebd-f43d8bd8e2c7 to disappear Apr 4 14:05:01.180: INFO: Pod pod-286c7350-29a9-418d-bebd-f43d8bd8e2c7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:05:01.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7757" for this suite. Apr 4 14:05:07.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:05:07.285: INFO: namespace emptydir-7757 deletion completed in 6.101498909s • [SLOW TEST:10.237 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:05:07.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a 
pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-3e29d985-1b6b-44b3-9465-792e1d50dba9 STEP: Creating a pod to test consume secrets Apr 4 14:05:07.386: INFO: Waiting up to 5m0s for pod "pod-secrets-936e3515-93f5-4318-b74e-b4bcb481c8f3" in namespace "secrets-7943" to be "success or failure" Apr 4 14:05:07.390: INFO: Pod "pod-secrets-936e3515-93f5-4318-b74e-b4bcb481c8f3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.810267ms Apr 4 14:05:09.393: INFO: Pod "pod-secrets-936e3515-93f5-4318-b74e-b4bcb481c8f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006997084s Apr 4 14:05:11.397: INFO: Pod "pod-secrets-936e3515-93f5-4318-b74e-b4bcb481c8f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011091848s STEP: Saw pod success Apr 4 14:05:11.397: INFO: Pod "pod-secrets-936e3515-93f5-4318-b74e-b4bcb481c8f3" satisfied condition "success or failure" Apr 4 14:05:11.400: INFO: Trying to get logs from node iruya-worker pod pod-secrets-936e3515-93f5-4318-b74e-b4bcb481c8f3 container secret-volume-test: STEP: delete the pod Apr 4 14:05:11.431: INFO: Waiting for pod pod-secrets-936e3515-93f5-4318-b74e-b4bcb481c8f3 to disappear Apr 4 14:05:11.444: INFO: Pod pod-secrets-936e3515-93f5-4318-b74e-b4bcb481c8f3 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:05:11.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7943" for this suite. 
Apr 4 14:05:17.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:05:17.567: INFO: namespace secrets-7943 deletion completed in 6.1202884s • [SLOW TEST:10.283 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:05:17.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 4 14:05:27.756: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5026 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 14:05:27.756: INFO: >>> kubeConfig: /root/.kube/config I0404 14:05:27.798008 6 log.go:172] (0xc000b99ef0) (0xc0031e9540) Create stream I0404 
14:05:27.798050 6 log.go:172] (0xc000b99ef0) (0xc0031e9540) Stream added, broadcasting: 1 I0404 14:05:27.799946 6 log.go:172] (0xc000b99ef0) Reply frame received for 1 I0404 14:05:27.799975 6 log.go:172] (0xc000b99ef0) (0xc0031e95e0) Create stream I0404 14:05:27.799984 6 log.go:172] (0xc000b99ef0) (0xc0031e95e0) Stream added, broadcasting: 3 I0404 14:05:27.800976 6 log.go:172] (0xc000b99ef0) Reply frame received for 3 I0404 14:05:27.801025 6 log.go:172] (0xc000b99ef0) (0xc002aab540) Create stream I0404 14:05:27.801040 6 log.go:172] (0xc000b99ef0) (0xc002aab540) Stream added, broadcasting: 5 I0404 14:05:27.802197 6 log.go:172] (0xc000b99ef0) Reply frame received for 5 I0404 14:05:27.875146 6 log.go:172] (0xc000b99ef0) Data frame received for 5 I0404 14:05:27.875176 6 log.go:172] (0xc002aab540) (5) Data frame handling I0404 14:05:27.875199 6 log.go:172] (0xc000b99ef0) Data frame received for 3 I0404 14:05:27.875246 6 log.go:172] (0xc0031e95e0) (3) Data frame handling I0404 14:05:27.875289 6 log.go:172] (0xc0031e95e0) (3) Data frame sent I0404 14:05:27.875305 6 log.go:172] (0xc000b99ef0) Data frame received for 3 I0404 14:05:27.875322 6 log.go:172] (0xc0031e95e0) (3) Data frame handling I0404 14:05:27.877425 6 log.go:172] (0xc000b99ef0) Data frame received for 1 I0404 14:05:27.877463 6 log.go:172] (0xc0031e9540) (1) Data frame handling I0404 14:05:27.877487 6 log.go:172] (0xc0031e9540) (1) Data frame sent I0404 14:05:27.877531 6 log.go:172] (0xc000b99ef0) (0xc0031e9540) Stream removed, broadcasting: 1 I0404 14:05:27.877557 6 log.go:172] (0xc000b99ef0) Go away received I0404 14:05:27.877682 6 log.go:172] (0xc000b99ef0) (0xc0031e9540) Stream removed, broadcasting: 1 I0404 14:05:27.877706 6 log.go:172] (0xc000b99ef0) (0xc0031e95e0) Stream removed, broadcasting: 3 I0404 14:05:27.877742 6 log.go:172] (0xc000b99ef0) (0xc002aab540) Stream removed, broadcasting: 5 Apr 4 14:05:27.877: INFO: Exec stderr: "" Apr 4 14:05:27.877: INFO: ExecWithOptions {Command:[cat 
/etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5026 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 14:05:27.877: INFO: >>> kubeConfig: /root/.kube/config I0404 14:05:27.915413 6 log.go:172] (0xc002a5b550) (0xc002aab860) Create stream I0404 14:05:27.915444 6 log.go:172] (0xc002a5b550) (0xc002aab860) Stream added, broadcasting: 1 I0404 14:05:27.917721 6 log.go:172] (0xc002a5b550) Reply frame received for 1 I0404 14:05:27.917813 6 log.go:172] (0xc002a5b550) (0xc0012f59a0) Create stream I0404 14:05:27.917833 6 log.go:172] (0xc002a5b550) (0xc0012f59a0) Stream added, broadcasting: 3 I0404 14:05:27.918952 6 log.go:172] (0xc002a5b550) Reply frame received for 3 I0404 14:05:27.918995 6 log.go:172] (0xc002a5b550) (0xc0012f5ae0) Create stream I0404 14:05:27.919010 6 log.go:172] (0xc002a5b550) (0xc0012f5ae0) Stream added, broadcasting: 5 I0404 14:05:27.920213 6 log.go:172] (0xc002a5b550) Reply frame received for 5 I0404 14:05:27.988722 6 log.go:172] (0xc002a5b550) Data frame received for 3 I0404 14:05:27.988777 6 log.go:172] (0xc0012f59a0) (3) Data frame handling I0404 14:05:27.988814 6 log.go:172] (0xc0012f59a0) (3) Data frame sent I0404 14:05:27.988886 6 log.go:172] (0xc002a5b550) Data frame received for 5 I0404 14:05:27.988908 6 log.go:172] (0xc0012f5ae0) (5) Data frame handling I0404 14:05:27.988951 6 log.go:172] (0xc002a5b550) Data frame received for 3 I0404 14:05:27.988997 6 log.go:172] (0xc0012f59a0) (3) Data frame handling I0404 14:05:27.990452 6 log.go:172] (0xc002a5b550) Data frame received for 1 I0404 14:05:27.990471 6 log.go:172] (0xc002aab860) (1) Data frame handling I0404 14:05:27.990487 6 log.go:172] (0xc002aab860) (1) Data frame sent I0404 14:05:27.990498 6 log.go:172] (0xc002a5b550) (0xc002aab860) Stream removed, broadcasting: 1 I0404 14:05:27.990560 6 log.go:172] (0xc002a5b550) (0xc002aab860) Stream removed, broadcasting: 1 I0404 14:05:27.990579 6 log.go:172] (0xc002a5b550) Go away 
received I0404 14:05:27.990653 6 log.go:172] (0xc002a5b550) (0xc0012f59a0) Stream removed, broadcasting: 3 I0404 14:05:27.990693 6 log.go:172] (0xc002a5b550) (0xc0012f5ae0) Stream removed, broadcasting: 5 Apr 4 14:05:27.990: INFO: Exec stderr: "" Apr 4 14:05:27.990: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5026 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 14:05:27.990: INFO: >>> kubeConfig: /root/.kube/config I0404 14:05:28.023044 6 log.go:172] (0xc000d98a50) (0xc00307ff40) Create stream I0404 14:05:28.023073 6 log.go:172] (0xc000d98a50) (0xc00307ff40) Stream added, broadcasting: 1 I0404 14:05:28.025794 6 log.go:172] (0xc000d98a50) Reply frame received for 1 I0404 14:05:28.025847 6 log.go:172] (0xc000d98a50) (0xc002d70000) Create stream I0404 14:05:28.025863 6 log.go:172] (0xc000d98a50) (0xc002d70000) Stream added, broadcasting: 3 I0404 14:05:28.026967 6 log.go:172] (0xc000d98a50) Reply frame received for 3 I0404 14:05:28.026990 6 log.go:172] (0xc000d98a50) (0xc002c15900) Create stream I0404 14:05:28.026996 6 log.go:172] (0xc000d98a50) (0xc002c15900) Stream added, broadcasting: 5 I0404 14:05:28.027965 6 log.go:172] (0xc000d98a50) Reply frame received for 5 I0404 14:05:28.087473 6 log.go:172] (0xc000d98a50) Data frame received for 5 I0404 14:05:28.087513 6 log.go:172] (0xc002c15900) (5) Data frame handling I0404 14:05:28.087534 6 log.go:172] (0xc000d98a50) Data frame received for 3 I0404 14:05:28.087545 6 log.go:172] (0xc002d70000) (3) Data frame handling I0404 14:05:28.087553 6 log.go:172] (0xc002d70000) (3) Data frame sent I0404 14:05:28.087561 6 log.go:172] (0xc000d98a50) Data frame received for 3 I0404 14:05:28.087568 6 log.go:172] (0xc002d70000) (3) Data frame handling I0404 14:05:28.089559 6 log.go:172] (0xc000d98a50) Data frame received for 1 I0404 14:05:28.089594 6 log.go:172] (0xc00307ff40) (1) Data frame handling I0404 14:05:28.089618 6 
log.go:172] (0xc00307ff40) (1) Data frame sent I0404 14:05:28.089752 6 log.go:172] (0xc000d98a50) (0xc00307ff40) Stream removed, broadcasting: 1 I0404 14:05:28.089836 6 log.go:172] (0xc000d98a50) Go away received I0404 14:05:28.089892 6 log.go:172] (0xc000d98a50) (0xc00307ff40) Stream removed, broadcasting: 1 I0404 14:05:28.089937 6 log.go:172] (0xc000d98a50) (0xc002d70000) Stream removed, broadcasting: 3 I0404 14:05:28.089957 6 log.go:172] (0xc000d98a50) (0xc002c15900) Stream removed, broadcasting: 5 Apr 4 14:05:28.089: INFO: Exec stderr: "" Apr 4 14:05:28.090: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5026 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 14:05:28.090: INFO: >>> kubeConfig: /root/.kube/config I0404 14:05:28.121419 6 log.go:172] (0xc00282d550) (0xc0031e9900) Create stream I0404 14:05:28.121454 6 log.go:172] (0xc00282d550) (0xc0031e9900) Stream added, broadcasting: 1 I0404 14:05:28.124072 6 log.go:172] (0xc00282d550) Reply frame received for 1 I0404 14:05:28.124110 6 log.go:172] (0xc00282d550) (0xc0031e99a0) Create stream I0404 14:05:28.124129 6 log.go:172] (0xc00282d550) (0xc0031e99a0) Stream added, broadcasting: 3 I0404 14:05:28.125058 6 log.go:172] (0xc00282d550) Reply frame received for 3 I0404 14:05:28.125100 6 log.go:172] (0xc00282d550) (0xc002c159a0) Create stream I0404 14:05:28.125247 6 log.go:172] (0xc00282d550) (0xc002c159a0) Stream added, broadcasting: 5 I0404 14:05:28.126153 6 log.go:172] (0xc00282d550) Reply frame received for 5 I0404 14:05:28.176464 6 log.go:172] (0xc00282d550) Data frame received for 5 I0404 14:05:28.176505 6 log.go:172] (0xc002c159a0) (5) Data frame handling I0404 14:05:28.176526 6 log.go:172] (0xc00282d550) Data frame received for 3 I0404 14:05:28.176533 6 log.go:172] (0xc0031e99a0) (3) Data frame handling I0404 14:05:28.176541 6 log.go:172] (0xc0031e99a0) (3) Data frame sent I0404 14:05:28.176548 6 
log.go:172] (0xc00282d550) Data frame received for 3 I0404 14:05:28.176565 6 log.go:172] (0xc0031e99a0) (3) Data frame handling I0404 14:05:28.178289 6 log.go:172] (0xc00282d550) Data frame received for 1 I0404 14:05:28.178364 6 log.go:172] (0xc0031e9900) (1) Data frame handling I0404 14:05:28.178396 6 log.go:172] (0xc0031e9900) (1) Data frame sent I0404 14:05:28.178417 6 log.go:172] (0xc00282d550) (0xc0031e9900) Stream removed, broadcasting: 1 I0404 14:05:28.178436 6 log.go:172] (0xc00282d550) Go away received I0404 14:05:28.178620 6 log.go:172] (0xc00282d550) (0xc0031e9900) Stream removed, broadcasting: 1 I0404 14:05:28.178665 6 log.go:172] (0xc00282d550) (0xc0031e99a0) Stream removed, broadcasting: 3 I0404 14:05:28.178686 6 log.go:172] (0xc00282d550) (0xc002c159a0) Stream removed, broadcasting: 5 Apr 4 14:05:28.178: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 4 14:05:28.178: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5026 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 14:05:28.178: INFO: >>> kubeConfig: /root/.kube/config I0404 14:05:28.214773 6 log.go:172] (0xc002a5bef0) (0xc002aabb80) Create stream I0404 14:05:28.214799 6 log.go:172] (0xc002a5bef0) (0xc002aabb80) Stream added, broadcasting: 1 I0404 14:05:28.217335 6 log.go:172] (0xc002a5bef0) Reply frame received for 1 I0404 14:05:28.217369 6 log.go:172] (0xc002a5bef0) (0xc002aabc20) Create stream I0404 14:05:28.217379 6 log.go:172] (0xc002a5bef0) (0xc002aabc20) Stream added, broadcasting: 3 I0404 14:05:28.218384 6 log.go:172] (0xc002a5bef0) Reply frame received for 3 I0404 14:05:28.218433 6 log.go:172] (0xc002a5bef0) (0xc002aabcc0) Create stream I0404 14:05:28.218454 6 log.go:172] (0xc002a5bef0) (0xc002aabcc0) Stream added, broadcasting: 5 I0404 14:05:28.219377 6 log.go:172] (0xc002a5bef0) Reply frame received for 5 
I0404 14:05:28.281596 6 log.go:172] (0xc002a5bef0) Data frame received for 5 I0404 14:05:28.281635 6 log.go:172] (0xc002aabcc0) (5) Data frame handling I0404 14:05:28.281663 6 log.go:172] (0xc002a5bef0) Data frame received for 3 I0404 14:05:28.281674 6 log.go:172] (0xc002aabc20) (3) Data frame handling I0404 14:05:28.281685 6 log.go:172] (0xc002aabc20) (3) Data frame sent I0404 14:05:28.281694 6 log.go:172] (0xc002a5bef0) Data frame received for 3 I0404 14:05:28.281702 6 log.go:172] (0xc002aabc20) (3) Data frame handling I0404 14:05:28.283270 6 log.go:172] (0xc002a5bef0) Data frame received for 1 I0404 14:05:28.283297 6 log.go:172] (0xc002aabb80) (1) Data frame handling I0404 14:05:28.283309 6 log.go:172] (0xc002aabb80) (1) Data frame sent I0404 14:05:28.283326 6 log.go:172] (0xc002a5bef0) (0xc002aabb80) Stream removed, broadcasting: 1 I0404 14:05:28.283453 6 log.go:172] (0xc002a5bef0) (0xc002aabb80) Stream removed, broadcasting: 1 I0404 14:05:28.283488 6 log.go:172] (0xc002a5bef0) (0xc002aabc20) Stream removed, broadcasting: 3 I0404 14:05:28.283511 6 log.go:172] (0xc002a5bef0) (0xc002aabcc0) Stream removed, broadcasting: 5 Apr 4 14:05:28.283: INFO: Exec stderr: "" I0404 14:05:28.283567 6 log.go:172] (0xc002a5bef0) Go away received Apr 4 14:05:28.283: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5026 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 14:05:28.283: INFO: >>> kubeConfig: /root/.kube/config I0404 14:05:28.315364 6 log.go:172] (0xc002b8c6e0) (0xc002c15e00) Create stream I0404 14:05:28.315387 6 log.go:172] (0xc002b8c6e0) (0xc002c15e00) Stream added, broadcasting: 1 I0404 14:05:28.319856 6 log.go:172] (0xc002b8c6e0) Reply frame received for 1 I0404 14:05:28.319917 6 log.go:172] (0xc002b8c6e0) (0xc002d700a0) Create stream I0404 14:05:28.319941 6 log.go:172] (0xc002b8c6e0) (0xc002d700a0) Stream added, broadcasting: 3 I0404 14:05:28.321982 6 
log.go:172] (0xc002b8c6e0) Reply frame received for 3 I0404 14:05:28.322022 6 log.go:172] (0xc002b8c6e0) (0xc002d70140) Create stream I0404 14:05:28.322050 6 log.go:172] (0xc002b8c6e0) (0xc002d70140) Stream added, broadcasting: 5 I0404 14:05:28.323715 6 log.go:172] (0xc002b8c6e0) Reply frame received for 5 I0404 14:05:28.388683 6 log.go:172] (0xc002b8c6e0) Data frame received for 3 I0404 14:05:28.388720 6 log.go:172] (0xc002d700a0) (3) Data frame handling I0404 14:05:28.388735 6 log.go:172] (0xc002d700a0) (3) Data frame sent I0404 14:05:28.388745 6 log.go:172] (0xc002b8c6e0) Data frame received for 3 I0404 14:05:28.388765 6 log.go:172] (0xc002d700a0) (3) Data frame handling I0404 14:05:28.388793 6 log.go:172] (0xc002b8c6e0) Data frame received for 5 I0404 14:05:28.388806 6 log.go:172] (0xc002d70140) (5) Data frame handling I0404 14:05:28.390149 6 log.go:172] (0xc002b8c6e0) Data frame received for 1 I0404 14:05:28.390184 6 log.go:172] (0xc002c15e00) (1) Data frame handling I0404 14:05:28.390204 6 log.go:172] (0xc002c15e00) (1) Data frame sent I0404 14:05:28.390225 6 log.go:172] (0xc002b8c6e0) (0xc002c15e00) Stream removed, broadcasting: 1 I0404 14:05:28.390255 6 log.go:172] (0xc002b8c6e0) Go away received I0404 14:05:28.390396 6 log.go:172] (0xc002b8c6e0) (0xc002c15e00) Stream removed, broadcasting: 1 I0404 14:05:28.390434 6 log.go:172] (0xc002b8c6e0) (0xc002d700a0) Stream removed, broadcasting: 3 I0404 14:05:28.390460 6 log.go:172] (0xc002b8c6e0) (0xc002d70140) Stream removed, broadcasting: 5 Apr 4 14:05:28.390: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 4 14:05:28.390: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5026 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 14:05:28.390: INFO: >>> kubeConfig: /root/.kube/config I0404 14:05:28.427846 6 log.go:172] 
(0xc000d99e40) (0xc002d70500) Create stream I0404 14:05:28.427878 6 log.go:172] (0xc000d99e40) (0xc002d70500) Stream added, broadcasting: 1 I0404 14:05:28.434199 6 log.go:172] (0xc000d99e40) Reply frame received for 1 I0404 14:05:28.434269 6 log.go:172] (0xc000d99e40) (0xc002aabd60) Create stream I0404 14:05:28.434305 6 log.go:172] (0xc000d99e40) (0xc002aabd60) Stream added, broadcasting: 3 I0404 14:05:28.436477 6 log.go:172] (0xc000d99e40) Reply frame received for 3 I0404 14:05:28.436528 6 log.go:172] (0xc000d99e40) (0xc0031e9a40) Create stream I0404 14:05:28.436549 6 log.go:172] (0xc000d99e40) (0xc0031e9a40) Stream added, broadcasting: 5 I0404 14:05:28.438960 6 log.go:172] (0xc000d99e40) Reply frame received for 5 I0404 14:05:28.483486 6 log.go:172] (0xc000d99e40) Data frame received for 3 I0404 14:05:28.483531 6 log.go:172] (0xc002aabd60) (3) Data frame handling I0404 14:05:28.483552 6 log.go:172] (0xc002aabd60) (3) Data frame sent I0404 14:05:28.483601 6 log.go:172] (0xc000d99e40) Data frame received for 3 I0404 14:05:28.483620 6 log.go:172] (0xc002aabd60) (3) Data frame handling I0404 14:05:28.483655 6 log.go:172] (0xc000d99e40) Data frame received for 5 I0404 14:05:28.483684 6 log.go:172] (0xc0031e9a40) (5) Data frame handling I0404 14:05:28.485323 6 log.go:172] (0xc000d99e40) Data frame received for 1 I0404 14:05:28.485405 6 log.go:172] (0xc002d70500) (1) Data frame handling I0404 14:05:28.485426 6 log.go:172] (0xc002d70500) (1) Data frame sent I0404 14:05:28.485446 6 log.go:172] (0xc000d99e40) (0xc002d70500) Stream removed, broadcasting: 1 I0404 14:05:28.485481 6 log.go:172] (0xc000d99e40) Go away received I0404 14:05:28.485567 6 log.go:172] (0xc000d99e40) (0xc002d70500) Stream removed, broadcasting: 1 I0404 14:05:28.485592 6 log.go:172] (0xc000d99e40) (0xc002aabd60) Stream removed, broadcasting: 3 I0404 14:05:28.485613 6 log.go:172] (0xc000d99e40) (0xc0031e9a40) Stream removed, broadcasting: 5 Apr 4 14:05:28.485: INFO: Exec stderr: "" Apr 4 14:05:28.485: 
INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5026 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 14:05:28.485: INFO: >>> kubeConfig: /root/.kube/config I0404 14:05:28.521349 6 log.go:172] (0xc002b8d810) (0xc000212820) Create stream I0404 14:05:28.521378 6 log.go:172] (0xc002b8d810) (0xc000212820) Stream added, broadcasting: 1 I0404 14:05:28.523701 6 log.go:172] (0xc002b8d810) Reply frame received for 1 I0404 14:05:28.523749 6 log.go:172] (0xc002b8d810) (0xc002aabe00) Create stream I0404 14:05:28.523765 6 log.go:172] (0xc002b8d810) (0xc002aabe00) Stream added, broadcasting: 3 I0404 14:05:28.524745 6 log.go:172] (0xc002b8d810) Reply frame received for 3 I0404 14:05:28.524803 6 log.go:172] (0xc002b8d810) (0xc0031e9ae0) Create stream I0404 14:05:28.524820 6 log.go:172] (0xc002b8d810) (0xc0031e9ae0) Stream added, broadcasting: 5 I0404 14:05:28.526126 6 log.go:172] (0xc002b8d810) Reply frame received for 5 I0404 14:05:28.592823 6 log.go:172] (0xc002b8d810) Data frame received for 3 I0404 14:05:28.592842 6 log.go:172] (0xc002aabe00) (3) Data frame handling I0404 14:05:28.592870 6 log.go:172] (0xc002b8d810) Data frame received for 5 I0404 14:05:28.592921 6 log.go:172] (0xc0031e9ae0) (5) Data frame handling I0404 14:05:28.592951 6 log.go:172] (0xc002aabe00) (3) Data frame sent I0404 14:05:28.592966 6 log.go:172] (0xc002b8d810) Data frame received for 3 I0404 14:05:28.592978 6 log.go:172] (0xc002aabe00) (3) Data frame handling I0404 14:05:28.594611 6 log.go:172] (0xc002b8d810) Data frame received for 1 I0404 14:05:28.594636 6 log.go:172] (0xc000212820) (1) Data frame handling I0404 14:05:28.594650 6 log.go:172] (0xc000212820) (1) Data frame sent I0404 14:05:28.594674 6 log.go:172] (0xc002b8d810) (0xc000212820) Stream removed, broadcasting: 1 I0404 14:05:28.594707 6 log.go:172] (0xc002b8d810) Go away received I0404 14:05:28.594819 6 log.go:172] 
(0xc002b8d810) (0xc000212820) Stream removed, broadcasting: 1 I0404 14:05:28.594838 6 log.go:172] (0xc002b8d810) (0xc002aabe00) Stream removed, broadcasting: 3 I0404 14:05:28.594848 6 log.go:172] (0xc002b8d810) (0xc0031e9ae0) Stream removed, broadcasting: 5 Apr 4 14:05:28.594: INFO: Exec stderr: "" Apr 4 14:05:28.594: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5026 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 14:05:28.594: INFO: >>> kubeConfig: /root/.kube/config I0404 14:05:28.626218 6 log.go:172] (0xc003133290) (0xc0031e9e00) Create stream I0404 14:05:28.626234 6 log.go:172] (0xc003133290) (0xc0031e9e00) Stream added, broadcasting: 1 I0404 14:05:28.628070 6 log.go:172] (0xc003133290) Reply frame received for 1 I0404 14:05:28.628122 6 log.go:172] (0xc003133290) (0xc002aabea0) Create stream I0404 14:05:28.628141 6 log.go:172] (0xc003133290) (0xc002aabea0) Stream added, broadcasting: 3 I0404 14:05:28.629010 6 log.go:172] (0xc003133290) Reply frame received for 3 I0404 14:05:28.629031 6 log.go:172] (0xc003133290) (0xc0031e9ea0) Create stream I0404 14:05:28.629037 6 log.go:172] (0xc003133290) (0xc0031e9ea0) Stream added, broadcasting: 5 I0404 14:05:28.630081 6 log.go:172] (0xc003133290) Reply frame received for 5 I0404 14:05:28.682765 6 log.go:172] (0xc003133290) Data frame received for 5 I0404 14:05:28.682797 6 log.go:172] (0xc003133290) Data frame received for 3 I0404 14:05:28.682813 6 log.go:172] (0xc002aabea0) (3) Data frame handling I0404 14:05:28.682823 6 log.go:172] (0xc002aabea0) (3) Data frame sent I0404 14:05:28.682829 6 log.go:172] (0xc003133290) Data frame received for 3 I0404 14:05:28.682845 6 log.go:172] (0xc002aabea0) (3) Data frame handling I0404 14:05:28.682892 6 log.go:172] (0xc0031e9ea0) (5) Data frame handling I0404 14:05:28.684240 6 log.go:172] (0xc003133290) Data frame received for 1 I0404 14:05:28.684255 6 log.go:172] 
(0xc0031e9e00) (1) Data frame handling I0404 14:05:28.684272 6 log.go:172] (0xc0031e9e00) (1) Data frame sent I0404 14:05:28.684287 6 log.go:172] (0xc003133290) (0xc0031e9e00) Stream removed, broadcasting: 1 I0404 14:05:28.684296 6 log.go:172] (0xc003133290) Go away received I0404 14:05:28.684495 6 log.go:172] (0xc003133290) (0xc0031e9e00) Stream removed, broadcasting: 1 I0404 14:05:28.684527 6 log.go:172] (0xc003133290) (0xc002aabea0) Stream removed, broadcasting: 3 I0404 14:05:28.684551 6 log.go:172] (0xc003133290) (0xc0031e9ea0) Stream removed, broadcasting: 5 Apr 4 14:05:28.684: INFO: Exec stderr: "" Apr 4 14:05:28.684: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5026 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 14:05:28.684: INFO: >>> kubeConfig: /root/.kube/config I0404 14:05:28.714980 6 log.go:172] (0xc002a716b0) (0xc0026481e0) Create stream I0404 14:05:28.715018 6 log.go:172] (0xc002a716b0) (0xc0026481e0) Stream added, broadcasting: 1 I0404 14:05:28.718190 6 log.go:172] (0xc002a716b0) Reply frame received for 1 I0404 14:05:28.718256 6 log.go:172] (0xc002a716b0) (0xc0002128c0) Create stream I0404 14:05:28.718274 6 log.go:172] (0xc002a716b0) (0xc0002128c0) Stream added, broadcasting: 3 I0404 14:05:28.719379 6 log.go:172] (0xc002a716b0) Reply frame received for 3 I0404 14:05:28.719413 6 log.go:172] (0xc002a716b0) (0xc000212b40) Create stream I0404 14:05:28.719423 6 log.go:172] (0xc002a716b0) (0xc000212b40) Stream added, broadcasting: 5 I0404 14:05:28.720479 6 log.go:172] (0xc002a716b0) Reply frame received for 5 I0404 14:05:28.782958 6 log.go:172] (0xc002a716b0) Data frame received for 3 I0404 14:05:28.782988 6 log.go:172] (0xc0002128c0) (3) Data frame handling I0404 14:05:28.783010 6 log.go:172] (0xc002a716b0) Data frame received for 5 I0404 14:05:28.783047 6 log.go:172] (0xc000212b40) (5) Data frame handling I0404 14:05:28.783077 
6 log.go:172] (0xc0002128c0) (3) Data frame sent I0404 14:05:28.783100 6 log.go:172] (0xc002a716b0) Data frame received for 3 I0404 14:05:28.783121 6 log.go:172] (0xc0002128c0) (3) Data frame handling I0404 14:05:28.784686 6 log.go:172] (0xc002a716b0) Data frame received for 1 I0404 14:05:28.784713 6 log.go:172] (0xc0026481e0) (1) Data frame handling I0404 14:05:28.784727 6 log.go:172] (0xc0026481e0) (1) Data frame sent I0404 14:05:28.784749 6 log.go:172] (0xc002a716b0) (0xc0026481e0) Stream removed, broadcasting: 1 I0404 14:05:28.784772 6 log.go:172] (0xc002a716b0) Go away received I0404 14:05:28.784910 6 log.go:172] (0xc002a716b0) (0xc0026481e0) Stream removed, broadcasting: 1 I0404 14:05:28.784936 6 log.go:172] (0xc002a716b0) (0xc0002128c0) Stream removed, broadcasting: 3 I0404 14:05:28.784959 6 log.go:172] (0xc002a716b0) (0xc000212b40) Stream removed, broadcasting: 5 Apr 4 14:05:28.784: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:05:28.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-5026" for this suite. 
Apr 4 14:06:14.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:06:14.880: INFO: namespace e2e-kubelet-etc-hosts-5026 deletion completed in 46.090416051s • [SLOW TEST:57.312 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:06:14.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:06:20.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4577" for this suite. 
Apr 4 14:06:42.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:06:42.206: INFO: namespace replication-controller-4577 deletion completed in 22.125948126s
• [SLOW TEST:27.326 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:06:42.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-c018ff8e-d661-4ae0-80dc-e626e289f16b
STEP: Creating a pod to test consume configMaps
Apr 4 14:06:42.296: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d3c17d44-c301-4402-9385-c289c53aca54" in namespace "projected-1779" to be "success or failure"
Apr 4 14:06:42.302: INFO: Pod "pod-projected-configmaps-d3c17d44-c301-4402-9385-c289c53aca54": Phase="Pending", Reason="", readiness=false. Elapsed: 5.685889ms
Apr 4 14:06:44.307: INFO: Pod "pod-projected-configmaps-d3c17d44-c301-4402-9385-c289c53aca54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010326575s
Apr 4 14:06:46.324: INFO: Pod "pod-projected-configmaps-d3c17d44-c301-4402-9385-c289c53aca54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02788281s
STEP: Saw pod success
Apr 4 14:06:46.324: INFO: Pod "pod-projected-configmaps-d3c17d44-c301-4402-9385-c289c53aca54" satisfied condition "success or failure"
Apr 4 14:06:46.327: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-d3c17d44-c301-4402-9385-c289c53aca54 container projected-configmap-volume-test:
STEP: delete the pod
Apr 4 14:06:46.369: INFO: Waiting for pod pod-projected-configmaps-d3c17d44-c301-4402-9385-c289c53aca54 to disappear
Apr 4 14:06:46.379: INFO: Pod pod-projected-configmaps-d3c17d44-c301-4402-9385-c289c53aca54 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:06:46.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1779" for this suite.
Apr 4 14:06:52.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:06:52.489: INFO: namespace projected-1779 deletion completed in 6.107859699s
• [SLOW TEST:10.282 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:06:52.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 4 14:06:52.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:06:56.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2510" for this suite.
Apr 4 14:07:46.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:07:46.770: INFO: namespace pods-2510 deletion completed in 50.092225509s
• [SLOW TEST:54.279 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:07:46.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-b5dbd776-402d-4f14-bf50-b495a0f4779a
STEP: Creating configMap with name cm-test-opt-upd-5b3d5c68-c650-4bae-b0c6-b7ac7f30ab68
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-b5dbd776-402d-4f14-bf50-b495a0f4779a
STEP: Updating configmap cm-test-opt-upd-5b3d5c68-c650-4bae-b0c6-b7ac7f30ab68
STEP: Creating configMap with name cm-test-opt-create-345569f1-51d8-40e0-a012-3b1040ac53f4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:09:05.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6850" for this suite.
Apr 4 14:09:27.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:09:27.316: INFO: namespace projected-6850 deletion completed in 22.085361942s
• [SLOW TEST:100.547 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:09:27.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-f652f425-ea47-4c22-83dd-c11c91121fac
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:09:27.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8349" for this suite.
Apr 4 14:09:33.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:09:33.456: INFO: namespace configmap-8349 deletion completed in 6.101366828s
• [SLOW TEST:6.139 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:09:33.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 4 14:09:33.518: INFO: Waiting up to 5m0s for pod "downward-api-fbabd930-9c15-438b-a95c-8e5aa8c3cd0a" in namespace "downward-api-1734" to be "success or failure"
Apr 4 14:09:33.522: INFO: Pod "downward-api-fbabd930-9c15-438b-a95c-8e5aa8c3cd0a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.930093ms
Apr 4 14:09:35.551: INFO: Pod "downward-api-fbabd930-9c15-438b-a95c-8e5aa8c3cd0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032883976s
Apr 4 14:09:37.556: INFO: Pod "downward-api-fbabd930-9c15-438b-a95c-8e5aa8c3cd0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037466717s
STEP: Saw pod success
Apr 4 14:09:37.556: INFO: Pod "downward-api-fbabd930-9c15-438b-a95c-8e5aa8c3cd0a" satisfied condition "success or failure"
Apr 4 14:09:37.559: INFO: Trying to get logs from node iruya-worker2 pod downward-api-fbabd930-9c15-438b-a95c-8e5aa8c3cd0a container dapi-container:
STEP: delete the pod
Apr 4 14:09:37.597: INFO: Waiting for pod downward-api-fbabd930-9c15-438b-a95c-8e5aa8c3cd0a to disappear
Apr 4 14:09:37.626: INFO: Pod downward-api-fbabd930-9c15-438b-a95c-8e5aa8c3cd0a no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:09:37.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1734" for this suite.
Apr 4 14:09:43.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:09:43.714: INFO: namespace downward-api-1734 deletion completed in 6.085833355s
• [SLOW TEST:10.257 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:09:43.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 4 14:09:47.862: INFO: Waiting up to 5m0s for pod "client-envvars-6baa7f4d-f13c-4fce-a252-f3b5fbbd65e2" in namespace "pods-8533" to be "success or failure"
Apr 4 14:09:47.934: INFO: Pod "client-envvars-6baa7f4d-f13c-4fce-a252-f3b5fbbd65e2": Phase="Pending", Reason="", readiness=false. Elapsed: 72.641855ms
Apr 4 14:09:49.965: INFO: Pod "client-envvars-6baa7f4d-f13c-4fce-a252-f3b5fbbd65e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103680256s
Apr 4 14:09:51.970: INFO: Pod "client-envvars-6baa7f4d-f13c-4fce-a252-f3b5fbbd65e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.107924589s
STEP: Saw pod success
Apr 4 14:09:51.970: INFO: Pod "client-envvars-6baa7f4d-f13c-4fce-a252-f3b5fbbd65e2" satisfied condition "success or failure"
Apr 4 14:09:51.973: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-6baa7f4d-f13c-4fce-a252-f3b5fbbd65e2 container env3cont:
STEP: delete the pod
Apr 4 14:09:51.994: INFO: Waiting for pod client-envvars-6baa7f4d-f13c-4fce-a252-f3b5fbbd65e2 to disappear
Apr 4 14:09:51.999: INFO: Pod client-envvars-6baa7f4d-f13c-4fce-a252-f3b5fbbd65e2 no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:09:51.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8533" for this suite.
Apr 4 14:10:34.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:10:34.146: INFO: namespace pods-8533 deletion completed in 42.144786398s
• [SLOW TEST:50.432 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:10:34.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 4 14:10:34.189: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:10:35.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1358" for this suite.
Apr 4 14:10:41.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:10:41.413: INFO: namespace custom-resource-definition-1358 deletion completed in 6.098140696s
• [SLOW TEST:7.266 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Simple CustomResourceDefinition
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:10:41.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0404 14:11:12.047094 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 4 14:11:12.047: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:11:12.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4081" for this suite.
Apr 4 14:11:18.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:11:18.160: INFO: namespace gc-4081 deletion completed in 6.109611673s
• [SLOW TEST:36.746 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:11:18.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 4 14:11:18.227: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a55b2719-6519-46b8-93f6-37995973ae1b" in namespace "downward-api-6329" to be "success or failure"
Apr 4 14:11:18.230: INFO: Pod "downwardapi-volume-a55b2719-6519-46b8-93f6-37995973ae1b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.310862ms
Apr 4 14:11:20.234: INFO: Pod "downwardapi-volume-a55b2719-6519-46b8-93f6-37995973ae1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00728478s
Apr 4 14:11:22.239: INFO: Pod "downwardapi-volume-a55b2719-6519-46b8-93f6-37995973ae1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012314814s
STEP: Saw pod success
Apr 4 14:11:22.239: INFO: Pod "downwardapi-volume-a55b2719-6519-46b8-93f6-37995973ae1b" satisfied condition "success or failure"
Apr 4 14:11:22.243: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-a55b2719-6519-46b8-93f6-37995973ae1b container client-container:
STEP: delete the pod
Apr 4 14:11:22.256: INFO: Waiting for pod downwardapi-volume-a55b2719-6519-46b8-93f6-37995973ae1b to disappear
Apr 4 14:11:22.260: INFO: Pod downwardapi-volume-a55b2719-6519-46b8-93f6-37995973ae1b no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:11:22.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6329" for this suite.
Apr 4 14:11:28.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:11:28.427: INFO: namespace downward-api-6329 deletion completed in 6.164657088s
• [SLOW TEST:10.268 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:11:28.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Apr 4 14:11:28.486: INFO: namespace kubectl-3820
Apr 4 14:11:28.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3820'
Apr 4 14:11:31.137: INFO: stderr: ""
Apr 4 14:11:31.137: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Apr 4 14:11:32.142: INFO: Selector matched 1 pods for map[app:redis]
Apr 4 14:11:32.142: INFO: Found 0 / 1
Apr 4 14:11:33.142: INFO: Selector matched 1 pods for map[app:redis]
Apr 4 14:11:33.142: INFO: Found 0 / 1
Apr 4 14:11:34.140: INFO: Selector matched 1 pods for map[app:redis]
Apr 4 14:11:34.141: INFO: Found 1 / 1
Apr 4 14:11:34.141: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 4 14:11:34.144: INFO: Selector matched 1 pods for map[app:redis]
Apr 4 14:11:34.144: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 4 14:11:34.144: INFO: wait on redis-master startup in kubectl-3820
Apr 4 14:11:34.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ftcvb redis-master --namespace=kubectl-3820'
Apr 4 14:11:34.247: INFO: stderr: ""
Apr 4 14:11:34.248: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 04 Apr 14:11:33.529 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 04 Apr 14:11:33.529 # Server started, Redis version 3.2.12\n1:M 04 Apr 14:11:33.529 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 04 Apr 14:11:33.529 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Apr 4 14:11:34.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3820'
Apr 4 14:11:34.376: INFO: stderr: ""
Apr 4 14:11:34.376: INFO: stdout: "service/rm2 exposed\n"
Apr 4 14:11:34.388: INFO: Service rm2 in namespace kubectl-3820 found.
STEP: exposing service
Apr 4 14:11:36.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3820'
Apr 4 14:11:36.528: INFO: stderr: ""
Apr 4 14:11:36.528: INFO: stdout: "service/rm3 exposed\n"
Apr 4 14:11:36.534: INFO: Service rm3 in namespace kubectl-3820 found.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:11:38.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3820" for this suite.
Apr 4 14:12:00.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:12:00.644: INFO: namespace kubectl-3820 deletion completed in 22.098228543s
• [SLOW TEST:32.216 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl expose
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:12:00.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-04df4133-3d80-42fa-b4dd-4a1f617707d1
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:12:04.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2443" for this suite.
Apr 4 14:12:26.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:12:26.851: INFO: namespace configmap-2443 deletion completed in 22.099837904s
• [SLOW TEST:26.207 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:12:26.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 4 14:12:26.935: INFO: Waiting up to 5m0s for pod "downwardapi-volume-36f07e1b-aee3-43fb-af2d-1cb22afeda72" in namespace "downward-api-1195" to be "success or failure"
Apr 4 14:12:26.952: INFO: Pod "downwardapi-volume-36f07e1b-aee3-43fb-af2d-1cb22afeda72": Phase="Pending", Reason="", readiness=false. Elapsed: 17.29367ms
Apr 4 14:12:28.956: INFO: Pod "downwardapi-volume-36f07e1b-aee3-43fb-af2d-1cb22afeda72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020994554s
Apr 4 14:12:30.960: INFO: Pod "downwardapi-volume-36f07e1b-aee3-43fb-af2d-1cb22afeda72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02530388s
STEP: Saw pod success
Apr 4 14:12:30.961: INFO: Pod "downwardapi-volume-36f07e1b-aee3-43fb-af2d-1cb22afeda72" satisfied condition "success or failure"
Apr 4 14:12:30.964: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-36f07e1b-aee3-43fb-af2d-1cb22afeda72 container client-container:
STEP: delete the pod
Apr 4 14:12:30.996: INFO: Waiting for pod downwardapi-volume-36f07e1b-aee3-43fb-af2d-1cb22afeda72 to disappear
Apr 4 14:12:31.005: INFO: Pod downwardapi-volume-36f07e1b-aee3-43fb-af2d-1cb22afeda72 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:12:31.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1195" for this suite.
Apr 4 14:12:37.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:12:37.128: INFO: namespace downward-api-1195 deletion completed in 6.118948579s
• [SLOW TEST:10.276 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:12:37.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 4 14:12:37.177: INFO: Waiting up to 5m0s for pod "downward-api-d281e005-b9e2-42ce-8ce3-5779d8c48061" in namespace "downward-api-1557" to be "success or failure"
Apr 4 14:12:37.194: INFO: Pod "downward-api-d281e005-b9e2-42ce-8ce3-5779d8c48061": Phase="Pending", Reason="", readiness=false. Elapsed: 16.551605ms
Apr 4 14:12:39.199: INFO: Pod "downward-api-d281e005-b9e2-42ce-8ce3-5779d8c48061": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021742976s
Apr 4 14:12:41.204: INFO: Pod "downward-api-d281e005-b9e2-42ce-8ce3-5779d8c48061": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026343491s
STEP: Saw pod success
Apr 4 14:12:41.204: INFO: Pod "downward-api-d281e005-b9e2-42ce-8ce3-5779d8c48061" satisfied condition "success or failure"
Apr 4 14:12:41.207: INFO: Trying to get logs from node iruya-worker pod downward-api-d281e005-b9e2-42ce-8ce3-5779d8c48061 container dapi-container: 
STEP: delete the pod
Apr 4 14:12:41.237: INFO: Waiting for pod downward-api-d281e005-b9e2-42ce-8ce3-5779d8c48061 to disappear
Apr 4 14:12:41.241: INFO: Pod downward-api-d281e005-b9e2-42ce-8ce3-5779d8c48061 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:12:41.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1557" for this suite.
Apr 4 14:12:47.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:12:47.344: INFO: namespace downward-api-1557 deletion completed in 6.092812507s
• [SLOW TEST:10.216 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:12:47.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Apr 4 14:12:47.381: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Apr 4 14:12:47.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7367'
Apr 4 14:12:47.684: INFO: stderr: ""
Apr 4 14:12:47.684: INFO: stdout: "service/redis-slave created\n"
Apr 4 14:12:47.684: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Apr 4 14:12:47.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7367'
Apr 4 14:12:47.956: INFO: stderr: ""
Apr 4 14:12:47.956: INFO: stdout: "service/redis-master created\n"
Apr 4 14:12:47.956: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Apr 4 14:12:47.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7367'
Apr 4 14:12:48.296: INFO: stderr: ""
Apr 4 14:12:48.296: INFO: stdout: "service/frontend created\n"
Apr 4 14:12:48.296: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Apr 4 14:12:48.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7367'
Apr 4 14:12:48.559: INFO: stderr: ""
Apr 4 14:12:48.559: INFO: stdout: "deployment.apps/frontend created\n"
Apr 4 14:12:48.559: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 4 14:12:48.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7367'
Apr 4 14:12:48.853: INFO: stderr: ""
Apr 4 14:12:48.853: INFO: stdout: "deployment.apps/redis-master created\n"
Apr 4 14:12:48.853: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Apr 4 14:12:48.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7367'
Apr 4 14:12:49.128: INFO: stderr: ""
Apr 4 14:12:49.129: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Apr 4 14:12:49.129: INFO: Waiting for all frontend pods to be Running.
Apr 4 14:12:59.179: INFO: Waiting for frontend to serve content.
Apr 4 14:12:59.195: INFO: Trying to add a new entry to the guestbook.
Apr 4 14:12:59.214: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Apr 4 14:12:59.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7367'
Apr 4 14:12:59.370: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 4 14:12:59.370: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Apr 4 14:12:59.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7367'
Apr 4 14:12:59.507: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 4 14:12:59.507: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Apr 4 14:12:59.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7367'
Apr 4 14:12:59.662: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 4 14:12:59.662: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 4 14:12:59.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7367'
Apr 4 14:12:59.782: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 4 14:12:59.782: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 4 14:12:59.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7367'
Apr 4 14:12:59.898: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 4 14:12:59.898: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Apr 4 14:12:59.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7367'
Apr 4 14:13:00.071: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 4 14:13:00.071: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:13:00.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7367" for this suite.
Apr 4 14:13:42.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:13:42.165: INFO: namespace kubectl-7367 deletion completed in 42.090044262s
• [SLOW TEST:54.822 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:13:42.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-fbb293eb-1bd0-49dd-ab39-88eb4f86b6fc
STEP: Creating a pod to test consume secrets
Apr 4 14:13:42.310: INFO: Waiting up to 5m0s for pod "pod-secrets-f415b9d7-ec6d-4c9e-b896-241ebe651435" in namespace "secrets-8189" to be "success or failure"
Apr 4 14:13:42.327: INFO: Pod "pod-secrets-f415b9d7-ec6d-4c9e-b896-241ebe651435": Phase="Pending", Reason="", readiness=false. Elapsed: 17.262512ms
Apr 4 14:13:44.331: INFO: Pod "pod-secrets-f415b9d7-ec6d-4c9e-b896-241ebe651435": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021342593s
Apr 4 14:13:46.336: INFO: Pod "pod-secrets-f415b9d7-ec6d-4c9e-b896-241ebe651435": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026058328s
STEP: Saw pod success
Apr 4 14:13:46.336: INFO: Pod "pod-secrets-f415b9d7-ec6d-4c9e-b896-241ebe651435" satisfied condition "success or failure"
Apr 4 14:13:46.339: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-f415b9d7-ec6d-4c9e-b896-241ebe651435 container secret-volume-test: 
STEP: delete the pod
Apr 4 14:13:46.356: INFO: Waiting for pod pod-secrets-f415b9d7-ec6d-4c9e-b896-241ebe651435 to disappear
Apr 4 14:13:46.361: INFO: Pod pod-secrets-f415b9d7-ec6d-4c9e-b896-241ebe651435 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:13:46.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8189" for this suite.
Apr 4 14:13:52.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:13:52.464: INFO: namespace secrets-8189 deletion completed in 6.100233323s
STEP: Destroying namespace "secret-namespace-1748" for this suite.
Apr 4 14:13:58.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:13:58.620: INFO: namespace secret-namespace-1748 deletion completed in 6.155459176s
• [SLOW TEST:16.454 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:13:58.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-2bf29092-d28f-4e57-a84a-6608ecaa819d
STEP: Creating a pod to test consume configMaps
Apr 4 14:13:58.680: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-da51bc51-e3d9-4245-a8e2-a98f6efbb3ad" in namespace "projected-3492" to be "success or failure"
Apr 4 14:13:58.693: INFO: Pod "pod-projected-configmaps-da51bc51-e3d9-4245-a8e2-a98f6efbb3ad": Phase="Pending", Reason="", readiness=false. Elapsed: 12.849442ms
Apr 4 14:14:00.698: INFO: Pod "pod-projected-configmaps-da51bc51-e3d9-4245-a8e2-a98f6efbb3ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017077837s
Apr 4 14:14:02.701: INFO: Pod "pod-projected-configmaps-da51bc51-e3d9-4245-a8e2-a98f6efbb3ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02055718s
STEP: Saw pod success
Apr 4 14:14:02.701: INFO: Pod "pod-projected-configmaps-da51bc51-e3d9-4245-a8e2-a98f6efbb3ad" satisfied condition "success or failure"
Apr 4 14:14:02.703: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-da51bc51-e3d9-4245-a8e2-a98f6efbb3ad container projected-configmap-volume-test: 
STEP: delete the pod
Apr 4 14:14:02.744: INFO: Waiting for pod pod-projected-configmaps-da51bc51-e3d9-4245-a8e2-a98f6efbb3ad to disappear
Apr 4 14:14:02.759: INFO: Pod pod-projected-configmaps-da51bc51-e3d9-4245-a8e2-a98f6efbb3ad no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:14:02.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3492" for this suite.
Apr 4 14:14:08.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:14:08.882: INFO: namespace projected-3492 deletion completed in 6.11995899s
• [SLOW TEST:10.261 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:14:08.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-0530224e-f665-4f4f-a1c2-2800f83cf315
STEP: Creating a pod to test consume secrets
Apr 4 14:14:08.965: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-49635f3e-623b-4315-b2ce-baa948c8a4af" in namespace "projected-3046" to be "success or failure"
Apr 4 14:14:08.983: INFO: Pod "pod-projected-secrets-49635f3e-623b-4315-b2ce-baa948c8a4af": Phase="Pending", Reason="", readiness=false. Elapsed: 18.865276ms
Apr 4 14:14:10.987: INFO: Pod "pod-projected-secrets-49635f3e-623b-4315-b2ce-baa948c8a4af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022893817s
Apr 4 14:14:12.992: INFO: Pod "pod-projected-secrets-49635f3e-623b-4315-b2ce-baa948c8a4af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027059263s
STEP: Saw pod success
Apr 4 14:14:12.992: INFO: Pod "pod-projected-secrets-49635f3e-623b-4315-b2ce-baa948c8a4af" satisfied condition "success or failure"
Apr 4 14:14:12.995: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-49635f3e-623b-4315-b2ce-baa948c8a4af container projected-secret-volume-test: 
STEP: delete the pod
Apr 4 14:14:13.044: INFO: Waiting for pod pod-projected-secrets-49635f3e-623b-4315-b2ce-baa948c8a4af to disappear
Apr 4 14:14:13.047: INFO: Pod pod-projected-secrets-49635f3e-623b-4315-b2ce-baa948c8a4af no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:14:13.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3046" for this suite.
Apr 4 14:14:19.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:14:19.157: INFO: namespace projected-3046 deletion completed in 6.106067606s
• [SLOW TEST:10.275 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:14:19.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 4 14:14:19.198: INFO: Creating ReplicaSet my-hostname-basic-5faa6e67-c989-4cc6-854a-f105b7ecbaf8
Apr 4 14:14:19.232: INFO: Pod name my-hostname-basic-5faa6e67-c989-4cc6-854a-f105b7ecbaf8: Found 0 pods out of 1
Apr 4 14:14:24.236: INFO: Pod name my-hostname-basic-5faa6e67-c989-4cc6-854a-f105b7ecbaf8: Found 1 pods out of 1
Apr 4 14:14:24.236: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-5faa6e67-c989-4cc6-854a-f105b7ecbaf8" is running
Apr 4 14:14:24.239: INFO: Pod "my-hostname-basic-5faa6e67-c989-4cc6-854a-f105b7ecbaf8-lx8s9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-04 14:14:19 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-04 14:14:21 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-04 14:14:21 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-04 14:14:19 +0000 UTC Reason: Message:}])
Apr 4 14:14:24.240: INFO: Trying to dial the pod
Apr 4 14:14:29.250: INFO: Controller my-hostname-basic-5faa6e67-c989-4cc6-854a-f105b7ecbaf8: Got expected result from replica 1 [my-hostname-basic-5faa6e67-c989-4cc6-854a-f105b7ecbaf8-lx8s9]: "my-hostname-basic-5faa6e67-c989-4cc6-854a-f105b7ecbaf8-lx8s9", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:14:29.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4064" for this suite.
Apr 4 14:14:35.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:14:35.372: INFO: namespace replicaset-4064 deletion completed in 6.118222168s
• [SLOW TEST:16.214 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:14:35.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Apr 4 14:14:35.450: INFO: Waiting up to 5m0s for pod "client-containers-48aba5c2-2ee6-4e32-997d-1026d0ad83d6" in namespace "containers-8212" to be "success or failure"
Apr 4 14:14:35.455: INFO: Pod "client-containers-48aba5c2-2ee6-4e32-997d-1026d0ad83d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.560696ms
Apr 4 14:14:37.459: INFO: Pod "client-containers-48aba5c2-2ee6-4e32-997d-1026d0ad83d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00907163s
Apr 4 14:14:39.468: INFO: Pod "client-containers-48aba5c2-2ee6-4e32-997d-1026d0ad83d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018048929s
STEP: Saw pod success
Apr 4 14:14:39.468: INFO: Pod "client-containers-48aba5c2-2ee6-4e32-997d-1026d0ad83d6" satisfied condition "success or failure"
Apr 4 14:14:39.471: INFO: Trying to get logs from node iruya-worker2 pod client-containers-48aba5c2-2ee6-4e32-997d-1026d0ad83d6 container test-container: 
STEP: delete the pod
Apr 4 14:14:39.510: INFO: Waiting for pod client-containers-48aba5c2-2ee6-4e32-997d-1026d0ad83d6 to disappear
Apr 4 14:14:39.520: INFO: Pod client-containers-48aba5c2-2ee6-4e32-997d-1026d0ad83d6 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:14:39.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8212" for this suite.
Apr 4 14:14:45.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:14:45.615: INFO: namespace containers-8212 deletion completed in 6.091783575s
• [SLOW TEST:10.243 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:14:45.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:15:19.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9599" for this suite.
Apr 4 14:15:25.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:15:25.250: INFO: namespace container-runtime-9599 deletion completed in 6.084445845s
• [SLOW TEST:39.634 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:15:25.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Apr 4 14:15:29.398: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Apr 4 14:15:34.495: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:15:34.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8542" for this suite.
Apr 4 14:15:40.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:15:40.601: INFO: namespace pods-8542 deletion completed in 6.097094914s
• [SLOW TEST:15.350 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:15:40.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 4 14:15:40.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3758'
Apr 4 14:15:40.798: INFO: stderr: ""
Apr 4 14:15:40.798: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Apr 4 14:15:40.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3758'
Apr 4 14:15:52.193: INFO: stderr: ""
Apr 4 14:15:52.193: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:15:52.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3758" for this suite.
Apr 4 14:15:58.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:15:58.295: INFO: namespace kubectl-3758 deletion completed in 6.097365331s
• [SLOW TEST:17.694 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:15:58.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-5466/configmap-test-534eb3de-99ea-4355-9c66-0564e9eb366a
STEP: Creating a pod to test consume configMaps
Apr 4 14:15:58.356: INFO: Waiting up to 5m0s for pod "pod-configmaps-fd4a0e2d-38f5-495f-bfd1-6459b6be7560" in namespace "configmap-5466" to be "success or failure"
Apr 4 14:15:58.366: INFO: Pod "pod-configmaps-fd4a0e2d-38f5-495f-bfd1-6459b6be7560": Phase="Pending", Reason="", readiness=false. Elapsed: 10.062574ms
Apr 4 14:16:00.370: INFO: Pod "pod-configmaps-fd4a0e2d-38f5-495f-bfd1-6459b6be7560": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013977568s
Apr 4 14:16:02.374: INFO: Pod "pod-configmaps-fd4a0e2d-38f5-495f-bfd1-6459b6be7560": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018262326s
STEP: Saw pod success
Apr 4 14:16:02.374: INFO: Pod "pod-configmaps-fd4a0e2d-38f5-495f-bfd1-6459b6be7560" satisfied condition "success or failure"
Apr 4 14:16:02.377: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-fd4a0e2d-38f5-495f-bfd1-6459b6be7560 container env-test:
STEP: delete the pod
Apr 4 14:16:02.398: INFO: Waiting for pod pod-configmaps-fd4a0e2d-38f5-495f-bfd1-6459b6be7560 to disappear
Apr 4 14:16:02.414: INFO: Pod pod-configmaps-fd4a0e2d-38f5-495f-bfd1-6459b6be7560 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:16:02.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5466" for this suite.
Apr 4 14:16:08.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:16:08.516: INFO: namespace configmap-5466 deletion completed in 6.097491194s
• [SLOW TEST:10.220 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:16:08.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Apr 4 14:16:08.579: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3162,SelfLink:/api/v1/namespaces/watch-3162/configmaps/e2e-watch-test-label-changed,UID:2e70bc62-545e-479f-a05c-81c8b792b2b0,ResourceVersion:3599645,Generation:0,CreationTimestamp:2020-04-04 14:16:08 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 4 14:16:08.579: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3162,SelfLink:/api/v1/namespaces/watch-3162/configmaps/e2e-watch-test-label-changed,UID:2e70bc62-545e-479f-a05c-81c8b792b2b0,ResourceVersion:3599646,Generation:0,CreationTimestamp:2020-04-04 14:16:08 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Apr 4 14:16:08.579: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3162,SelfLink:/api/v1/namespaces/watch-3162/configmaps/e2e-watch-test-label-changed,UID:2e70bc62-545e-479f-a05c-81c8b792b2b0,ResourceVersion:3599647,Generation:0,CreationTimestamp:2020-04-04 14:16:08 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Apr 4 14:16:18.620: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3162,SelfLink:/api/v1/namespaces/watch-3162/configmaps/e2e-watch-test-label-changed,UID:2e70bc62-545e-479f-a05c-81c8b792b2b0,ResourceVersion:3599669,Generation:0,CreationTimestamp:2020-04-04 14:16:08 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 4 14:16:18.620: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3162,SelfLink:/api/v1/namespaces/watch-3162/configmaps/e2e-watch-test-label-changed,UID:2e70bc62-545e-479f-a05c-81c8b792b2b0,ResourceVersion:3599670,Generation:0,CreationTimestamp:2020-04-04 14:16:08 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Apr 4 14:16:18.620: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3162,SelfLink:/api/v1/namespaces/watch-3162/configmaps/e2e-watch-test-label-changed,UID:2e70bc62-545e-479f-a05c-81c8b792b2b0,ResourceVersion:3599671,Generation:0,CreationTimestamp:2020-04-04 14:16:08 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:16:18.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3162" for this suite.
Apr 4 14:16:24.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:16:24.739: INFO: namespace watch-3162 deletion completed in 6.107574527s
• [SLOW TEST:16.223 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:16:24.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4089
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-4089
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4089
Apr 4 14:16:24.810: INFO: Found 0 stateful pods, waiting for 1
Apr 4 14:16:34.814: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Apr 4 14:16:34.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4089 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 4 14:16:35.052: INFO: stderr: "I0404 14:16:34.949906 2615 log.go:172] (0xc000a666e0) (0xc0006aeaa0) Create stream\nI0404 14:16:34.949959 2615 log.go:172] (0xc000a666e0) (0xc0006aeaa0) Stream added, broadcasting: 1\nI0404 14:16:34.953431 2615 log.go:172] (0xc000a666e0) Reply frame received for 1\nI0404 14:16:34.953484 2615 log.go:172] (0xc000a666e0) (0xc0006ae1e0) Create stream\nI0404 14:16:34.953497 2615 log.go:172] (0xc000a666e0) (0xc0006ae1e0) Stream added, broadcasting: 3\nI0404 14:16:34.954403 2615 log.go:172] (0xc000a666e0) Reply frame received for 3\nI0404 14:16:34.954468 2615 log.go:172] (0xc000a666e0) (0xc000012000) Create stream\nI0404 14:16:34.954497 2615 log.go:172] (0xc000a666e0) (0xc000012000) Stream added, broadcasting: 5\nI0404 14:16:34.955373 2615 log.go:172] (0xc000a666e0) Reply frame received for 5\nI0404 14:16:35.016897 2615 log.go:172] (0xc000a666e0) Data frame received for 5\nI0404 14:16:35.016930 2615 log.go:172] (0xc000012000) (5) Data frame handling\nI0404 14:16:35.016949 2615 log.go:172] (0xc000012000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0404 14:16:35.044401 2615 log.go:172] (0xc000a666e0) Data frame received for 3\nI0404 14:16:35.044439 2615 log.go:172] (0xc0006ae1e0) (3) Data frame handling\nI0404 14:16:35.044450 2615 log.go:172] (0xc0006ae1e0) (3) Data frame sent\nI0404 14:16:35.044471 2615 log.go:172] (0xc000a666e0) Data frame received for 5\nI0404 14:16:35.044477 2615 log.go:172] (0xc000012000) (5) Data frame handling\nI0404 14:16:35.044638 2615 log.go:172] (0xc000a666e0) Data frame received for 3\nI0404 14:16:35.044731 2615 log.go:172] (0xc0006ae1e0) (3) Data frame handling\nI0404 14:16:35.047169 2615 log.go:172] (0xc000a666e0) Data frame received for 1\nI0404 14:16:35.047183 2615 log.go:172] (0xc0006aeaa0) (1) Data frame handling\nI0404 14:16:35.047190 2615 log.go:172] (0xc0006aeaa0) (1) Data frame sent\nI0404 14:16:35.047199 2615 log.go:172] (0xc000a666e0) (0xc0006aeaa0) Stream removed, broadcasting: 1\nI0404 14:16:35.047427 2615 log.go:172] (0xc000a666e0) Go away received\nI0404 14:16:35.047492 2615 log.go:172] (0xc000a666e0) (0xc0006aeaa0) Stream removed, broadcasting: 1\nI0404 14:16:35.047537 2615 log.go:172] (0xc000a666e0) (0xc0006ae1e0) Stream removed, broadcasting: 3\nI0404 14:16:35.047551 2615 log.go:172] (0xc000a666e0) (0xc000012000) Stream removed, broadcasting: 5\n"
Apr 4 14:16:35.053: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 4 14:16:35.053: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Apr 4 14:16:35.056: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Apr 4 14:16:45.061: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Apr 4 14:16:45.061: INFO: Waiting for statefulset status.replicas updated to 0
Apr 4 14:16:45.079: INFO: POD NODE PHASE GRACE CONDITIONS
Apr 4 14:16:45.079: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:24 +0000 UTC }]
Apr 4 14:16:45.079: INFO: 
Apr 4 14:16:45.079: INFO: StatefulSet ss has not reached scale 3, at 1
Apr 4 14:16:46.084: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992178592s
Apr 4 14:16:47.147: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987387762s
Apr 4 14:16:48.150: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.924295644s
Apr 4 14:16:49.261: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.920538395s
Apr 4 14:16:50.266: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.809593031s
Apr 4 14:16:51.270: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.805213029s
Apr 4 14:16:52.275: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.800965461s
Apr 4 14:16:53.281: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.796008278s
Apr 4 14:16:54.284: INFO: Verifying statefulset ss doesn't scale past 3 for another 790.324163ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4089
Apr 4 14:16:55.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4089 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 4 14:16:55.491: INFO: stderr: "I0404 14:16:55.418378 2637 log.go:172] (0xc000976420) (0xc0003cc820) Create stream\nI0404 14:16:55.418456 2637 log.go:172] (0xc000976420) (0xc0003cc820) Stream added, broadcasting: 1\nI0404 14:16:55.420814 2637 log.go:172] (0xc000976420) Reply frame received for 1\nI0404 14:16:55.420912 2637 log.go:172] (0xc000976420) (0xc0003cc8c0) Create stream\nI0404 14:16:55.420966 2637 log.go:172] (0xc000976420) (0xc0003cc8c0) Stream added, broadcasting: 3\nI0404 14:16:55.422352 2637 log.go:172] (0xc000976420) Reply frame received for 3\nI0404 14:16:55.422401 2637 log.go:172] (0xc000976420) (0xc00070a000) Create stream\nI0404 14:16:55.422425 2637 log.go:172] (0xc000976420) (0xc00070a000) Stream added, broadcasting: 5\nI0404 14:16:55.423271 2637 log.go:172] (0xc000976420) Reply frame received for 5\nI0404 14:16:55.484336 2637 log.go:172] (0xc000976420) Data frame received for 5\nI0404 14:16:55.484357 2637 log.go:172] (0xc00070a000) (5) Data frame handling\nI0404 14:16:55.484368 2637 log.go:172] (0xc00070a000) (5) Data frame sent\nI0404 14:16:55.484376 2637 log.go:172] (0xc000976420) Data frame received for 5\nI0404 14:16:55.484384 2637 log.go:172] (0xc00070a000) (5) Data frame handling\nI0404 14:16:55.484397 2637 log.go:172] (0xc000976420) Data frame received for 3\nI0404 14:16:55.484406 2637 log.go:172] (0xc0003cc8c0) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0404 14:16:55.484415 2637 log.go:172] (0xc0003cc8c0) (3) Data frame sent\nI0404 14:16:55.484476 2637 log.go:172] (0xc000976420) Data frame received for 3\nI0404 14:16:55.484496 2637 log.go:172] (0xc0003cc8c0) (3) Data frame handling\nI0404 14:16:55.486135 2637 log.go:172] (0xc000976420) Data frame received for 1\nI0404 14:16:55.486165 2637 log.go:172] (0xc0003cc820) (1) Data frame handling\nI0404 14:16:55.486181 2637 log.go:172] (0xc0003cc820) (1) Data frame sent\nI0404 14:16:55.486199 2637 log.go:172] (0xc000976420) (0xc0003cc820) Stream removed, broadcasting: 1\nI0404 14:16:55.486216 2637 log.go:172] (0xc000976420) Go away received\nI0404 14:16:55.486669 2637 log.go:172] (0xc000976420) (0xc0003cc820) Stream removed, broadcasting: 1\nI0404 14:16:55.486700 2637 log.go:172] (0xc000976420) (0xc0003cc8c0) Stream removed, broadcasting: 3\nI0404 14:16:55.486717 2637 log.go:172] (0xc000976420) (0xc00070a000) Stream removed, broadcasting: 5\n"
Apr 4 14:16:55.491: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 4 14:16:55.491: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Apr 4 14:16:55.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4089 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 4 14:16:55.695: INFO: stderr: "I0404 14:16:55.617606 2658 log.go:172] (0xc0006ccb00) (0xc0003e6820) Create stream\nI0404 14:16:55.617685 2658 log.go:172] (0xc0006ccb00) (0xc0003e6820) Stream added, broadcasting: 1\nI0404 14:16:55.620373 2658 log.go:172] (0xc0006ccb00) Reply frame received for 1\nI0404 14:16:55.620438 2658 log.go:172] (0xc0006ccb00) (0xc0003e68c0) Create stream\nI0404 14:16:55.620452 2658 log.go:172] (0xc0006ccb00) (0xc0003e68c0) Stream added, broadcasting: 3\nI0404 14:16:55.621954 2658 log.go:172] (0xc0006ccb00) Reply frame received for 3\nI0404 14:16:55.621997 2658 log.go:172] (0xc0006ccb00) (0xc0007cc000) Create stream\nI0404 14:16:55.622013 2658 log.go:172] (0xc0006ccb00) (0xc0007cc000) Stream added, broadcasting: 5\nI0404 14:16:55.623203 2658 log.go:172] (0xc0006ccb00) Reply frame received for 5\nI0404 14:16:55.688400 2658 log.go:172] (0xc0006ccb00) Data frame received for 5\nI0404 14:16:55.688437 2658 log.go:172] (0xc0007cc000) (5) Data frame handling\nI0404 14:16:55.688457 2658 log.go:172] (0xc0007cc000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0404 14:16:55.688476 2658 log.go:172] (0xc0006ccb00) Data frame received for 5\nI0404 14:16:55.688566 2658 log.go:172] (0xc0007cc000) (5) Data frame handling\nI0404 14:16:55.688607 2658 log.go:172] (0xc0006ccb00) Data frame received for 3\nI0404 14:16:55.688624 2658 log.go:172] (0xc0003e68c0) (3) Data frame handling\nI0404 14:16:55.688646 2658 log.go:172] (0xc0003e68c0) (3) Data frame sent\nI0404 14:16:55.688659 2658 log.go:172] (0xc0006ccb00) Data frame received for 3\nI0404 14:16:55.688672 2658 log.go:172] (0xc0003e68c0) (3) Data frame handling\nI0404 14:16:55.690358 2658 log.go:172] (0xc0006ccb00) Data frame received for 1\nI0404 14:16:55.690392 2658 log.go:172] (0xc0003e6820) (1) Data frame handling\nI0404 14:16:55.690416 2658 log.go:172] (0xc0003e6820) (1) Data frame sent\nI0404 14:16:55.690455 2658 log.go:172] (0xc0006ccb00) (0xc0003e6820) Stream removed, broadcasting: 1\nI0404 14:16:55.690709 2658 log.go:172] (0xc0006ccb00) Go away received\nI0404 14:16:55.690950 2658 log.go:172] (0xc0006ccb00) (0xc0003e6820) Stream removed, broadcasting: 1\nI0404 14:16:55.690978 2658 log.go:172] (0xc0006ccb00) (0xc0003e68c0) Stream removed, broadcasting: 3\nI0404 14:16:55.690993 2658 log.go:172] (0xc0006ccb00) (0xc0007cc000) Stream removed, broadcasting: 5\n"
Apr 4 14:16:55.695: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 4 14:16:55.695: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Apr 4 14:16:55.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4089 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 4 14:16:55.902: INFO: stderr: "I0404 14:16:55.820363 2678 log.go:172] (0xc000a64160) (0xc0003c41e0) Create stream\nI0404 14:16:55.820415 2678 log.go:172] (0xc000a64160) (0xc0003c41e0) Stream added, broadcasting: 1\nI0404 14:16:55.822589 2678 log.go:172] (0xc000a64160) Reply frame received for 1\nI0404 14:16:55.822621 2678 log.go:172] (0xc000a64160) (0xc0005c8140) Create stream\nI0404 14:16:55.822628 2678 log.go:172] (0xc000a64160) (0xc0005c8140) Stream added, broadcasting: 3\nI0404 14:16:55.823568 2678 log.go:172] (0xc000a64160) Reply frame received for 3\nI0404 14:16:55.823624 2678 log.go:172] (0xc000a64160) (0xc000336000) Create stream\nI0404 14:16:55.823641 2678 log.go:172] (0xc000a64160) (0xc000336000) Stream added, broadcasting: 5\nI0404 14:16:55.824500 2678 log.go:172] (0xc000a64160) Reply frame received for 5\nI0404 14:16:55.895548 2678 log.go:172] (0xc000a64160) Data frame received for 5\nI0404 14:16:55.895676 2678 log.go:172] (0xc000336000) (5) Data frame handling\nI0404 14:16:55.895695 2678 log.go:172] (0xc000336000) (5) Data frame sent\nI0404 14:16:55.895707 2678 log.go:172] (0xc000a64160) Data frame received for 5\nI0404 14:16:55.895719 2678 log.go:172] (0xc000336000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0404 14:16:55.895760 2678 log.go:172] (0xc000a64160) Data frame received for 3\nI0404 14:16:55.895780 2678 log.go:172] (0xc0005c8140) (3) Data frame handling\nI0404 14:16:55.895805 2678 log.go:172] (0xc0005c8140) (3) Data frame sent\nI0404 14:16:55.895821 2678 log.go:172] (0xc000a64160) Data frame received for 3\nI0404 14:16:55.895832 2678 log.go:172] (0xc0005c8140) (3) Data frame handling\nI0404 14:16:55.897297 2678 log.go:172] (0xc000a64160) Data frame received for 1\nI0404 14:16:55.897327 2678 log.go:172] (0xc0003c41e0) (1) Data frame handling\nI0404 14:16:55.897340 2678 log.go:172] (0xc0003c41e0) (1) Data frame sent\nI0404 14:16:55.897574 2678 log.go:172] (0xc000a64160) (0xc0003c41e0) Stream removed, broadcasting: 1\nI0404 14:16:55.897682 2678 log.go:172] (0xc000a64160) Go away received\nI0404 14:16:55.897913 2678 log.go:172] (0xc000a64160) (0xc0003c41e0) Stream removed, broadcasting: 1\nI0404 14:16:55.897934 2678 log.go:172] (0xc000a64160) (0xc0005c8140) Stream removed, broadcasting: 3\nI0404 14:16:55.897943 2678 log.go:172] (0xc000a64160) (0xc000336000) Stream removed, broadcasting: 5\n"
Apr 4 14:16:55.902: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 4 14:16:55.902: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Apr 4 14:16:55.906: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Apr 4 14:17:05.911: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 4 14:17:05.911: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 4 14:17:05.911: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Apr 4 14:17:05.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4089 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 4 14:17:06.162: INFO: stderr: "I0404 14:17:06.050331 2698 log.go:172] (0xc0009d2580) (0xc00044a820) Create stream\nI0404 14:17:06.050393 2698 log.go:172] (0xc0009d2580) (0xc00044a820) Stream added, broadcasting: 1\nI0404 14:17:06.053681 2698 log.go:172] (0xc0009d2580) Reply frame received for 1\nI0404 14:17:06.053725 2698 log.go:172] (0xc0009d2580) (0xc00044a8c0) Create stream\nI0404 14:17:06.053752 2698 log.go:172] (0xc0009d2580) (0xc00044a8c0) Stream added, broadcasting: 3\nI0404 14:17:06.054837 2698 log.go:172] (0xc0009d2580) Reply frame received for 3\nI0404 14:17:06.054893 2698 log.go:172] (0xc0009d2580) (0xc00044a960) Create stream\nI0404 14:17:06.054918 2698 log.go:172] (0xc0009d2580) (0xc00044a960) Stream added, broadcasting: 5\nI0404 14:17:06.055763 2698 log.go:172] (0xc0009d2580) Reply frame received for 5\nI0404 14:17:06.155047 2698 log.go:172] (0xc0009d2580) Data frame received for 5\nI0404 14:17:06.155092 2698 log.go:172] (0xc0009d2580) Data frame received for 3\nI0404 14:17:06.155127 2698 log.go:172] (0xc00044a8c0) (3) Data frame handling\nI0404 14:17:06.155148 2698 log.go:172] (0xc00044a8c0) (3) Data frame sent\nI0404 14:17:06.155161 2698 log.go:172] (0xc0009d2580) Data frame received for 3\nI0404 14:17:06.155170 2698 log.go:172] (0xc00044a8c0) (3) Data frame handling\nI0404 14:17:06.155205 2698 log.go:172] (0xc00044a960) (5) Data frame handling\nI0404 14:17:06.155255 2698 log.go:172] (0xc00044a960) (5) Data frame sent\nI0404 14:17:06.155277 2698 log.go:172] (0xc0009d2580) Data frame received for 5\nI0404 14:17:06.155298 2698 log.go:172] (0xc00044a960) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0404 14:17:06.156754 2698 log.go:172] (0xc0009d2580) Data frame received for 1\nI0404 14:17:06.156782 2698 log.go:172] (0xc00044a820) (1) Data frame handling\nI0404 14:17:06.156802 2698 log.go:172] (0xc00044a820) (1) Data frame sent\nI0404 14:17:06.156836 2698 log.go:172] (0xc0009d2580) (0xc00044a820) Stream removed, broadcasting: 1\nI0404 14:17:06.157383 2698 log.go:172] (0xc0009d2580) (0xc00044a820) Stream removed, broadcasting: 1\nI0404 14:17:06.157408 2698 log.go:172] (0xc0009d2580) (0xc00044a8c0) Stream removed, broadcasting: 3\nI0404 14:17:06.157420 2698 log.go:172] (0xc0009d2580) (0xc00044a960) Stream removed, broadcasting: 5\n"
Apr 4 14:17:06.162: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 4 14:17:06.162: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Apr 4 14:17:06.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4089 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 4 14:17:06.403: INFO: stderr: "I0404 14:17:06.292766 2718 log.go:172] (0xc0009709a0) (0xc000516aa0) Create stream\nI0404 14:17:06.292825 2718 log.go:172] (0xc0009709a0) (0xc000516aa0) Stream added, broadcasting: 1\nI0404 14:17:06.296153 2718 log.go:172] (0xc0009709a0) Reply frame received for 1\nI0404 14:17:06.296188 2718 log.go:172] (0xc0009709a0) (0xc0005161e0) Create stream\nI0404 14:17:06.296197 2718 log.go:172] (0xc0009709a0) (0xc0005161e0) Stream added, broadcasting: 3\nI0404 14:17:06.296982 2718 log.go:172] (0xc0009709a0) Reply frame received for 3\nI0404 14:17:06.297048 2718 log.go:172] (0xc0009709a0) (0xc000090000) Create stream\nI0404 14:17:06.297070 2718 log.go:172] (0xc0009709a0) (0xc000090000) Stream added, broadcasting: 5\nI0404 14:17:06.298609 2718 log.go:172] (0xc0009709a0) Reply frame received for 5\nI0404 14:17:06.362813 2718 log.go:172] (0xc0009709a0) Data frame received for 5\nI0404 14:17:06.362843 2718 log.go:172] (0xc000090000) (5) Data frame handling\nI0404 14:17:06.362867 2718 log.go:172] (0xc000090000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0404 14:17:06.395389 2718 log.go:172] (0xc0009709a0) Data frame received for 3\nI0404 14:17:06.395428 2718 log.go:172] (0xc0005161e0) (3) Data frame handling\nI0404 14:17:06.395520 2718 log.go:172] (0xc0005161e0) (3) Data frame sent\nI0404 14:17:06.395542 2718 log.go:172] (0xc0009709a0) Data frame received for 3\nI0404 14:17:06.395556 2718 log.go:172] (0xc0005161e0) (3) Data frame handling\nI0404 14:17:06.395659 2718 log.go:172] (0xc0009709a0) Data frame received for 5\nI0404 14:17:06.395694 2718 log.go:172] (0xc000090000) (5) Data frame handling\nI0404 14:17:06.397878 2718 log.go:172] (0xc0009709a0) Data frame received for 1\nI0404 14:17:06.397907 2718 log.go:172] (0xc000516aa0) (1) Data frame handling\nI0404 14:17:06.397941 2718 log.go:172] (0xc000516aa0) (1) Data frame sent\nI0404 14:17:06.397967 2718 log.go:172] (0xc0009709a0) (0xc000516aa0) Stream removed, broadcasting: 1\nI0404 14:17:06.398045 2718 log.go:172] (0xc0009709a0) Go away received\nI0404 14:17:06.398494 2718 log.go:172] (0xc0009709a0) (0xc000516aa0) Stream removed, broadcasting: 1\nI0404 14:17:06.398516 2718 log.go:172] (0xc0009709a0) (0xc0005161e0) Stream removed, broadcasting: 3\nI0404 14:17:06.398527 2718 log.go:172] (0xc0009709a0) (0xc000090000) Stream removed, broadcasting: 5\n"
Apr 4 14:17:06.403: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 4 14:17:06.403: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Apr 4 14:17:06.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4089 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 4 14:17:06.631: INFO: stderr: "I0404 14:17:06.531163 2740 log.go:172] (0xc000970630) (0xc000600be0) Create stream\nI0404 14:17:06.531220 2740 log.go:172] (0xc000970630) (0xc000600be0) Stream added, broadcasting: 1\nI0404 14:17:06.535416 2740 log.go:172] (0xc000970630) Reply frame received for 1\nI0404 14:17:06.535462 2740 log.go:172] (0xc000970630) (0xc000600320) Create stream\nI0404 14:17:06.535476 2740 log.go:172] (0xc000970630) (0xc000600320) Stream added, broadcasting: 3\nI0404 14:17:06.536519 2740 log.go:172] (0xc000970630) Reply frame received for 3\nI0404 14:17:06.536577 2740 log.go:172] (0xc000970630) (0xc0001b6000) Create stream\nI0404 14:17:06.536596 2740 log.go:172] (0xc000970630) (0xc0001b6000) Stream added, broadcasting: 5\nI0404 14:17:06.537660 2740 log.go:172] (0xc000970630) Reply frame received for 5\nI0404 14:17:06.599852 2740 log.go:172] (0xc000970630) Data frame received for 5\nI0404 14:17:06.599889 2740 log.go:172] (0xc0001b6000) (5) Data frame handling\nI0404 14:17:06.599922 2740 log.go:172] (0xc0001b6000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0404 14:17:06.625778 2740 log.go:172] (0xc000970630) Data frame received for 3\nI0404 14:17:06.625805 2740 log.go:172] (0xc000600320) (3) Data frame handling\nI0404 14:17:06.625813 2740 log.go:172] (0xc000600320) (3) Data frame sent\nI0404 14:17:06.625818 2740 log.go:172] (0xc000970630) Data frame received for 3\nI0404 14:17:06.625822 2740 log.go:172] (0xc000600320)
(3) Data frame handling\nI0404 14:17:06.625849 2740 log.go:172] (0xc000970630) Data frame received for 5\nI0404 14:17:06.625855 2740 log.go:172] (0xc0001b6000) (5) Data frame handling\nI0404 14:17:06.627917 2740 log.go:172] (0xc000970630) Data frame received for 1\nI0404 14:17:06.627929 2740 log.go:172] (0xc000600be0) (1) Data frame handling\nI0404 14:17:06.627941 2740 log.go:172] (0xc000600be0) (1) Data frame sent\nI0404 14:17:06.627950 2740 log.go:172] (0xc000970630) (0xc000600be0) Stream removed, broadcasting: 1\nI0404 14:17:06.628028 2740 log.go:172] (0xc000970630) Go away received\nI0404 14:17:06.628166 2740 log.go:172] (0xc000970630) (0xc000600be0) Stream removed, broadcasting: 1\nI0404 14:17:06.628177 2740 log.go:172] (0xc000970630) (0xc000600320) Stream removed, broadcasting: 3\nI0404 14:17:06.628182 2740 log.go:172] (0xc000970630) (0xc0001b6000) Stream removed, broadcasting: 5\n" Apr 4 14:17:06.631: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 4 14:17:06.631: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 4 14:17:06.631: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 14:17:06.635: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 4 14:17:16.642: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 4 14:17:16.642: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 4 14:17:16.642: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 4 14:17:16.655: INFO: POD NODE PHASE GRACE CONDITIONS Apr 4 14:17:16.655: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady 
False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:24 +0000 UTC }] Apr 4 14:17:16.655: INFO: ss-1 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:45 +0000 UTC }] Apr 4 14:17:16.655: INFO: ss-2 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:45 +0000 UTC }] Apr 4 14:17:16.655: INFO: Apr 4 14:17:16.655: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 4 14:17:17.752: INFO: POD NODE PHASE GRACE CONDITIONS Apr 4 14:17:17.752: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:24 +0000 UTC }] Apr 4 14:17:17.752: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:45 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:45 +0000 UTC }] Apr 4 14:17:17.752: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:45 +0000 UTC }] Apr 4 14:17:17.752: INFO: Apr 4 14:17:17.752: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 4 14:17:18.761: INFO: POD NODE PHASE GRACE CONDITIONS Apr 4 14:17:18.761: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:24 +0000 UTC }] Apr 4 14:17:18.761: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:45 +0000 UTC }] Apr 4 
14:17:18.761: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:45 +0000 UTC }] Apr 4 14:17:18.761: INFO: Apr 4 14:17:18.761: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 4 14:17:19.766: INFO: POD NODE PHASE GRACE CONDITIONS Apr 4 14:17:19.766: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:24 +0000 UTC }] Apr 4 14:17:19.766: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:45 +0000 UTC }] Apr 4 14:17:19.766: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:07 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:45 +0000 UTC }] Apr 4 14:17:19.766: INFO: Apr 4 14:17:19.766: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 4 14:17:20.770: INFO: POD NODE PHASE GRACE CONDITIONS Apr 4 14:17:20.770: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:24 +0000 UTC }] Apr 4 14:17:20.770: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:45 +0000 UTC }] Apr 4 14:17:20.770: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:45 +0000 UTC }] Apr 4 14:17:20.770: INFO: Apr 4 14:17:20.770: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 4 14:17:21.775: INFO: POD NODE PHASE GRACE CONDITIONS Apr 4 14:17:21.775: INFO: ss-0 iruya-worker Pending 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:24 +0000 UTC }] Apr 4 14:17:21.775: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:45 +0000 UTC }] Apr 4 14:17:21.775: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:17:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:16:45 +0000 UTC }] Apr 4 14:17:21.775: INFO: Apr 4 14:17:21.775: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 4 14:17:22.779: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.873100085s Apr 4 14:17:23.783: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.8688415s Apr 4 14:17:24.788: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.864840643s Apr 4 14:17:25.792: INFO: Verifying statefulset ss doesn't scale past 0 for another 860.476471ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will 
run in namespace statefulset-4089 Apr 4 14:17:26.796: INFO: Scaling statefulset ss to 0 Apr 4 14:17:26.805: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 4 14:17:26.807: INFO: Deleting all statefulset in ns statefulset-4089 Apr 4 14:17:26.809: INFO: Scaling statefulset ss to 0 Apr 4 14:17:26.816: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 14:17:26.818: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:17:26.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4089" for this suite. Apr 4 14:17:32.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:17:32.962: INFO: namespace statefulset-4089 deletion completed in 6.103112911s • [SLOW TEST:68.223 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a
kubernetes client Apr 4 14:17:32.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 4 14:17:37.569: INFO: Successfully updated pod "labelsupdate27f2976d-1e92-44d4-853e-72857854d9d1" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:17:39.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5202" for this suite. Apr 4 14:18:01.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:18:01.728: INFO: namespace projected-5202 deletion completed in 22.129944826s • [SLOW TEST:28.766 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:18:01.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-7b18ff30-24c9-4e30-afcd-226855d6bc3b STEP: Creating a pod to test consume configMaps Apr 4 14:18:01.837: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f5ed3ace-ad16-4445-8132-44060df154c5" in namespace "projected-9556" to be "success or failure" Apr 4 14:18:01.842: INFO: Pod "pod-projected-configmaps-f5ed3ace-ad16-4445-8132-44060df154c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32173ms Apr 4 14:18:03.845: INFO: Pod "pod-projected-configmaps-f5ed3ace-ad16-4445-8132-44060df154c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008052736s Apr 4 14:18:05.848: INFO: Pod "pod-projected-configmaps-f5ed3ace-ad16-4445-8132-44060df154c5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011222485s STEP: Saw pod success Apr 4 14:18:05.848: INFO: Pod "pod-projected-configmaps-f5ed3ace-ad16-4445-8132-44060df154c5" satisfied condition "success or failure" Apr 4 14:18:05.851: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-f5ed3ace-ad16-4445-8132-44060df154c5 container projected-configmap-volume-test: STEP: delete the pod Apr 4 14:18:05.866: INFO: Waiting for pod pod-projected-configmaps-f5ed3ace-ad16-4445-8132-44060df154c5 to disappear Apr 4 14:18:05.870: INFO: Pod pod-projected-configmaps-f5ed3ace-ad16-4445-8132-44060df154c5 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:18:05.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9556" for this suite. Apr 4 14:18:11.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:18:11.981: INFO: namespace projected-9556 deletion completed in 6.10683571s • [SLOW TEST:10.251 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:18:11.981: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-c9f23dc1-5d16-49ac-9ca2-6663ddf9f3fd STEP: Creating a pod to test consume secrets Apr 4 14:18:12.046: INFO: Waiting up to 5m0s for pod "pod-secrets-2154fc06-6a5e-476b-9445-edb55ded59c9" in namespace "secrets-989" to be "success or failure" Apr 4 14:18:12.062: INFO: Pod "pod-secrets-2154fc06-6a5e-476b-9445-edb55ded59c9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.851249ms Apr 4 14:18:14.066: INFO: Pod "pod-secrets-2154fc06-6a5e-476b-9445-edb55ded59c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019552191s Apr 4 14:18:16.070: INFO: Pod "pod-secrets-2154fc06-6a5e-476b-9445-edb55ded59c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023202991s STEP: Saw pod success Apr 4 14:18:16.070: INFO: Pod "pod-secrets-2154fc06-6a5e-476b-9445-edb55ded59c9" satisfied condition "success or failure" Apr 4 14:18:16.073: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-2154fc06-6a5e-476b-9445-edb55ded59c9 container secret-volume-test: STEP: delete the pod Apr 4 14:18:16.100: INFO: Waiting for pod pod-secrets-2154fc06-6a5e-476b-9445-edb55ded59c9 to disappear Apr 4 14:18:16.110: INFO: Pod pod-secrets-2154fc06-6a5e-476b-9445-edb55ded59c9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:18:16.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-989" for this suite. 
Apr 4 14:18:22.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:18:22.207: INFO: namespace secrets-989 deletion completed in 6.093194716s • [SLOW TEST:10.226 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:18:22.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-gh74f in namespace proxy-2457 I0404 14:18:22.315304 6 runners.go:180] Created replication controller with name: proxy-service-gh74f, namespace: proxy-2457, replica count: 1 I0404 14:18:23.365782 6 runners.go:180] proxy-service-gh74f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 14:18:24.366026 6 runners.go:180] proxy-service-gh74f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 14:18:25.366354 6 runners.go:180] 
proxy-service-gh74f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0404 14:18:26.366572 6 runners.go:180] proxy-service-gh74f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0404 14:18:27.366830 6 runners.go:180] proxy-service-gh74f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0404 14:18:28.367066 6 runners.go:180] proxy-service-gh74f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0404 14:18:29.367365 6 runners.go:180] proxy-service-gh74f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0404 14:18:30.367646 6 runners.go:180] proxy-service-gh74f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0404 14:18:31.367861 6 runners.go:180] proxy-service-gh74f Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 4 14:18:31.370: INFO: setup took 9.0885097s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 4 14:18:31.373: INFO: (0) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:1080/proxy/: test<... (200; 2.905328ms) Apr 4 14:18:31.377: INFO: (0) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:1080/proxy/: ... 
(200; 7.036358ms) Apr 4 14:18:31.377: INFO: (0) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname1/proxy/: foo (200; 7.076598ms) Apr 4 14:18:31.377: INFO: (0) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 7.106461ms) Apr 4 14:18:31.377: INFO: (0) /api/v1/namespaces/proxy-2457/services/http:proxy-service-gh74f:portname1/proxy/: foo (200; 7.169378ms) Apr 4 14:18:31.377: INFO: (0) /api/v1/namespaces/proxy-2457/services/http:proxy-service-gh74f:portname2/proxy/: bar (200; 7.372322ms) Apr 4 14:18:31.378: INFO: (0) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname2/proxy/: bar (200; 7.641089ms) Apr 4 14:18:31.378: INFO: (0) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh/proxy/: test (200; 7.795061ms) Apr 4 14:18:31.378: INFO: (0) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 7.668953ms) Apr 4 14:18:31.378: INFO: (0) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 7.76768ms) Apr 4 14:18:31.379: INFO: (0) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:462/proxy/: tls qux (200; 8.742851ms) Apr 4 14:18:31.379: INFO: (0) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 9.107178ms) Apr 4 14:18:31.379: INFO: (0) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:443/proxy/: ... 
(200; 4.378386ms) Apr 4 14:18:31.388: INFO: (1) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh/proxy/: test (200; 4.385996ms) Apr 4 14:18:31.388: INFO: (1) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 4.394648ms) Apr 4 14:18:31.388: INFO: (1) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:460/proxy/: tls baz (200; 4.610889ms) Apr 4 14:18:31.388: INFO: (1) /api/v1/namespaces/proxy-2457/services/http:proxy-service-gh74f:portname2/proxy/: bar (200; 4.637772ms) Apr 4 14:18:31.388: INFO: (1) /api/v1/namespaces/proxy-2457/services/http:proxy-service-gh74f:portname1/proxy/: foo (200; 4.83934ms) Apr 4 14:18:31.388: INFO: (1) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:1080/proxy/: test<... (200; 4.828405ms) Apr 4 14:18:31.388: INFO: (1) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 4.978667ms) Apr 4 14:18:31.388: INFO: (1) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 5.075848ms) Apr 4 14:18:31.389: INFO: (1) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname1/proxy/: foo (200; 5.539021ms) Apr 4 14:18:31.389: INFO: (1) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname1/proxy/: tls baz (200; 5.716326ms) Apr 4 14:18:31.389: INFO: (1) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname2/proxy/: tls qux (200; 5.64863ms) Apr 4 14:18:31.389: INFO: (1) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname2/proxy/: bar (200; 5.669581ms) Apr 4 14:18:31.393: INFO: (2) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:1080/proxy/: test<... 
(200; 4.261518ms) Apr 4 14:18:31.393: INFO: (2) /api/v1/namespaces/proxy-2457/services/http:proxy-service-gh74f:portname2/proxy/: bar (200; 4.384185ms) Apr 4 14:18:31.394: INFO: (2) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname1/proxy/: foo (200; 4.360942ms) Apr 4 14:18:31.394: INFO: (2) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:460/proxy/: tls baz (200; 4.395348ms) Apr 4 14:18:31.394: INFO: (2) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:462/proxy/: tls qux (200; 4.367057ms) Apr 4 14:18:31.394: INFO: (2) /api/v1/namespaces/proxy-2457/services/http:proxy-service-gh74f:portname1/proxy/: foo (200; 4.5361ms) Apr 4 14:18:31.394: INFO: (2) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 4.602556ms) Apr 4 14:18:31.394: INFO: (2) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname1/proxy/: tls baz (200; 4.584668ms) Apr 4 14:18:31.394: INFO: (2) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh/proxy/: test (200; 4.632794ms) Apr 4 14:18:31.394: INFO: (2) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname2/proxy/: tls qux (200; 5.003715ms) Apr 4 14:18:31.394: INFO: (2) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 5.080172ms) Apr 4 14:18:31.394: INFO: (2) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 4.992395ms) Apr 4 14:18:31.394: INFO: (2) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:443/proxy/: ... 
(200; 5.109317ms) Apr 4 14:18:31.394: INFO: (2) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname2/proxy/: bar (200; 5.033444ms) Apr 4 14:18:31.394: INFO: (2) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 5.110776ms) Apr 4 14:18:31.398: INFO: (3) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 3.292829ms) Apr 4 14:18:31.398: INFO: (3) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh/proxy/: test (200; 3.338799ms) Apr 4 14:18:31.398: INFO: (3) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:1080/proxy/: ... (200; 3.262189ms) Apr 4 14:18:31.398: INFO: (3) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 3.636618ms) Apr 4 14:18:31.398: INFO: (3) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:1080/proxy/: test<... (200; 3.555222ms) Apr 4 14:18:31.398: INFO: (3) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 3.664743ms) Apr 4 14:18:31.398: INFO: (3) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:460/proxy/: tls baz (200; 3.631665ms) Apr 4 14:18:31.398: INFO: (3) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 3.638275ms) Apr 4 14:18:31.398: INFO: (3) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname2/proxy/: bar (200; 3.824029ms) Apr 4 14:18:31.398: INFO: (3) /api/v1/namespaces/proxy-2457/services/http:proxy-service-gh74f:portname2/proxy/: bar (200; 3.848521ms) Apr 4 14:18:31.398: INFO: (3) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname1/proxy/: tls baz (200; 4.070148ms) Apr 4 14:18:31.399: INFO: (3) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:443/proxy/: test (200; 4.236673ms) Apr 4 14:18:31.404: INFO: (4) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:1080/proxy/: ... 
(200; 4.287701ms) Apr 4 14:18:31.404: INFO: (4) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:443/proxy/: test<... (200; 5.315693ms) Apr 4 14:18:31.407: INFO: (4) /api/v1/namespaces/proxy-2457/services/http:proxy-service-gh74f:portname1/proxy/: foo (200; 7.121352ms) Apr 4 14:18:31.407: INFO: (4) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname2/proxy/: bar (200; 7.419055ms) Apr 4 14:18:31.407: INFO: (4) /api/v1/namespaces/proxy-2457/services/http:proxy-service-gh74f:portname2/proxy/: bar (200; 7.503697ms) Apr 4 14:18:31.407: INFO: (4) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname1/proxy/: foo (200; 7.548059ms) Apr 4 14:18:31.407: INFO: (4) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname1/proxy/: tls baz (200; 7.669716ms) Apr 4 14:18:31.407: INFO: (4) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname2/proxy/: tls qux (200; 7.716217ms) Apr 4 14:18:31.412: INFO: (5) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 4.814101ms) Apr 4 14:18:31.412: INFO: (5) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname1/proxy/: foo (200; 5.112694ms) Apr 4 14:18:31.412: INFO: (5) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname1/proxy/: tls baz (200; 5.054136ms) Apr 4 14:18:31.412: INFO: (5) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname2/proxy/: tls qux (200; 5.126309ms) Apr 4 14:18:31.412: INFO: (5) /api/v1/namespaces/proxy-2457/services/http:proxy-service-gh74f:portname2/proxy/: bar (200; 5.13893ms) Apr 4 14:18:31.413: INFO: (5) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:460/proxy/: tls baz (200; 5.293334ms) Apr 4 14:18:31.413: INFO: (5) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 5.945801ms) Apr 4 14:18:31.413: INFO: (5) 
/api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname2/proxy/: bar (200; 5.932515ms) Apr 4 14:18:31.413: INFO: (5) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:1080/proxy/: test<... (200; 5.944451ms) Apr 4 14:18:31.413: INFO: (5) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh/proxy/: test (200; 5.949846ms) Apr 4 14:18:31.413: INFO: (5) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:462/proxy/: tls qux (200; 6.049723ms) Apr 4 14:18:31.413: INFO: (5) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:1080/proxy/: ... (200; 5.962635ms) Apr 4 14:18:31.413: INFO: (5) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 6.126463ms) Apr 4 14:18:31.413: INFO: (5) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 6.088104ms) Apr 4 14:18:31.413: INFO: (5) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:443/proxy/: ... (200; 3.387883ms) Apr 4 14:18:31.417: INFO: (6) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:1080/proxy/: test<... (200; 3.445759ms) Apr 4 14:18:31.417: INFO: (6) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 3.526475ms) Apr 4 14:18:31.417: INFO: (6) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 3.652414ms) Apr 4 14:18:31.417: INFO: (6) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:460/proxy/: tls baz (200; 3.651515ms) Apr 4 14:18:31.417: INFO: (6) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:462/proxy/: tls qux (200; 3.844741ms) Apr 4 14:18:31.418: INFO: (6) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh/proxy/: test (200; 4.035126ms) Apr 4 14:18:31.418: INFO: (6) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:443/proxy/: ... 
(200; 2.552733ms) Apr 4 14:18:31.422: INFO: (7) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname2/proxy/: bar (200; 3.323709ms) Apr 4 14:18:31.422: INFO: (7) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:1080/proxy/: test<... (200; 3.522213ms) Apr 4 14:18:31.422: INFO: (7) /api/v1/namespaces/proxy-2457/services/http:proxy-service-gh74f:portname1/proxy/: foo (200; 3.562016ms) Apr 4 14:18:31.422: INFO: (7) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:443/proxy/: test (200; 3.963936ms) Apr 4 14:18:31.423: INFO: (7) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 4.029418ms) Apr 4 14:18:31.423: INFO: (7) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname2/proxy/: tls qux (200; 4.074105ms) Apr 4 14:18:31.423: INFO: (7) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 3.955478ms) Apr 4 14:18:31.426: INFO: (8) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh/proxy/: test (200; 2.960405ms) Apr 4 14:18:31.426: INFO: (8) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 3.172471ms) Apr 4 14:18:31.426: INFO: (8) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:1080/proxy/: test<... (200; 3.101447ms) Apr 4 14:18:31.426: INFO: (8) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 3.531809ms) Apr 4 14:18:31.426: INFO: (8) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 3.517222ms) Apr 4 14:18:31.426: INFO: (8) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 3.476138ms) Apr 4 14:18:31.426: INFO: (8) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:1080/proxy/: ... 
(200; 3.708637ms) Apr 4 14:18:31.426: INFO: (8) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:462/proxy/: tls qux (200; 3.659032ms) Apr 4 14:18:31.426: INFO: (8) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:443/proxy/: test<... (200; 3.65801ms) Apr 4 14:18:31.432: INFO: (9) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh/proxy/: test (200; 3.69592ms) Apr 4 14:18:31.432: INFO: (9) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 3.776815ms) Apr 4 14:18:31.432: INFO: (9) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:1080/proxy/: ... (200; 3.798303ms) Apr 4 14:18:31.432: INFO: (9) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:462/proxy/: tls qux (200; 3.81714ms) Apr 4 14:18:31.432: INFO: (9) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 3.851605ms) Apr 4 14:18:31.433: INFO: (9) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:460/proxy/: tls baz (200; 4.774878ms) Apr 4 14:18:31.433: INFO: (9) /api/v1/namespaces/proxy-2457/services/http:proxy-service-gh74f:portname2/proxy/: bar (200; 5.276083ms) Apr 4 14:18:31.433: INFO: (9) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname2/proxy/: tls qux (200; 5.38336ms) Apr 4 14:18:31.433: INFO: (9) /api/v1/namespaces/proxy-2457/services/http:proxy-service-gh74f:portname1/proxy/: foo (200; 5.305926ms) Apr 4 14:18:31.433: INFO: (9) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 5.388641ms) Apr 4 14:18:31.433: INFO: (9) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname1/proxy/: tls baz (200; 5.316468ms) Apr 4 14:18:31.433: INFO: (9) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 5.40287ms) Apr 4 14:18:31.433: INFO: (9) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname2/proxy/: bar (200; 5.350827ms) 
Apr 4 14:18:31.433: INFO: (9) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:443/proxy/: test<... (200; 2.185137ms) Apr 4 14:18:31.437: INFO: (10) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:1080/proxy/: ... (200; 3.319932ms) Apr 4 14:18:31.437: INFO: (10) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname1/proxy/: tls baz (200; 3.235924ms) Apr 4 14:18:31.437: INFO: (10) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 3.508552ms) Apr 4 14:18:31.437: INFO: (10) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname2/proxy/: tls qux (200; 3.568806ms) Apr 4 14:18:31.437: INFO: (10) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:460/proxy/: tls baz (200; 3.636179ms) Apr 4 14:18:31.437: INFO: (10) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:443/proxy/: test (200; 3.83545ms) Apr 4 14:18:31.437: INFO: (10) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:462/proxy/: tls qux (200; 3.878626ms) Apr 4 14:18:31.437: INFO: (10) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 3.972773ms) Apr 4 14:18:31.437: INFO: (10) /api/v1/namespaces/proxy-2457/services/http:proxy-service-gh74f:portname1/proxy/: foo (200; 3.995066ms) Apr 4 14:18:31.437: INFO: (10) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname1/proxy/: foo (200; 4.10965ms) Apr 4 14:18:31.437: INFO: (10) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 4.072721ms) Apr 4 14:18:31.440: INFO: (11) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:1080/proxy/: test<... 
(200; 2.580397ms) Apr 4 14:18:31.440: INFO: (11) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 2.796501ms) Apr 4 14:18:31.441: INFO: (11) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 3.250812ms) Apr 4 14:18:31.441: INFO: (11) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:1080/proxy/: ... (200; 3.286548ms) Apr 4 14:18:31.441: INFO: (11) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 3.209486ms) Apr 4 14:18:31.441: INFO: (11) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh/proxy/: test (200; 3.234195ms) Apr 4 14:18:31.441: INFO: (11) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:462/proxy/: tls qux (200; 3.245757ms) Apr 4 14:18:31.441: INFO: (11) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:443/proxy/: ... (200; 3.386992ms) Apr 4 14:18:31.446: INFO: (12) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname2/proxy/: bar (200; 3.50922ms) Apr 4 14:18:31.446: INFO: (12) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:443/proxy/: test<... 
(200; 4.338449ms) Apr 4 14:18:31.446: INFO: (12) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 4.247244ms) Apr 4 14:18:31.446: INFO: (12) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:460/proxy/: tls baz (200; 4.322214ms) Apr 4 14:18:31.446: INFO: (12) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 4.310128ms) Apr 4 14:18:31.447: INFO: (12) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:462/proxy/: tls qux (200; 4.427692ms) Apr 4 14:18:31.447: INFO: (12) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname1/proxy/: foo (200; 4.441525ms) Apr 4 14:18:31.447: INFO: (12) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname2/proxy/: tls qux (200; 4.358492ms) Apr 4 14:18:31.447: INFO: (12) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh/proxy/: test (200; 4.523713ms) Apr 4 14:18:31.447: INFO: (12) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 4.512948ms) Apr 4 14:18:31.447: INFO: (12) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 4.535658ms) Apr 4 14:18:31.449: INFO: (13) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 2.570432ms) Apr 4 14:18:31.450: INFO: (13) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 2.744492ms) Apr 4 14:18:31.450: INFO: (13) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 3.004445ms) Apr 4 14:18:31.450: INFO: (13) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:460/proxy/: tls baz (200; 3.055604ms) Apr 4 14:18:31.450: INFO: (13) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 3.129395ms) Apr 4 14:18:31.450: INFO: (13) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:443/proxy/: ... 
(200; 3.212639ms) Apr 4 14:18:31.450: INFO: (13) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:1080/proxy/: test<... (200; 3.109679ms) Apr 4 14:18:31.450: INFO: (13) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname1/proxy/: foo (200; 3.391586ms) Apr 4 14:18:31.450: INFO: (13) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname1/proxy/: tls baz (200; 3.430067ms) Apr 4 14:18:31.450: INFO: (13) /api/v1/namespaces/proxy-2457/services/http:proxy-service-gh74f:portname2/proxy/: bar (200; 3.394114ms) Apr 4 14:18:31.450: INFO: (13) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh/proxy/: test (200; 3.413418ms) Apr 4 14:18:31.450: INFO: (13) /api/v1/namespaces/proxy-2457/services/http:proxy-service-gh74f:portname1/proxy/: foo (200; 3.448829ms) Apr 4 14:18:31.450: INFO: (13) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:462/proxy/: tls qux (200; 3.571095ms) Apr 4 14:18:31.450: INFO: (13) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname2/proxy/: tls qux (200; 3.668678ms) Apr 4 14:18:31.450: INFO: (13) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname2/proxy/: bar (200; 3.581609ms) Apr 4 14:18:31.456: INFO: (14) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 5.51801ms) Apr 4 14:18:31.456: INFO: (14) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 5.654171ms) Apr 4 14:18:31.457: INFO: (14) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 6.055936ms) Apr 4 14:18:31.457: INFO: (14) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:443/proxy/: test<... 
(200; 6.304924ms) Apr 4 14:18:31.457: INFO: (14) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:462/proxy/: tls qux (200; 6.286341ms) Apr 4 14:18:31.457: INFO: (14) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 6.427148ms) Apr 4 14:18:31.457: INFO: (14) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:460/proxy/: tls baz (200; 6.48079ms) Apr 4 14:18:31.457: INFO: (14) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:1080/proxy/: ... (200; 6.489966ms) Apr 4 14:18:31.457: INFO: (14) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh/proxy/: test (200; 6.724402ms) Apr 4 14:18:31.457: INFO: (14) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname2/proxy/: bar (200; 6.69823ms) Apr 4 14:18:31.457: INFO: (14) /api/v1/namespaces/proxy-2457/services/http:proxy-service-gh74f:portname1/proxy/: foo (200; 6.73838ms) Apr 4 14:18:31.457: INFO: (14) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname2/proxy/: tls qux (200; 6.811921ms) Apr 4 14:18:31.457: INFO: (14) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname1/proxy/: tls baz (200; 6.797632ms) Apr 4 14:18:31.458: INFO: (14) /api/v1/namespaces/proxy-2457/services/http:proxy-service-gh74f:portname2/proxy/: bar (200; 7.365087ms) Apr 4 14:18:31.458: INFO: (14) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname1/proxy/: foo (200; 7.559098ms) Apr 4 14:18:31.460: INFO: (15) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh/proxy/: test (200; 1.833632ms) Apr 4 14:18:31.461: INFO: (15) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:462/proxy/: tls qux (200; 2.686096ms) Apr 4 14:18:31.461: INFO: (15) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 2.734907ms) Apr 4 14:18:31.461: INFO: (15) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:443/proxy/: test<... 
(200; 3.962273ms) Apr 4 14:18:31.462: INFO: (15) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 4.081642ms) Apr 4 14:18:31.462: INFO: (15) /api/v1/namespaces/proxy-2457/services/http:proxy-service-gh74f:portname2/proxy/: bar (200; 4.153963ms) Apr 4 14:18:31.462: INFO: (15) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:1080/proxy/: ... (200; 4.205652ms) Apr 4 14:18:31.463: INFO: (15) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 4.449027ms) Apr 4 14:18:31.463: INFO: (15) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 4.435992ms) Apr 4 14:18:31.463: INFO: (15) /api/v1/namespaces/proxy-2457/services/http:proxy-service-gh74f:portname1/proxy/: foo (200; 4.400855ms) Apr 4 14:18:31.463: INFO: (15) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname1/proxy/: tls baz (200; 4.404244ms) Apr 4 14:18:31.463: INFO: (15) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname2/proxy/: bar (200; 4.467331ms) Apr 4 14:18:31.463: INFO: (15) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname2/proxy/: tls qux (200; 4.514381ms) Apr 4 14:18:31.463: INFO: (15) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname1/proxy/: foo (200; 4.495039ms) Apr 4 14:18:31.465: INFO: (16) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 2.457218ms) Apr 4 14:18:31.467: INFO: (16) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 4.263289ms) Apr 4 14:18:31.467: INFO: (16) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:1080/proxy/: test<... 
(200; 4.288862ms) Apr 4 14:18:31.467: INFO: (16) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname2/proxy/: tls qux (200; 4.320967ms) Apr 4 14:18:31.467: INFO: (16) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:443/proxy/: ... (200; 4.529607ms) Apr 4 14:18:31.467: INFO: (16) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh/proxy/: test (200; 4.611527ms) Apr 4 14:18:31.468: INFO: (16) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 4.828042ms) Apr 4 14:18:31.468: INFO: (16) /api/v1/namespaces/proxy-2457/services/http:proxy-service-gh74f:portname1/proxy/: foo (200; 4.852844ms) Apr 4 14:18:31.468: INFO: (16) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname1/proxy/: tls baz (200; 5.07269ms) Apr 4 14:18:31.468: INFO: (16) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname1/proxy/: foo (200; 5.054469ms) Apr 4 14:18:31.468: INFO: (16) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:460/proxy/: tls baz (200; 5.100915ms) Apr 4 14:18:31.472: INFO: (17) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 3.593436ms) Apr 4 14:18:31.472: INFO: (17) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname1/proxy/: foo (200; 3.595515ms) Apr 4 14:18:31.472: INFO: (17) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 3.823869ms) Apr 4 14:18:31.472: INFO: (17) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:460/proxy/: tls baz (200; 3.895532ms) Apr 4 14:18:31.472: INFO: (17) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:1080/proxy/: test<... (200; 4.234117ms) Apr 4 14:18:31.472: INFO: (17) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:1080/proxy/: ... 
(200; 4.397975ms) Apr 4 14:18:31.472: INFO: (17) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:443/proxy/: test (200; 4.687036ms) Apr 4 14:18:31.473: INFO: (17) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 4.66143ms) Apr 4 14:18:31.473: INFO: (17) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 4.8424ms) Apr 4 14:18:31.473: INFO: (17) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:462/proxy/: tls qux (200; 4.952878ms) Apr 4 14:18:31.474: INFO: (17) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname2/proxy/: bar (200; 6.209066ms) Apr 4 14:18:31.474: INFO: (17) /api/v1/namespaces/proxy-2457/services/http:proxy-service-gh74f:portname1/proxy/: foo (200; 6.376792ms) Apr 4 14:18:31.475: INFO: (17) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname2/proxy/: tls qux (200; 6.550291ms) Apr 4 14:18:31.475: INFO: (17) /api/v1/namespaces/proxy-2457/services/http:proxy-service-gh74f:portname2/proxy/: bar (200; 6.5913ms) Apr 4 14:18:31.475: INFO: (17) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname1/proxy/: tls baz (200; 6.619118ms) Apr 4 14:18:31.477: INFO: (18) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 2.749226ms) Apr 4 14:18:31.477: INFO: (18) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:460/proxy/: tls baz (200; 2.765774ms) Apr 4 14:18:31.478: INFO: (18) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:443/proxy/: test<... (200; 4.363894ms) Apr 4 14:18:31.479: INFO: (18) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:1080/proxy/: ... 
(200; 4.340994ms) Apr 4 14:18:31.479: INFO: (18) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname2/proxy/: bar (200; 4.461857ms) Apr 4 14:18:31.479: INFO: (18) /api/v1/namespaces/proxy-2457/services/proxy-service-gh74f:portname1/proxy/: foo (200; 4.399363ms) Apr 4 14:18:31.479: INFO: (18) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh/proxy/: test (200; 4.462597ms) Apr 4 14:18:31.479: INFO: (18) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 4.613712ms) Apr 4 14:18:31.479: INFO: (18) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 4.644073ms) Apr 4 14:18:31.479: INFO: (18) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 4.617365ms) Apr 4 14:18:31.479: INFO: (18) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:462/proxy/: tls qux (200; 4.655613ms) Apr 4 14:18:31.479: INFO: (18) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname1/proxy/: tls baz (200; 4.7503ms) Apr 4 14:18:31.480: INFO: (18) /api/v1/namespaces/proxy-2457/services/https:proxy-service-gh74f:tlsportname2/proxy/: tls qux (200; 4.762425ms) Apr 4 14:18:31.480: INFO: (18) /api/v1/namespaces/proxy-2457/services/http:proxy-service-gh74f:portname1/proxy/: foo (200; 4.788407ms) Apr 4 14:18:31.483: INFO: (19) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 3.768463ms) Apr 4 14:18:31.484: INFO: (19) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:460/proxy/: tls baz (200; 3.874523ms) Apr 4 14:18:31.484: INFO: (19) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 3.804277ms) Apr 4 14:18:31.484: INFO: (19) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh:1080/proxy/: test<... (200; 3.886866ms) Apr 4 14:18:31.484: INFO: (19) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:1080/proxy/: ... 
(200; 3.935623ms) Apr 4 14:18:31.484: INFO: (19) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:160/proxy/: foo (200; 3.93549ms) Apr 4 14:18:31.484: INFO: (19) /api/v1/namespaces/proxy-2457/pods/proxy-service-gh74f-r2lnh/proxy/: test (200; 3.796741ms) Apr 4 14:18:31.484: INFO: (19) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:462/proxy/: tls qux (200; 3.943361ms) Apr 4 14:18:31.484: INFO: (19) /api/v1/namespaces/proxy-2457/pods/http:proxy-service-gh74f-r2lnh:162/proxy/: bar (200; 3.915509ms) Apr 4 14:18:31.484: INFO: (19) /api/v1/namespaces/proxy-2457/pods/https:proxy-service-gh74f-r2lnh:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 4 14:18:52.982: INFO: Successfully updated pod "pod-update-b2807905-ed42-42a8-87a1-41a2e5078d61" STEP: verifying the updated pod is in kubernetes Apr 4 14:18:52.994: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:18:52.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1034" for this suite. 
Apr 4 14:19:15.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:19:15.079: INFO: namespace pods-1034 deletion completed in 22.08110948s • [SLOW TEST:26.667 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:19:15.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 4 14:19:15.124: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:19:22.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5962" for this suite. 
Apr 4 14:19:28.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:19:28.316: INFO: namespace init-container-5962 deletion completed in 6.132296328s • [SLOW TEST:13.237 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:19:28.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-9878a2a7-4e75-41dc-b1ac-477e65cbc169 STEP: Creating a pod to test consume configMaps Apr 4 14:19:28.377: INFO: Waiting up to 5m0s for pod "pod-configmaps-59721ec5-5efb-401b-aa93-9f0377594046" in namespace "configmap-2350" to be "success or failure" Apr 4 14:19:28.430: INFO: Pod "pod-configmaps-59721ec5-5efb-401b-aa93-9f0377594046": Phase="Pending", Reason="", readiness=false. 
Elapsed: 53.401926ms Apr 4 14:19:30.435: INFO: Pod "pod-configmaps-59721ec5-5efb-401b-aa93-9f0377594046": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057827516s Apr 4 14:19:32.439: INFO: Pod "pod-configmaps-59721ec5-5efb-401b-aa93-9f0377594046": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062204865s STEP: Saw pod success Apr 4 14:19:32.439: INFO: Pod "pod-configmaps-59721ec5-5efb-401b-aa93-9f0377594046" satisfied condition "success or failure" Apr 4 14:19:32.442: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-59721ec5-5efb-401b-aa93-9f0377594046 container configmap-volume-test: STEP: delete the pod Apr 4 14:19:32.487: INFO: Waiting for pod pod-configmaps-59721ec5-5efb-401b-aa93-9f0377594046 to disappear Apr 4 14:19:32.507: INFO: Pod pod-configmaps-59721ec5-5efb-401b-aa93-9f0377594046 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:19:32.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2350" for this suite. 
Apr 4 14:19:38.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:19:38.644: INFO: namespace configmap-2350 deletion completed in 6.132804743s • [SLOW TEST:10.328 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:19:38.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 4 14:19:38.720: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d61962ff-5695-4227-8f5d-c2179edbbd43" in namespace "projected-6911" to be "success or failure" Apr 4 14:19:38.722: INFO: Pod "downwardapi-volume-d61962ff-5695-4227-8f5d-c2179edbbd43": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.383992ms Apr 4 14:19:40.726: INFO: Pod "downwardapi-volume-d61962ff-5695-4227-8f5d-c2179edbbd43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005822859s Apr 4 14:19:42.730: INFO: Pod "downwardapi-volume-d61962ff-5695-4227-8f5d-c2179edbbd43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010241272s STEP: Saw pod success Apr 4 14:19:42.730: INFO: Pod "downwardapi-volume-d61962ff-5695-4227-8f5d-c2179edbbd43" satisfied condition "success or failure" Apr 4 14:19:42.734: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-d61962ff-5695-4227-8f5d-c2179edbbd43 container client-container: STEP: delete the pod Apr 4 14:19:42.755: INFO: Waiting for pod downwardapi-volume-d61962ff-5695-4227-8f5d-c2179edbbd43 to disappear Apr 4 14:19:42.775: INFO: Pod downwardapi-volume-d61962ff-5695-4227-8f5d-c2179edbbd43 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:19:42.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6911" for this suite. 
Apr 4 14:19:48.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:19:48.896: INFO: namespace projected-6911 deletion completed in 6.118636345s • [SLOW TEST:10.251 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:19:48.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 4 14:19:48.971: INFO: Waiting up to 5m0s for pod "downward-api-867f90b5-09b2-484e-a8a8-a7e96e3992ec" in namespace "downward-api-1433" to be "success or failure" Apr 4 14:19:49.000: INFO: Pod "downward-api-867f90b5-09b2-484e-a8a8-a7e96e3992ec": Phase="Pending", Reason="", readiness=false. Elapsed: 28.431852ms Apr 4 14:19:51.004: INFO: Pod "downward-api-867f90b5-09b2-484e-a8a8-a7e96e3992ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032201227s Apr 4 14:19:53.008: INFO: Pod "downward-api-867f90b5-09b2-484e-a8a8-a7e96e3992ec": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036331026s STEP: Saw pod success Apr 4 14:19:53.008: INFO: Pod "downward-api-867f90b5-09b2-484e-a8a8-a7e96e3992ec" satisfied condition "success or failure" Apr 4 14:19:53.011: INFO: Trying to get logs from node iruya-worker2 pod downward-api-867f90b5-09b2-484e-a8a8-a7e96e3992ec container dapi-container: STEP: delete the pod Apr 4 14:19:53.067: INFO: Waiting for pod downward-api-867f90b5-09b2-484e-a8a8-a7e96e3992ec to disappear Apr 4 14:19:53.070: INFO: Pod downward-api-867f90b5-09b2-484e-a8a8-a7e96e3992ec no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:19:53.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1433" for this suite. Apr 4 14:19:59.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:19:59.164: INFO: namespace downward-api-1433 deletion completed in 6.089936915s • [SLOW TEST:10.267 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:19:59.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 4 14:20:07.286: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 4 14:20:07.323: INFO: Pod pod-with-prestop-exec-hook still exists Apr 4 14:20:09.323: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 4 14:20:09.328: INFO: Pod pod-with-prestop-exec-hook still exists Apr 4 14:20:11.323: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 4 14:20:11.327: INFO: Pod pod-with-prestop-exec-hook still exists Apr 4 14:20:13.323: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 4 14:20:13.327: INFO: Pod pod-with-prestop-exec-hook still exists Apr 4 14:20:15.323: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 4 14:20:15.329: INFO: Pod pod-with-prestop-exec-hook still exists Apr 4 14:20:17.323: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 4 14:20:17.327: INFO: Pod pod-with-prestop-exec-hook still exists Apr 4 14:20:19.323: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 4 14:20:19.327: INFO: Pod pod-with-prestop-exec-hook still exists Apr 4 14:20:21.323: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 4 14:20:21.327: INFO: Pod pod-with-prestop-exec-hook still exists Apr 4 14:20:23.323: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 4 14:20:23.328: INFO: Pod pod-with-prestop-exec-hook still exists Apr 4 14:20:25.323: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 4 14:20:25.327: INFO: Pod 
pod-with-prestop-exec-hook still exists Apr 4 14:20:27.323: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 4 14:20:27.328: INFO: Pod pod-with-prestop-exec-hook still exists Apr 4 14:20:29.323: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 4 14:20:29.327: INFO: Pod pod-with-prestop-exec-hook still exists Apr 4 14:20:31.323: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 4 14:20:31.328: INFO: Pod pod-with-prestop-exec-hook still exists Apr 4 14:20:33.323: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 4 14:20:33.327: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:20:33.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7103" for this suite. Apr 4 14:20:55.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:20:55.446: INFO: namespace container-lifecycle-hook-7103 deletion completed in 22.107941254s • [SLOW TEST:56.282 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:20:55.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 4 14:20:55.506: INFO: Waiting up to 5m0s for pod "downward-api-2c77dde8-ed1d-4974-a5e8-4f6af6d6ac60" in namespace "downward-api-4658" to be "success or failure" Apr 4 14:20:55.510: INFO: Pod "downward-api-2c77dde8-ed1d-4974-a5e8-4f6af6d6ac60": Phase="Pending", Reason="", readiness=false. Elapsed: 3.769806ms Apr 4 14:20:57.513: INFO: Pod "downward-api-2c77dde8-ed1d-4974-a5e8-4f6af6d6ac60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007253376s Apr 4 14:20:59.517: INFO: Pod "downward-api-2c77dde8-ed1d-4974-a5e8-4f6af6d6ac60": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011551405s STEP: Saw pod success Apr 4 14:20:59.517: INFO: Pod "downward-api-2c77dde8-ed1d-4974-a5e8-4f6af6d6ac60" satisfied condition "success or failure" Apr 4 14:20:59.521: INFO: Trying to get logs from node iruya-worker pod downward-api-2c77dde8-ed1d-4974-a5e8-4f6af6d6ac60 container dapi-container: STEP: delete the pod Apr 4 14:20:59.554: INFO: Waiting for pod downward-api-2c77dde8-ed1d-4974-a5e8-4f6af6d6ac60 to disappear Apr 4 14:20:59.563: INFO: Pod downward-api-2c77dde8-ed1d-4974-a5e8-4f6af6d6ac60 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:20:59.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4658" for this suite. Apr 4 14:21:05.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:21:05.658: INFO: namespace downward-api-4658 deletion completed in 6.091411541s • [SLOW TEST:10.210 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:21:05.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned 
in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:21:09.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-121" for this suite. Apr 4 14:21:15.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:21:15.876: INFO: namespace kubelet-test-121 deletion completed in 6.124632173s • [SLOW TEST:10.218 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:21:15.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting 
for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 4 14:21:15.917: INFO: Waiting up to 5m0s for pod "pod-d27e63fe-8f53-4e49-96af-6190bef92f75" in namespace "emptydir-5119" to be "success or failure" Apr 4 14:21:15.959: INFO: Pod "pod-d27e63fe-8f53-4e49-96af-6190bef92f75": Phase="Pending", Reason="", readiness=false. Elapsed: 41.401417ms Apr 4 14:21:17.963: INFO: Pod "pod-d27e63fe-8f53-4e49-96af-6190bef92f75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046296881s Apr 4 14:21:19.968: INFO: Pod "pod-d27e63fe-8f53-4e49-96af-6190bef92f75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050699431s STEP: Saw pod success Apr 4 14:21:19.968: INFO: Pod "pod-d27e63fe-8f53-4e49-96af-6190bef92f75" satisfied condition "success or failure" Apr 4 14:21:19.971: INFO: Trying to get logs from node iruya-worker pod pod-d27e63fe-8f53-4e49-96af-6190bef92f75 container test-container: STEP: delete the pod Apr 4 14:21:20.003: INFO: Waiting for pod pod-d27e63fe-8f53-4e49-96af-6190bef92f75 to disappear Apr 4 14:21:20.018: INFO: Pod pod-d27e63fe-8f53-4e49-96af-6190bef92f75 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:21:20.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5119" for this suite. 
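The "volume on tmpfs" test above creates an emptyDir volume with `medium: Memory`, which the kubelet backs with tmpfs. A minimal manifest of that kind is sketched below; names and the mount-check command are illustrative assumptions, not taken from this run.

```yaml
# Illustrative sketch only -- not the actual pod from this log.
# An emptyDir backed by tmpfs (medium: Memory), the volume type
# exercised by "volume on tmpfs should have the correct mode".
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /cache"]   # should report a tmpfs mount
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory
```

As with the other volume tests in this run, the framework waits for the pod to reach `Succeeded` and then inspects the container logs to verify the mount's type and mode.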
Apr 4 14:21:26.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:21:26.114: INFO: namespace emptydir-5119 deletion completed in 6.092025252s • [SLOW TEST:10.237 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:21:26.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Apr 4 14:21:26.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Apr 4 14:21:26.256: INFO: stderr: "" Apr 4 14:21:26.256: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at 
\x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:21:26.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5494" for this suite. Apr 4 14:21:32.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:21:32.349: INFO: namespace kubectl-5494 deletion completed in 6.089631007s • [SLOW TEST:6.234 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:21:32.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-e0450e6f-0fbb-4599-8353-113871c279ec in namespace container-probe-7495 Apr 4 14:21:36.428: INFO: Started pod busybox-e0450e6f-0fbb-4599-8353-113871c279ec in namespace container-probe-7495 STEP: checking the pod's current state and verifying that restartCount is present Apr 4 14:21:36.432: INFO: Initial restart count of pod busybox-e0450e6f-0fbb-4599-8353-113871c279ec is 0 Apr 4 14:22:28.549: INFO: Restart count of pod container-probe-7495/busybox-e0450e6f-0fbb-4599-8353-113871c279ec is now 1 (52.11719863s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:22:28.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7495" for this suite. Apr 4 14:22:34.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:22:34.697: INFO: namespace container-probe-7495 deletion completed in 6.108179923s • [SLOW TEST:62.348 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:22:34.699: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 4 14:22:34.736: INFO: Creating deployment "nginx-deployment" Apr 4 14:22:34.753: INFO: Waiting for observed generation 1 Apr 4 14:22:36.767: INFO: Waiting for all required pods to come up Apr 4 14:22:36.772: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 4 14:22:44.781: INFO: Waiting for deployment "nginx-deployment" to complete Apr 4 14:22:44.787: INFO: Updating deployment "nginx-deployment" with a non-existent image Apr 4 14:22:44.792: INFO: Updating deployment nginx-deployment Apr 4 14:22:44.792: INFO: Waiting for observed generation 2 Apr 4 14:22:46.802: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 4 14:22:46.805: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 4 14:22:46.876: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Apr 4 14:22:46.885: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 4 14:22:46.885: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 4 14:22:46.887: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Apr 4 14:22:46.891: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Apr 4 14:22:46.891: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Apr 4 14:22:46.896: INFO: Updating deployment nginx-deployment Apr 4 
14:22:46.896: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Apr 4 14:22:46.905: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 4 14:22:46.927: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 4 14:22:47.094: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-4551,SelfLink:/apis/apps/v1/namespaces/deployment-4551/deployments/nginx-deployment,UID:52c08804-8543-4a16-90bb-2099cf0bf20e,ResourceVersion:3601195,Generation:3,CreationTimestamp:2020-04-04 14:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} 
false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-04-04 14:22:45 +0000 UTC 2020-04-04 14:22:34 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-04-04 14:22:46 +0000 UTC 2020-04-04 14:22:46 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Apr 4 14:22:47.225: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-4551,SelfLink:/apis/apps/v1/namespaces/deployment-4551/replicasets/nginx-deployment-55fb7cb77f,UID:e10343fb-8c6f-420b-b816-440fe47ad496,ResourceVersion:3601238,Generation:3,CreationTimestamp:2020-04-04 14:22:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 52c08804-8543-4a16-90bb-2099cf0bf20e 0xc0030ec797 0xc0030ec798}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 4 14:22:47.225: INFO: All old ReplicaSets of Deployment "nginx-deployment": Apr 4 14:22:47.226: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-4551,SelfLink:/apis/apps/v1/namespaces/deployment-4551/replicasets/nginx-deployment-7b8c6f4498,UID:e7a17cb2-fd0b-4811-a2ca-d701ed45c622,ResourceVersion:3601223,Generation:3,CreationTimestamp:2020-04-04 14:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 52c08804-8543-4a16-90bb-2099cf0bf20e 0xc0030ec867 0xc0030ec868}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Apr 4 14:22:47.231: INFO: Pod "nginx-deployment-55fb7cb77f-5q74n" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5q74n,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-55fb7cb77f-5q74n,UID:7b8cce98-79ac-471c-b7fa-422afef12b2f,ResourceVersion:3601149,Generation:0,CreationTimestamp:2020-04-04 14:22:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e10343fb-8c6f-420b-b816-440fe47ad496 0xc0030ed1c7 0xc0030ed1c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0030ed260} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030ed280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-04 14:22:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.232: INFO: Pod "nginx-deployment-55fb7cb77f-7lgg4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7lgg4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-55fb7cb77f-7lgg4,UID:1f9b6a83-3626-489d-8e64-49af32fe72fd,ResourceVersion:3601213,Generation:0,CreationTimestamp:2020-04-04 14:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e10343fb-8c6f-420b-b816-440fe47ad496 0xc0030ed350 0xc0030ed351}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030ed3d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030ed3f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.232: INFO: Pod "nginx-deployment-55fb7cb77f-85tdp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-85tdp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-55fb7cb77f-85tdp,UID:ad5b7243-120d-43e7-a3bc-91782fe6604b,ResourceVersion:3601197,Generation:0,CreationTimestamp:2020-04-04 14:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e10343fb-8c6f-420b-b816-440fe47ad496 0xc0030ed477 0xc0030ed478}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0030ed4f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030ed510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.232: INFO: Pod "nginx-deployment-55fb7cb77f-c666b" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-c666b,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-55fb7cb77f-c666b,UID:fdc9bfc0-d241-46ec-b40f-ab4747a907ae,ResourceVersion:3601227,Generation:0,CreationTimestamp:2020-04-04 14:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e10343fb-8c6f-420b-b816-440fe47ad496 0xc0030ed597 0xc0030ed598}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030ed610} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030ed630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.232: INFO: Pod "nginx-deployment-55fb7cb77f-kzrn8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kzrn8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-55fb7cb77f-kzrn8,UID:949a6567-a02d-4b65-8af1-520cf61fa816,ResourceVersion:3601236,Generation:0,CreationTimestamp:2020-04-04 14:22:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e10343fb-8c6f-420b-b816-440fe47ad496 0xc0030ed6b7 0xc0030ed6b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030ed730} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030ed750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.232: INFO: Pod "nginx-deployment-55fb7cb77f-lgf8l" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lgf8l,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-55fb7cb77f-lgf8l,UID:e5db1954-ab77-4e3a-b3f0-82bfb6b6e7a6,ResourceVersion:3601207,Generation:0,CreationTimestamp:2020-04-04 14:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e10343fb-8c6f-420b-b816-440fe47ad496 0xc0030ed7d7 0xc0030ed7d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0030ed850} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030ed870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.233: INFO: Pod "nginx-deployment-55fb7cb77f-nkf54" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nkf54,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-55fb7cb77f-nkf54,UID:5acfe2d3-3e61-4afd-98ae-000a5f474279,ResourceVersion:3601177,Generation:0,CreationTimestamp:2020-04-04 14:22:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e10343fb-8c6f-420b-b816-440fe47ad496 0xc0030ed8f7 0xc0030ed8f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030ed970} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030ed990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-04 14:22:45 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.233: INFO: Pod "nginx-deployment-55fb7cb77f-p5vxb" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-p5vxb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-55fb7cb77f-p5vxb,UID:0d1e98bc-24c5-4992-9aa0-8ca183f7ef23,ResourceVersion:3601162,Generation:0,CreationTimestamp:2020-04-04 14:22:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e10343fb-8c6f-420b-b816-440fe47ad496 0xc0030eda70 0xc0030eda71}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0030edaf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030edb10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-04 14:22:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.233: INFO: Pod "nginx-deployment-55fb7cb77f-sbj7m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-sbj7m,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-55fb7cb77f-sbj7m,UID:24034f66-e466-4ce3-b375-65f79ad5418d,ResourceVersion:3601174,Generation:0,CreationTimestamp:2020-04-04 14:22:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e10343fb-8c6f-420b-b816-440fe47ad496 0xc0030edbe0 0xc0030edbe1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030edc60} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030edc80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-04 14:22:45 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.233: INFO: Pod "nginx-deployment-55fb7cb77f-sxs5v" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-sxs5v,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-55fb7cb77f-sxs5v,UID:aaf5c32c-52f7-4c57-a1ed-982a7d3dbb84,ResourceVersion:3601228,Generation:0,CreationTimestamp:2020-04-04 14:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e10343fb-8c6f-420b-b816-440fe47ad496 0xc0030edd50 0xc0030edd51}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030eddd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030eddf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.234: INFO: Pod "nginx-deployment-55fb7cb77f-xjpw5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xjpw5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-55fb7cb77f-xjpw5,UID:66aa3393-8677-4422-bc69-a975f243ce71,ResourceVersion:3601226,Generation:0,CreationTimestamp:2020-04-04 14:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e10343fb-8c6f-420b-b816-440fe47ad496 0xc0030ede77 0xc0030ede78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030edef0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030edf10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.234: INFO: Pod "nginx-deployment-55fb7cb77f-xqh84" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xqh84,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-55fb7cb77f-xqh84,UID:228efcdd-d133-4916-8ac1-518b3565519e,ResourceVersion:3601224,Generation:0,CreationTimestamp:2020-04-04 14:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e10343fb-8c6f-420b-b816-440fe47ad496 0xc0030edf97 0xc0030edf98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc00032a070} {node.kubernetes.io/unreachable Exists NoExecute 0xc00032a0a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.234: INFO: Pod "nginx-deployment-55fb7cb77f-zbkcc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zbkcc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-55fb7cb77f-zbkcc,UID:e71e04bd-aa5d-465d-9af4-ca8ae27ffd0f,ResourceVersion:3601151,Generation:0,CreationTimestamp:2020-04-04 14:22:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e10343fb-8c6f-420b-b816-440fe47ad496 0xc00032a2d7 0xc00032a2d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00032a460} {node.kubernetes.io/unreachable Exists NoExecute 0xc00032a480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-04 14:22:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.234: INFO: Pod "nginx-deployment-7b8c6f4498-5fxj5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5fxj5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-7b8c6f4498-5fxj5,UID:3eca95f7-37e8-4665-8f33-68c9f8c61cf5,ResourceVersion:3601242,Generation:0,CreationTimestamp:2020-04-04 14:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e7a17cb2-fd0b-4811-a2ca-d701ed45c622 0xc00032a670 0xc00032a671}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00032a910} {node.kubernetes.io/unreachable Exists NoExecute 0xc00032a950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-04 14:22:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.234: INFO: Pod "nginx-deployment-7b8c6f4498-656xv" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-656xv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-7b8c6f4498-656xv,UID:8622cb81-cbd7-4043-b9bf-e0e6a66cd7d5,ResourceVersion:3601117,Generation:0,CreationTimestamp:2020-04-04 14:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e7a17cb2-fd0b-4811-a2ca-d701ed45c622 0xc00032ace7 0xc00032ace8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00032af80} {node.kubernetes.io/unreachable Exists NoExecute 0xc00032b320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.77,StartTime:2020-04-04 14:22:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-04 14:22:43 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6cbdc7cdba0e4b17e3a0e3424751a4bec52ae12bfd7e9820f0ee8af10c440576}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.235: INFO: Pod "nginx-deployment-7b8c6f4498-69hd5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-69hd5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-7b8c6f4498-69hd5,UID:59ddc35c-2312-4313-b7eb-e9aa47d9d22c,ResourceVersion:3601229,Generation:0,CreationTimestamp:2020-04-04 14:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e7a17cb2-fd0b-4811-a2ca-d701ed45c622 0xc00032b8f7 0xc00032b8f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00032baa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00032baf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.235: INFO: Pod "nginx-deployment-7b8c6f4498-6c2nd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6c2nd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-7b8c6f4498-6c2nd,UID:3a8fe6f4-4536-448d-b99a-7e5789220aba,ResourceVersion:3601231,Generation:0,CreationTimestamp:2020-04-04 14:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e7a17cb2-fd0b-4811-a2ca-d701ed45c622 0xc00032bbe7 0xc00032bbe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00032bdb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00032be00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.235: INFO: Pod "nginx-deployment-7b8c6f4498-6ssw4" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6ssw4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-7b8c6f4498-6ssw4,UID:cd2b1aec-6b39-43a5-9c24-7152de464097,ResourceVersion:3601060,Generation:0,CreationTimestamp:2020-04-04 14:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e7a17cb2-fd0b-4811-a2ca-d701ed45c622 0xc002606007 0xc002606008}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002606080} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026060a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.73,StartTime:2020-04-04 14:22:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-04 14:22:38 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d86e01929280eb4aa99444544c270cf4b3e44725150caf195d771eee8dbd76aa}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.235: INFO: Pod "nginx-deployment-7b8c6f4498-6t474" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6t474,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-7b8c6f4498-6t474,UID:c9219fdd-3607-42d9-9ca5-699820a94d95,ResourceVersion:3601216,Generation:0,CreationTimestamp:2020-04-04 14:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e7a17cb2-fd0b-4811-a2ca-d701ed45c622 0xc002606177 0xc002606178}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026061f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002606210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.235: INFO: Pod "nginx-deployment-7b8c6f4498-89fcr" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-89fcr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-7b8c6f4498-89fcr,UID:8169aee3-eb54-4b34-979c-8fb7b21518ab,ResourceVersion:3601093,Generation:0,CreationTimestamp:2020-04-04 14:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e7a17cb2-fd0b-4811-a2ca-d701ed45c622 0xc002606297 0xc002606298}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002606310} {node.kubernetes.io/unreachable Exists NoExecute 0xc002606330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.189,StartTime:2020-04-04 14:22:34 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-04-04 14:22:41 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://210e68797a6a3f9fd636548dc43b1a2b4e50f8a0209e5fef4f6c1755ba47e160}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.236: INFO: Pod "nginx-deployment-7b8c6f4498-8pq8k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8pq8k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-7b8c6f4498-8pq8k,UID:1bca9185-dd8a-43a1-9665-49aca528ad3a,ResourceVersion:3601237,Generation:0,CreationTimestamp:2020-04-04 14:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e7a17cb2-fd0b-4811-a2ca-d701ed45c622 0xc002606407 0xc002606408}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002606480} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026064a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-04 14:22:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.236: INFO: Pod "nginx-deployment-7b8c6f4498-97zm5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-97zm5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-7b8c6f4498-97zm5,UID:d184cb5f-329c-4701-8198-9d244cb6ee19,ResourceVersion:3601233,Generation:0,CreationTimestamp:2020-04-04 14:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e7a17cb2-fd0b-4811-a2ca-d701ed45c622 0xc002606567 0xc002606568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026065e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002606600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.236: INFO: Pod "nginx-deployment-7b8c6f4498-987r4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-987r4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-7b8c6f4498-987r4,UID:48adb0c7-9dc6-4e18-8aec-58af1f55be3f,ResourceVersion:3601089,Generation:0,CreationTimestamp:2020-04-04 14:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e7a17cb2-fd0b-4811-a2ca-d701ed45c622 0xc002606687 0xc002606688}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002606700} {node.kubernetes.io/unreachable Exists NoExecute 0xc002606720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.188,StartTime:2020-04-04 14:22:34 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-04-04 14:22:41 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f5a4c7a65316faec8f1f63bbb0ff05b120e0f64f017eef18b830a324acec5f94}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.236: INFO: Pod "nginx-deployment-7b8c6f4498-9g62d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9g62d,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-7b8c6f4498-9g62d,UID:250b42c8-f4b9-46b3-9303-8eab2e38c700,ResourceVersion:3601200,Generation:0,CreationTimestamp:2020-04-04 14:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e7a17cb2-fd0b-4811-a2ca-d701ed45c622 0xc0026067f7 0xc0026067f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002606870} {node.kubernetes.io/unreachable Exists NoExecute 0xc002606890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.237: INFO: Pod "nginx-deployment-7b8c6f4498-brnwb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-brnwb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-7b8c6f4498-brnwb,UID:237a0502-57c8-4abf-be44-a96412d9f75f,ResourceVersion:3601230,Generation:0,CreationTimestamp:2020-04-04 14:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e7a17cb2-fd0b-4811-a2ca-d701ed45c622 0xc002606917 0xc002606918}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002606990} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026069b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.237: INFO: Pod "nginx-deployment-7b8c6f4498-c447g" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-c447g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-7b8c6f4498-c447g,UID:b801ce9e-5f68-42a0-ac47-8e6916fe55d3,ResourceVersion:3601111,Generation:0,CreationTimestamp:2020-04-04 14:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e7a17cb2-fd0b-4811-a2ca-d701ed45c622 0xc002606a37 0xc002606a38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002606ab0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002606ad0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.75,StartTime:2020-04-04 14:22:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-04 14:22:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c2ffbb389ec20954a97e5292c8242e62a132163274d013bea86ca963dd2e2277}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.237: INFO: Pod "nginx-deployment-7b8c6f4498-gdppk" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gdppk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-7b8c6f4498-gdppk,UID:3022becd-1d8b-4e93-b942-b73eb28b48a5,ResourceVersion:3601079,Generation:0,CreationTimestamp:2020-04-04 14:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e7a17cb2-fd0b-4811-a2ca-d701ed45c622 0xc002606ba7 0xc002606ba8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002606c20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002606c40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.187,StartTime:2020-04-04 14:22:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-04 14:22:40 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a42b1f9c9a8460d167e51a84b3f1424b57317c3e7e2881f3cdfc663fc4de1c1b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.237: INFO: Pod "nginx-deployment-7b8c6f4498-hmbq6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hmbq6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-7b8c6f4498-hmbq6,UID:37818f0a-779d-4763-9658-c44c73035378,ResourceVersion:3601219,Generation:0,CreationTimestamp:2020-04-04 14:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e7a17cb2-fd0b-4811-a2ca-d701ed45c622 0xc002606d17 0xc002606d18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002606d90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002606dc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.237: INFO: Pod "nginx-deployment-7b8c6f4498-j84gn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-j84gn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-7b8c6f4498-j84gn,UID:4fceab09-5228-4e28-8d4c-277a245a4980,ResourceVersion:3601106,Generation:0,CreationTimestamp:2020-04-04 14:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e7a17cb2-fd0b-4811-a2ca-d701ed45c622 0xc002606e47 0xc002606e48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002606ec0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002606ee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.191,StartTime:2020-04-04 14:22:35 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-04-04 14:22:43 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://48143e8e1004a28e40c592cd34e74d0d3523a6045772a9207fc44d28e82a5f0c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.238: INFO: Pod "nginx-deployment-7b8c6f4498-kklb2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kklb2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-7b8c6f4498-kklb2,UID:72f0afe3-0bfd-4e86-8ac2-020934557588,ResourceVersion:3601212,Generation:0,CreationTimestamp:2020-04-04 14:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e7a17cb2-fd0b-4811-a2ca-d701ed45c622 0xc002606fb7 0xc002606fb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002607050} {node.kubernetes.io/unreachable Exists NoExecute 0xc002607070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.238: INFO: Pod "nginx-deployment-7b8c6f4498-lw8vk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lw8vk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-7b8c6f4498-lw8vk,UID:2ab89801-7682-4f63-bd9a-5870de67aaf7,ResourceVersion:3601221,Generation:0,CreationTimestamp:2020-04-04 14:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e7a17cb2-fd0b-4811-a2ca-d701ed45c622 0xc0026070f7 0xc0026070f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002607170} {node.kubernetes.io/unreachable Exists NoExecute 0xc002607190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.238: INFO: Pod "nginx-deployment-7b8c6f4498-swfw4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-swfw4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-7b8c6f4498-swfw4,UID:a3749d00-b3bd-4146-a65a-1fa0b0356d6d,ResourceVersion:3601232,Generation:0,CreationTimestamp:2020-04-04 14:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e7a17cb2-fd0b-4811-a2ca-d701ed45c622 0xc002607217 0xc002607218}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002607290} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026072b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 4 14:22:47.238: INFO: Pod "nginx-deployment-7b8c6f4498-xnt4n" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xnt4n,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4551,SelfLink:/api/v1/namespaces/deployment-4551/pods/nginx-deployment-7b8c6f4498-xnt4n,UID:98d6b87a-322c-4f52-be1e-3358d2d23208,ResourceVersion:3601086,Generation:0,CreationTimestamp:2020-04-04 14:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e7a17cb2-fd0b-4811-a2ca-d701ed45c622 0xc002607337 0xc002607338}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dh2zt {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-dh2zt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dh2zt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026073b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026073d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:22:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.74,StartTime:2020-04-04 14:22:34 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-04-04 14:22:40 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a5f3785a75b40282249f71d686d488698031bfce2fda188509085889a0d72750}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:22:47.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4551" for this suite. Apr 4 14:23:03.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:23:03.526: INFO: namespace deployment-4551 deletion completed in 16.221321755s • [SLOW TEST:28.827 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:23:03.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-e3b7d9ad-2125-4e54-a20f-4bb1631f9206 STEP: Creating a pod to test consume secrets Apr 4 14:23:03.801: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e8c9c129-d806-43f6-8b24-d4780e1e768e" in namespace "projected-2952" to be "success or failure" Apr 4 14:23:03.804: INFO: Pod "pod-projected-secrets-e8c9c129-d806-43f6-8b24-d4780e1e768e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.266634ms Apr 4 14:23:05.807: INFO: Pod "pod-projected-secrets-e8c9c129-d806-43f6-8b24-d4780e1e768e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00633209s Apr 4 14:23:07.841: INFO: Pod "pod-projected-secrets-e8c9c129-d806-43f6-8b24-d4780e1e768e": Phase="Running", Reason="", readiness=true. Elapsed: 4.039694619s Apr 4 14:23:09.931: INFO: Pod "pod-projected-secrets-e8c9c129-d806-43f6-8b24-d4780e1e768e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.129740625s STEP: Saw pod success Apr 4 14:23:09.931: INFO: Pod "pod-projected-secrets-e8c9c129-d806-43f6-8b24-d4780e1e768e" satisfied condition "success or failure" Apr 4 14:23:09.935: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-e8c9c129-d806-43f6-8b24-d4780e1e768e container secret-volume-test: STEP: delete the pod Apr 4 14:23:09.974: INFO: Waiting for pod pod-projected-secrets-e8c9c129-d806-43f6-8b24-d4780e1e768e to disappear Apr 4 14:23:10.158: INFO: Pod pod-projected-secrets-e8c9c129-d806-43f6-8b24-d4780e1e768e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:23:10.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2952" for this suite. 
Apr 4 14:23:16.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:23:16.255: INFO: namespace projected-2952 deletion completed in 6.093224139s • [SLOW TEST:12.727 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:23:16.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Apr 4 14:23:16.292: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 4 14:23:16.300: INFO: Waiting for terminating namespaces to be deleted... 
Apr 4 14:23:16.303: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Apr 4 14:23:16.307: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 4 14:23:16.307: INFO: Container kube-proxy ready: true, restart count 0 Apr 4 14:23:16.307: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 4 14:23:16.307: INFO: Container kindnet-cni ready: true, restart count 0 Apr 4 14:23:16.307: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Apr 4 14:23:16.311: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Apr 4 14:23:16.311: INFO: Container coredns ready: true, restart count 0 Apr 4 14:23:16.311: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Apr 4 14:23:16.311: INFO: Container coredns ready: true, restart count 0 Apr 4 14:23:16.311: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Apr 4 14:23:16.311: INFO: Container kube-proxy ready: true, restart count 0 Apr 4 14:23:16.311: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Apr 4 14:23:16.311: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1602a3deda31041b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:23:17.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2577" for this suite. Apr 4 14:23:23.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:23:23.439: INFO: namespace sched-pred-2577 deletion completed in 6.103052248s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.184 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:23:23.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 4 14:23:27.546: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:23:27.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4486" for this suite. Apr 4 14:23:33.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:23:33.718: INFO: namespace container-runtime-4486 deletion completed in 6.152737804s • [SLOW TEST:10.278 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:23:33.718: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 4 14:23:33.777: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Apr 4 14:23:33.783: INFO: Number of nodes with available pods: 0 Apr 4 14:23:33.783: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Apr 4 14:23:33.841: INFO: Number of nodes with available pods: 0 Apr 4 14:23:33.841: INFO: Node iruya-worker is running more than one daemon pod Apr 4 14:23:34.846: INFO: Number of nodes with available pods: 0 Apr 4 14:23:34.846: INFO: Node iruya-worker is running more than one daemon pod Apr 4 14:23:35.847: INFO: Number of nodes with available pods: 0 Apr 4 14:23:35.847: INFO: Node iruya-worker is running more than one daemon pod Apr 4 14:23:36.845: INFO: Number of nodes with available pods: 1 Apr 4 14:23:36.845: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 4 14:23:36.876: INFO: Number of nodes with available pods: 1 Apr 4 14:23:36.876: INFO: Number of running nodes: 0, number of available pods: 1 Apr 4 14:23:37.881: INFO: Number of nodes with available pods: 0 Apr 4 14:23:37.881: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 4 14:23:37.890: INFO: Number of nodes with available pods: 0 Apr 4 14:23:37.890: INFO: Node iruya-worker is running more than one daemon pod Apr 4 
14:23:38.894: INFO: Number of nodes with available pods: 0 Apr 4 14:23:38.894: INFO: Node iruya-worker is running more than one daemon pod Apr 4 14:23:39.894: INFO: Number of nodes with available pods: 0 Apr 4 14:23:39.894: INFO: Node iruya-worker is running more than one daemon pod Apr 4 14:23:40.894: INFO: Number of nodes with available pods: 0 Apr 4 14:23:40.894: INFO: Node iruya-worker is running more than one daemon pod Apr 4 14:23:41.895: INFO: Number of nodes with available pods: 0 Apr 4 14:23:41.895: INFO: Node iruya-worker is running more than one daemon pod Apr 4 14:23:42.894: INFO: Number of nodes with available pods: 0 Apr 4 14:23:42.894: INFO: Node iruya-worker is running more than one daemon pod Apr 4 14:23:43.894: INFO: Number of nodes with available pods: 0 Apr 4 14:23:43.894: INFO: Node iruya-worker is running more than one daemon pod Apr 4 14:23:44.894: INFO: Number of nodes with available pods: 0 Apr 4 14:23:44.894: INFO: Node iruya-worker is running more than one daemon pod Apr 4 14:23:45.894: INFO: Number of nodes with available pods: 0 Apr 4 14:23:45.894: INFO: Node iruya-worker is running more than one daemon pod Apr 4 14:23:46.894: INFO: Number of nodes with available pods: 0 Apr 4 14:23:46.894: INFO: Node iruya-worker is running more than one daemon pod Apr 4 14:23:47.894: INFO: Number of nodes with available pods: 0 Apr 4 14:23:47.894: INFO: Node iruya-worker is running more than one daemon pod Apr 4 14:23:48.894: INFO: Number of nodes with available pods: 0 Apr 4 14:23:48.894: INFO: Node iruya-worker is running more than one daemon pod Apr 4 14:23:49.894: INFO: Number of nodes with available pods: 0 Apr 4 14:23:49.894: INFO: Node iruya-worker is running more than one daemon pod Apr 4 14:23:50.895: INFO: Number of nodes with available pods: 0 Apr 4 14:23:50.895: INFO: Node iruya-worker is running more than one daemon pod Apr 4 14:23:51.894: INFO: Number of nodes with available pods: 0 Apr 4 14:23:51.894: INFO: Node iruya-worker is running 
more than one daemon pod Apr 4 14:23:52.894: INFO: Number of nodes with available pods: 0 Apr 4 14:23:52.894: INFO: Node iruya-worker is running more than one daemon pod Apr 4 14:23:53.904: INFO: Number of nodes with available pods: 0 Apr 4 14:23:53.904: INFO: Node iruya-worker is running more than one daemon pod Apr 4 14:23:54.895: INFO: Number of nodes with available pods: 1 Apr 4 14:23:54.895: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6805, will wait for the garbage collector to delete the pods Apr 4 14:23:54.959: INFO: Deleting DaemonSet.extensions daemon-set took: 6.30346ms Apr 4 14:23:55.259: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.238124ms Apr 4 14:24:02.263: INFO: Number of nodes with available pods: 0 Apr 4 14:24:02.263: INFO: Number of running nodes: 0, number of available pods: 0 Apr 4 14:24:02.266: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6805/daemonsets","resourceVersion":"3601741"},"items":null} Apr 4 14:24:02.268: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6805/pods","resourceVersion":"3601741"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:24:02.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6805" for this suite. 
Apr 4 14:24:08.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:24:08.432: INFO: namespace daemonsets-6805 deletion completed in 6.127076537s • [SLOW TEST:34.714 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:24:08.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Apr 4 14:24:08.490: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix222504884/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:24:08.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5379" for this suite. 
Apr 4 14:24:14.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:24:14.659: INFO: namespace kubectl-5379 deletion completed in 6.095874304s • [SLOW TEST:6.226 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:24:14.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 4 14:24:14.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-6328' Apr 4 14:24:17.075: 
INFO: stderr: "" Apr 4 14:24:17.075: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Apr 4 14:24:22.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-6328 -o json' Apr 4 14:24:22.217: INFO: stderr: "" Apr 4 14:24:22.217: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-04T14:24:17Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-6328\",\n \"resourceVersion\": \"3601814\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-6328/pods/e2e-test-nginx-pod\",\n \"uid\": \"a053d5bd-6a09-42b4-a6dd-a3c6c068b758\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-6gtbd\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-6gtbd\",\n 
\"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-6gtbd\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-04T14:24:17Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-04T14:24:19Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-04T14:24:19Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-04T14:24:17Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://03f4f805d54f8605f637cdbd4e1e1b3f9c82aeb6837c6a2816b53ed9fa17c947\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-04T14:24:19Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.206\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-04T14:24:17Z\"\n }\n}\n" STEP: replace the image in the pod Apr 4 14:24:22.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6328' Apr 4 14:24:22.476: INFO: stderr: "" Apr 4 14:24:22.476: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Apr 4 14:24:22.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod 
--namespace=kubectl-6328' Apr 4 14:24:31.866: INFO: stderr: "" Apr 4 14:24:31.866: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:24:31.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6328" for this suite. Apr 4 14:24:37.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:24:37.974: INFO: namespace kubectl-6328 deletion completed in 6.103926731s • [SLOW TEST:23.315 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:24:37.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 4 14:24:38.048: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e0cc6ea9-8ddf-4ab5-88df-a0b318319db4" in namespace "downward-api-3328" to be "success or failure" Apr 4 14:24:38.051: INFO: Pod "downwardapi-volume-e0cc6ea9-8ddf-4ab5-88df-a0b318319db4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.050554ms Apr 4 14:24:40.055: INFO: Pod "downwardapi-volume-e0cc6ea9-8ddf-4ab5-88df-a0b318319db4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006834526s Apr 4 14:24:42.059: INFO: Pod "downwardapi-volume-e0cc6ea9-8ddf-4ab5-88df-a0b318319db4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01088824s STEP: Saw pod success Apr 4 14:24:42.059: INFO: Pod "downwardapi-volume-e0cc6ea9-8ddf-4ab5-88df-a0b318319db4" satisfied condition "success or failure" Apr 4 14:24:42.063: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e0cc6ea9-8ddf-4ab5-88df-a0b318319db4 container client-container: STEP: delete the pod Apr 4 14:24:42.080: INFO: Waiting for pod downwardapi-volume-e0cc6ea9-8ddf-4ab5-88df-a0b318319db4 to disappear Apr 4 14:24:42.093: INFO: Pod downwardapi-volume-e0cc6ea9-8ddf-4ab5-88df-a0b318319db4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:24:42.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3328" for this suite. 
Apr 4 14:24:48.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 4 14:24:48.209: INFO: namespace downward-api-3328 deletion completed in 6.111980492s • [SLOW TEST:10.234 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 4 14:24:48.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 4 14:24:52.305: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-2a095edc-01e3-46bb-a77f-738ddb2c2f48,GenerateName:,Namespace:events-7744,SelfLink:/api/v1/namespaces/events-7744/pods/send-events-2a095edc-01e3-46bb-a77f-738ddb2c2f48,UID:16719278-3d0b-4b33-bce1-a76446c1c1f9,ResourceVersion:3601934,Generation:0,CreationTimestamp:2020-04-04 14:24:48 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 276622464,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zdnxv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zdnxv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-zdnxv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002bbec00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002bbec20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:24:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:24:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:24:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 14:24:48 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.207,StartTime:2020-04-04 14:24:48 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-04-04 14:24:50 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://7bf435e997b1f38a801cec4228105da852cd2edb17ac98b0a01f53185cb14e9f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Apr 4 14:24:54.310: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 4 14:24:56.315: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 4 14:24:56.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7744" for this suite. 
Apr 4 14:25:34.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:25:34.441: INFO: namespace events-7744 deletion completed in 38.110790162s
• [SLOW TEST:46.232 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:25:34.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 4 14:25:39.039: INFO: Successfully updated pod "pod-update-activedeadlineseconds-30ae1a33-7cc6-43b2-87dc-195c27d4d33c"
Apr 4 14:25:39.039: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-30ae1a33-7cc6-43b2-87dc-195c27d4d33c" in namespace "pods-2527" to be "terminated due to deadline exceeded"
Apr 4 14:25:39.049: INFO: Pod "pod-update-activedeadlineseconds-30ae1a33-7cc6-43b2-87dc-195c27d4d33c": Phase="Running", Reason="", readiness=true. Elapsed: 10.54889ms
Apr 4 14:25:41.054: INFO: Pod "pod-update-activedeadlineseconds-30ae1a33-7cc6-43b2-87dc-195c27d4d33c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.014959902s
Apr 4 14:25:41.054: INFO: Pod "pod-update-activedeadlineseconds-30ae1a33-7cc6-43b2-87dc-195c27d4d33c" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:25:41.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2527" for this suite.
Apr 4 14:25:47.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:25:47.150: INFO: namespace pods-2527 deletion completed in 6.09134102s
• [SLOW TEST:12.708 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:25:47.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Apr 4 14:25:54.115: INFO: 3 pods remaining
Apr 4 14:25:54.115: INFO: 0 pods has nil DeletionTimestamp
Apr 4 14:25:54.115: INFO:
Apr 4 14:25:54.873: INFO: 0 pods remaining
Apr 4 14:25:54.873: INFO: 0 pods has nil DeletionTimestamp
Apr 4 14:25:54.873: INFO:
STEP: Gathering metrics
W0404 14:25:55.632509 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 4 14:25:55.632: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:25:55.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5524" for this suite.
Apr 4 14:26:01.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:26:01.870: INFO: namespace gc-5524 deletion completed in 6.234088443s
• [SLOW TEST:14.720 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:26:01.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 4 14:26:01.959: INFO: Waiting up to 5m0s for pod "pod-3381248c-755c-4ccc-af20-6f80b3358cb1" in namespace "emptydir-1443" to be "success or failure"
Apr 4 14:26:01.962: INFO: Pod "pod-3381248c-755c-4ccc-af20-6f80b3358cb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.704377ms
Apr 4 14:26:03.966: INFO: Pod "pod-3381248c-755c-4ccc-af20-6f80b3358cb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00627854s
Apr 4 14:26:05.970: INFO: Pod "pod-3381248c-755c-4ccc-af20-6f80b3358cb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010431331s
STEP: Saw pod success
Apr 4 14:26:05.970: INFO: Pod "pod-3381248c-755c-4ccc-af20-6f80b3358cb1" satisfied condition "success or failure"
Apr 4 14:26:05.973: INFO: Trying to get logs from node iruya-worker2 pod pod-3381248c-755c-4ccc-af20-6f80b3358cb1 container test-container:
STEP: delete the pod
Apr 4 14:26:06.008: INFO: Waiting for pod pod-3381248c-755c-4ccc-af20-6f80b3358cb1 to disappear
Apr 4 14:26:06.020: INFO: Pod pod-3381248c-755c-4ccc-af20-6f80b3358cb1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:26:06.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1443" for this suite.
Apr 4 14:26:12.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:26:12.106: INFO: namespace emptydir-1443 deletion completed in 6.082379037s
• [SLOW TEST:10.235 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:26:12.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 4 14:26:16.729: INFO: Successfully updated pod "labelsupdate0f259e64-5520-4468-8506-213139183e16"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:26:18.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1146" for this suite.
Apr 4 14:26:40.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:26:40.878: INFO: namespace downward-api-1146 deletion completed in 22.106734579s
• [SLOW TEST:28.771 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:26:40.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-5ed295a6-77a1-4485-897e-e182140ad1ea in namespace container-probe-2669
Apr 4 14:26:44.948: INFO: Started pod test-webserver-5ed295a6-77a1-4485-897e-e182140ad1ea in namespace container-probe-2669
STEP: checking the pod's current state and verifying that restartCount is present
Apr 4 14:26:44.951: INFO: Initial restart count of pod test-webserver-5ed295a6-77a1-4485-897e-e182140ad1ea is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:30:45.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2669" for this suite.
Apr 4 14:30:51.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:30:51.655: INFO: namespace container-probe-2669 deletion completed in 6.118984784s
• [SLOW TEST:250.777 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:30:51.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1397
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-1397
STEP: Waiting until all stateful set ss
replicas will be running in namespace statefulset-1397 Apr 4 14:30:51.732: INFO: Found 0 stateful pods, waiting for 1 Apr 4 14:31:01.736: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 4 14:31:01.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1397 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 4 14:31:01.990: INFO: stderr: "I0404 14:31:01.876754 2886 log.go:172] (0xc000926420) (0xc0004286e0) Create stream\nI0404 14:31:01.876809 2886 log.go:172] (0xc000926420) (0xc0004286e0) Stream added, broadcasting: 1\nI0404 14:31:01.879608 2886 log.go:172] (0xc000926420) Reply frame received for 1\nI0404 14:31:01.879801 2886 log.go:172] (0xc000926420) (0xc00081a000) Create stream\nI0404 14:31:01.879904 2886 log.go:172] (0xc000926420) (0xc00081a000) Stream added, broadcasting: 3\nI0404 14:31:01.881592 2886 log.go:172] (0xc000926420) Reply frame received for 3\nI0404 14:31:01.881641 2886 log.go:172] (0xc000926420) (0xc00081a0a0) Create stream\nI0404 14:31:01.881652 2886 log.go:172] (0xc000926420) (0xc00081a0a0) Stream added, broadcasting: 5\nI0404 14:31:01.882613 2886 log.go:172] (0xc000926420) Reply frame received for 5\nI0404 14:31:01.949825 2886 log.go:172] (0xc000926420) Data frame received for 5\nI0404 14:31:01.949854 2886 log.go:172] (0xc00081a0a0) (5) Data frame handling\nI0404 14:31:01.949872 2886 log.go:172] (0xc00081a0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0404 14:31:01.982820 2886 log.go:172] (0xc000926420) Data frame received for 5\nI0404 14:31:01.982856 2886 log.go:172] (0xc00081a0a0) (5) Data frame handling\nI0404 14:31:01.982918 2886 log.go:172] (0xc000926420) Data frame received for 3\nI0404 14:31:01.982955 2886 log.go:172] (0xc00081a000) (3) Data frame handling\nI0404 14:31:01.983035 2886 log.go:172] (0xc00081a000) 
(3) Data frame sent\nI0404 14:31:01.983053 2886 log.go:172] (0xc000926420) Data frame received for 3\nI0404 14:31:01.983063 2886 log.go:172] (0xc00081a000) (3) Data frame handling\nI0404 14:31:01.985028 2886 log.go:172] (0xc000926420) Data frame received for 1\nI0404 14:31:01.985059 2886 log.go:172] (0xc0004286e0) (1) Data frame handling\nI0404 14:31:01.985096 2886 log.go:172] (0xc0004286e0) (1) Data frame sent\nI0404 14:31:01.985275 2886 log.go:172] (0xc000926420) (0xc0004286e0) Stream removed, broadcasting: 1\nI0404 14:31:01.985604 2886 log.go:172] (0xc000926420) Go away received\nI0404 14:31:01.985784 2886 log.go:172] (0xc000926420) (0xc0004286e0) Stream removed, broadcasting: 1\nI0404 14:31:01.985813 2886 log.go:172] (0xc000926420) (0xc00081a000) Stream removed, broadcasting: 3\nI0404 14:31:01.985830 2886 log.go:172] (0xc000926420) (0xc00081a0a0) Stream removed, broadcasting: 5\n" Apr 4 14:31:01.990: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 4 14:31:01.990: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 4 14:31:01.994: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 4 14:31:11.999: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 4 14:31:11.999: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 14:31:12.020: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999722s Apr 4 14:31:13.024: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.991574647s Apr 4 14:31:14.030: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.987220284s Apr 4 14:31:15.035: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.98177197s Apr 4 14:31:16.047: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.976643503s Apr 4 14:31:17.052: INFO: Verifying statefulset ss doesn't scale past 
1 for another 4.964630844s Apr 4 14:31:18.056: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.959682858s Apr 4 14:31:19.061: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.954718071s Apr 4 14:31:20.066: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.950201048s Apr 4 14:31:21.071: INFO: Verifying statefulset ss doesn't scale past 1 for another 945.306542ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1397 Apr 4 14:31:22.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 4 14:31:22.299: INFO: stderr: "I0404 14:31:22.205042 2907 log.go:172] (0xc00012a6e0) (0xc00011e6e0) Create stream\nI0404 14:31:22.205265 2907 log.go:172] (0xc00012a6e0) (0xc00011e6e0) Stream added, broadcasting: 1\nI0404 14:31:22.208361 2907 log.go:172] (0xc00012a6e0) Reply frame received for 1\nI0404 14:31:22.208399 2907 log.go:172] (0xc00012a6e0) (0xc00098c000) Create stream\nI0404 14:31:22.208414 2907 log.go:172] (0xc00012a6e0) (0xc00098c000) Stream added, broadcasting: 3\nI0404 14:31:22.209493 2907 log.go:172] (0xc00012a6e0) Reply frame received for 3\nI0404 14:31:22.209538 2907 log.go:172] (0xc00012a6e0) (0xc00091e000) Create stream\nI0404 14:31:22.209558 2907 log.go:172] (0xc00012a6e0) (0xc00091e000) Stream added, broadcasting: 5\nI0404 14:31:22.210558 2907 log.go:172] (0xc00012a6e0) Reply frame received for 5\nI0404 14:31:22.292245 2907 log.go:172] (0xc00012a6e0) Data frame received for 5\nI0404 14:31:22.292285 2907 log.go:172] (0xc00091e000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0404 14:31:22.292326 2907 log.go:172] (0xc00012a6e0) Data frame received for 3\nI0404 14:31:22.292360 2907 log.go:172] (0xc00098c000) (3) Data frame handling\nI0404 14:31:22.292383 2907 log.go:172] (0xc00098c000) (3) 
Data frame sent\nI0404 14:31:22.292394 2907 log.go:172] (0xc00012a6e0) Data frame received for 3\nI0404 14:31:22.292404 2907 log.go:172] (0xc00098c000) (3) Data frame handling\nI0404 14:31:22.292435 2907 log.go:172] (0xc00091e000) (5) Data frame sent\nI0404 14:31:22.292450 2907 log.go:172] (0xc00012a6e0) Data frame received for 5\nI0404 14:31:22.292460 2907 log.go:172] (0xc00091e000) (5) Data frame handling\nI0404 14:31:22.294348 2907 log.go:172] (0xc00012a6e0) Data frame received for 1\nI0404 14:31:22.294369 2907 log.go:172] (0xc00011e6e0) (1) Data frame handling\nI0404 14:31:22.294387 2907 log.go:172] (0xc00011e6e0) (1) Data frame sent\nI0404 14:31:22.294421 2907 log.go:172] (0xc00012a6e0) (0xc00011e6e0) Stream removed, broadcasting: 1\nI0404 14:31:22.294444 2907 log.go:172] (0xc00012a6e0) Go away received\nI0404 14:31:22.294798 2907 log.go:172] (0xc00012a6e0) (0xc00011e6e0) Stream removed, broadcasting: 1\nI0404 14:31:22.294821 2907 log.go:172] (0xc00012a6e0) (0xc00098c000) Stream removed, broadcasting: 3\nI0404 14:31:22.294834 2907 log.go:172] (0xc00012a6e0) (0xc00091e000) Stream removed, broadcasting: 5\n" Apr 4 14:31:22.299: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 4 14:31:22.299: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 4 14:31:22.344: INFO: Found 1 stateful pods, waiting for 3 Apr 4 14:31:32.348: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 4 14:31:32.348: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 4 14:31:32.348: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 4 14:31:32.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-1397 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 4 14:31:32.594: INFO: stderr: "I0404 14:31:32.494876 2927 log.go:172] (0xc000116dc0) (0xc0005a2820) Create stream\nI0404 14:31:32.494949 2927 log.go:172] (0xc000116dc0) (0xc0005a2820) Stream added, broadcasting: 1\nI0404 14:31:32.499650 2927 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0404 14:31:32.499708 2927 log.go:172] (0xc000116dc0) (0xc0005a2000) Create stream\nI0404 14:31:32.499722 2927 log.go:172] (0xc000116dc0) (0xc0005a2000) Stream added, broadcasting: 3\nI0404 14:31:32.500695 2927 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0404 14:31:32.500740 2927 log.go:172] (0xc000116dc0) (0xc00069c140) Create stream\nI0404 14:31:32.500759 2927 log.go:172] (0xc000116dc0) (0xc00069c140) Stream added, broadcasting: 5\nI0404 14:31:32.501781 2927 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0404 14:31:32.588553 2927 log.go:172] (0xc000116dc0) Data frame received for 5\nI0404 14:31:32.588586 2927 log.go:172] (0xc00069c140) (5) Data frame handling\nI0404 14:31:32.588602 2927 log.go:172] (0xc00069c140) (5) Data frame sent\nI0404 14:31:32.588614 2927 log.go:172] (0xc000116dc0) Data frame received for 5\nI0404 14:31:32.588624 2927 log.go:172] (0xc00069c140) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0404 14:31:32.588668 2927 log.go:172] (0xc000116dc0) Data frame received for 3\nI0404 14:31:32.588690 2927 log.go:172] (0xc0005a2000) (3) Data frame handling\nI0404 14:31:32.588702 2927 log.go:172] (0xc0005a2000) (3) Data frame sent\nI0404 14:31:32.588712 2927 log.go:172] (0xc000116dc0) Data frame received for 3\nI0404 14:31:32.588721 2927 log.go:172] (0xc0005a2000) (3) Data frame handling\nI0404 14:31:32.590717 2927 log.go:172] (0xc000116dc0) Data frame received for 1\nI0404 14:31:32.590740 2927 log.go:172] (0xc0005a2820) (1) Data frame handling\nI0404 14:31:32.590751 2927 log.go:172] (0xc0005a2820) (1) Data 
frame sent\nI0404 14:31:32.590768 2927 log.go:172] (0xc000116dc0) (0xc0005a2820) Stream removed, broadcasting: 1\nI0404 14:31:32.591070 2927 log.go:172] (0xc000116dc0) Go away received\nI0404 14:31:32.591099 2927 log.go:172] (0xc000116dc0) (0xc0005a2820) Stream removed, broadcasting: 1\nI0404 14:31:32.591110 2927 log.go:172] (0xc000116dc0) (0xc0005a2000) Stream removed, broadcasting: 3\nI0404 14:31:32.591128 2927 log.go:172] (0xc000116dc0) (0xc00069c140) Stream removed, broadcasting: 5\n" Apr 4 14:31:32.594: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 4 14:31:32.594: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 4 14:31:32.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1397 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 4 14:31:32.851: INFO: stderr: "I0404 14:31:32.721496 2947 log.go:172] (0xc00011b080) (0xc00058cc80) Create stream\nI0404 14:31:32.721563 2947 log.go:172] (0xc00011b080) (0xc00058cc80) Stream added, broadcasting: 1\nI0404 14:31:32.724331 2947 log.go:172] (0xc00011b080) Reply frame received for 1\nI0404 14:31:32.724401 2947 log.go:172] (0xc00011b080) (0xc000848000) Create stream\nI0404 14:31:32.724437 2947 log.go:172] (0xc00011b080) (0xc000848000) Stream added, broadcasting: 3\nI0404 14:31:32.726552 2947 log.go:172] (0xc00011b080) Reply frame received for 3\nI0404 14:31:32.726663 2947 log.go:172] (0xc00011b080) (0xc000954000) Create stream\nI0404 14:31:32.726693 2947 log.go:172] (0xc00011b080) (0xc000954000) Stream added, broadcasting: 5\nI0404 14:31:32.728771 2947 log.go:172] (0xc00011b080) Reply frame received for 5\nI0404 14:31:32.809626 2947 log.go:172] (0xc00011b080) Data frame received for 5\nI0404 14:31:32.809674 2947 log.go:172] (0xc000954000) (5) Data frame handling\nI0404 14:31:32.809711 2947 log.go:172] (0xc000954000) (5) Data 
frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0404 14:31:32.843933 2947 log.go:172] (0xc00011b080) Data frame received for 3\nI0404 14:31:32.843957 2947 log.go:172] (0xc000848000) (3) Data frame handling\nI0404 14:31:32.843966 2947 log.go:172] (0xc000848000) (3) Data frame sent\nI0404 14:31:32.843995 2947 log.go:172] (0xc00011b080) Data frame received for 3\nI0404 14:31:32.844001 2947 log.go:172] (0xc000848000) (3) Data frame handling\nI0404 14:31:32.844029 2947 log.go:172] (0xc00011b080) Data frame received for 5\nI0404 14:31:32.844059 2947 log.go:172] (0xc000954000) (5) Data frame handling\nI0404 14:31:32.846416 2947 log.go:172] (0xc00011b080) Data frame received for 1\nI0404 14:31:32.846460 2947 log.go:172] (0xc00058cc80) (1) Data frame handling\nI0404 14:31:32.846485 2947 log.go:172] (0xc00058cc80) (1) Data frame sent\nI0404 14:31:32.846515 2947 log.go:172] (0xc00011b080) (0xc00058cc80) Stream removed, broadcasting: 1\nI0404 14:31:32.846558 2947 log.go:172] (0xc00011b080) Go away received\nI0404 14:31:32.847002 2947 log.go:172] (0xc00011b080) (0xc00058cc80) Stream removed, broadcasting: 1\nI0404 14:31:32.847032 2947 log.go:172] (0xc00011b080) (0xc000848000) Stream removed, broadcasting: 3\nI0404 14:31:32.847045 2947 log.go:172] (0xc00011b080) (0xc000954000) Stream removed, broadcasting: 5\n" Apr 4 14:31:32.852: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 4 14:31:32.852: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 4 14:31:32.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1397 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 4 14:31:33.065: INFO: stderr: "I0404 14:31:32.968435 2967 log.go:172] (0xc000764a50) (0xc00032a820) Create stream\nI0404 14:31:32.968496 2967 log.go:172] (0xc000764a50) (0xc00032a820) Stream added, broadcasting: 
1\nI0404 14:31:32.970594 2967 log.go:172] (0xc000764a50) Reply frame received for 1\nI0404 14:31:32.970645 2967 log.go:172] (0xc000764a50) (0xc000778320) Create stream\nI0404 14:31:32.970677 2967 log.go:172] (0xc000764a50) (0xc000778320) Stream added, broadcasting: 3\nI0404 14:31:32.971575 2967 log.go:172] (0xc000764a50) Reply frame received for 3\nI0404 14:31:32.971601 2967 log.go:172] (0xc000764a50) (0xc00032a8c0) Create stream\nI0404 14:31:32.971620 2967 log.go:172] (0xc000764a50) (0xc00032a8c0) Stream added, broadcasting: 5\nI0404 14:31:32.972682 2967 log.go:172] (0xc000764a50) Reply frame received for 5\nI0404 14:31:33.030101 2967 log.go:172] (0xc000764a50) Data frame received for 5\nI0404 14:31:33.030128 2967 log.go:172] (0xc00032a8c0) (5) Data frame handling\nI0404 14:31:33.030140 2967 log.go:172] (0xc00032a8c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0404 14:31:33.057955 2967 log.go:172] (0xc000764a50) Data frame received for 3\nI0404 14:31:33.058002 2967 log.go:172] (0xc000778320) (3) Data frame handling\nI0404 14:31:33.058056 2967 log.go:172] (0xc000778320) (3) Data frame sent\nI0404 14:31:33.058074 2967 log.go:172] (0xc000764a50) Data frame received for 3\nI0404 14:31:33.058082 2967 log.go:172] (0xc000778320) (3) Data frame handling\nI0404 14:31:33.058166 2967 log.go:172] (0xc000764a50) Data frame received for 5\nI0404 14:31:33.058181 2967 log.go:172] (0xc00032a8c0) (5) Data frame handling\nI0404 14:31:33.059968 2967 log.go:172] (0xc000764a50) Data frame received for 1\nI0404 14:31:33.060010 2967 log.go:172] (0xc00032a820) (1) Data frame handling\nI0404 14:31:33.060035 2967 log.go:172] (0xc00032a820) (1) Data frame sent\nI0404 14:31:33.060068 2967 log.go:172] (0xc000764a50) (0xc00032a820) Stream removed, broadcasting: 1\nI0404 14:31:33.060105 2967 log.go:172] (0xc000764a50) Go away received\nI0404 14:31:33.060527 2967 log.go:172] (0xc000764a50) (0xc00032a820) Stream removed, broadcasting: 1\nI0404 14:31:33.060547 2967 
log.go:172] (0xc000764a50) (0xc000778320) Stream removed, broadcasting: 3\nI0404 14:31:33.060557 2967 log.go:172] (0xc000764a50) (0xc00032a8c0) Stream removed, broadcasting: 5\n" Apr 4 14:31:33.065: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 4 14:31:33.065: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 4 14:31:33.065: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 14:31:33.074: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 4 14:31:43.083: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 4 14:31:43.083: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 4 14:31:43.083: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 4 14:31:43.110: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999685s Apr 4 14:31:44.123: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.979349648s Apr 4 14:31:45.128: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.965940363s Apr 4 14:31:46.135: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.961327395s Apr 4 14:31:47.147: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.953909292s Apr 4 14:31:48.153: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.941941601s Apr 4 14:31:49.158: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.936294151s Apr 4 14:31:50.163: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.930862739s Apr 4 14:31:51.196: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.926023859s Apr 4 14:31:52.201: INFO: Verifying statefulset ss doesn't scale past 3 for another 893.087762ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in 
namespace statefulset-1397
Apr 4 14:31:53.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1397 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 4 14:31:53.436: INFO: stderr: "I0404 14:31:53.344925 2987 log.go:172] (0xc0009fa630) (0xc000382820) Create stream\nI0404 14:31:53.345000 2987 log.go:172] (0xc0009fa630) (0xc000382820) Stream added, broadcasting: 1\nI0404 14:31:53.348513 2987 log.go:172] (0xc0009fa630) Reply frame received for 1\nI0404 14:31:53.348585 2987 log.go:172] (0xc0009fa630) (0xc00074a000) Create stream\nI0404 14:31:53.348625 2987 log.go:172] (0xc0009fa630) (0xc00074a000) Stream added, broadcasting: 3\nI0404 14:31:53.349845 2987 log.go:172] (0xc0009fa630) Reply frame received for 3\nI0404 14:31:53.349872 2987 log.go:172] (0xc0009fa630) (0xc000382000) Create stream\nI0404 14:31:53.349887 2987 log.go:172] (0xc0009fa630) (0xc000382000) Stream added, broadcasting: 5\nI0404 14:31:53.350672 2987 log.go:172] (0xc0009fa630) Reply frame received for 5\nI0404 14:31:53.429901 2987 log.go:172] (0xc0009fa630) Data frame received for 3\nI0404 14:31:53.429960 2987 log.go:172] (0xc00074a000) (3) Data frame handling\nI0404 14:31:53.429974 2987 log.go:172] (0xc00074a000) (3) Data frame sent\nI0404 14:31:53.430006 2987 log.go:172] (0xc0009fa630) Data frame received for 5\nI0404 14:31:53.430020 2987 log.go:172] (0xc000382000) (5) Data frame handling\nI0404 14:31:53.430031 2987 log.go:172] (0xc000382000) (5) Data frame sent\nI0404 14:31:53.430042 2987 log.go:172] (0xc0009fa630) Data frame received for 5\nI0404 14:31:53.430062 2987 log.go:172] (0xc000382000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0404 14:31:53.430084 2987 log.go:172] (0xc0009fa630) Data frame received for 3\nI0404 14:31:53.430129 2987 log.go:172] (0xc00074a000) (3) Data frame handling\nI0404 14:31:53.431445 2987 log.go:172] (0xc0009fa630) Data frame received for 1\nI0404 14:31:53.431463 2987 log.go:172] (0xc000382820) (1) Data frame handling\nI0404 14:31:53.431476 2987 log.go:172] (0xc000382820) (1) Data frame sent\nI0404 14:31:53.431497 2987 log.go:172] (0xc0009fa630) (0xc000382820) Stream removed, broadcasting: 1\nI0404 14:31:53.431747 2987 log.go:172] (0xc0009fa630) (0xc000382820) Stream removed, broadcasting: 1\nI0404 14:31:53.431764 2987 log.go:172] (0xc0009fa630) (0xc00074a000) Stream removed, broadcasting: 3\nI0404 14:31:53.431772 2987 log.go:172] (0xc0009fa630) (0xc000382000) Stream removed, broadcasting: 5\n"
Apr 4 14:31:53.436: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 4 14:31:53.436: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Apr 4 14:31:53.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1397 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 4 14:31:53.625: INFO: stderr: "I0404 14:31:53.567505 3007 log.go:172] (0xc0006c8420) (0xc00021ac80) Create stream\nI0404 14:31:53.567565 3007 log.go:172] (0xc0006c8420) (0xc00021ac80) Stream added, broadcasting: 1\nI0404 14:31:53.570239 3007 log.go:172] (0xc0006c8420) Reply frame received for 1\nI0404 14:31:53.570311 3007 log.go:172] (0xc0006c8420) (0xc0009a0000) Create stream\nI0404 14:31:53.570343 3007 log.go:172] (0xc0006c8420) (0xc0009a0000) Stream added, broadcasting: 3\nI0404 14:31:53.571558 3007 log.go:172] (0xc0006c8420) Reply frame received for 3\nI0404 14:31:53.571656 3007 log.go:172] (0xc0006c8420) (0xc0009a00a0) Create stream\nI0404 14:31:53.571732 3007 log.go:172] (0xc0006c8420) (0xc0009a00a0) Stream added, broadcasting: 5\nI0404 14:31:53.572948 3007 log.go:172] (0xc0006c8420) Reply frame received for 5\nI0404 14:31:53.619121 3007 log.go:172] (0xc0006c8420) Data frame received for 3\nI0404 14:31:53.619173 3007 log.go:172] (0xc0009a0000) (3) Data frame handling\nI0404 14:31:53.619193 3007 log.go:172] (0xc0009a0000) (3) Data frame sent\nI0404 14:31:53.619208 3007 log.go:172] (0xc0006c8420) Data frame received for 3\nI0404 14:31:53.619220 3007 log.go:172] (0xc0009a0000) (3) Data frame handling\nI0404 14:31:53.619245 3007 log.go:172] (0xc0006c8420) Data frame received for 5\nI0404 14:31:53.619260 3007 log.go:172] (0xc0009a00a0) (5) Data frame handling\nI0404 14:31:53.619272 3007 log.go:172] (0xc0009a00a0) (5) Data frame sent\nI0404 14:31:53.619279 3007 log.go:172] (0xc0006c8420) Data frame received for 5\nI0404 14:31:53.619289 3007 log.go:172] (0xc0009a00a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0404 14:31:53.620734 3007 log.go:172] (0xc0006c8420) Data frame received for 1\nI0404 14:31:53.620760 3007 log.go:172] (0xc00021ac80) (1) Data frame handling\nI0404 14:31:53.620779 3007 log.go:172] (0xc00021ac80) (1) Data frame sent\nI0404 14:31:53.620795 3007 log.go:172] (0xc0006c8420) (0xc00021ac80) Stream removed, broadcasting: 1\nI0404 14:31:53.620942 3007 log.go:172] (0xc0006c8420) Go away received\nI0404 14:31:53.621369 3007 log.go:172] (0xc0006c8420) (0xc00021ac80) Stream removed, broadcasting: 1\nI0404 14:31:53.621390 3007 log.go:172] (0xc0006c8420) (0xc0009a0000) Stream removed, broadcasting: 3\nI0404 14:31:53.621401 3007 log.go:172] (0xc0006c8420) (0xc0009a00a0) Stream removed, broadcasting: 5\n"
Apr 4 14:31:53.625: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 4 14:31:53.625: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Apr 4 14:31:53.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1397 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 4 14:31:53.837: INFO: stderr: "I0404 14:31:53.754602 3028 log.go:172] (0xc000a20630) (0xc000702c80) Create stream\nI0404 14:31:53.754682 3028 log.go:172] (0xc000a20630) (0xc000702c80) Stream added, broadcasting: 1\nI0404 14:31:53.757617 3028 log.go:172] (0xc000a20630) Reply frame received for 1\nI0404 14:31:53.757723 3028 log.go:172] (0xc000a20630) (0xc0008f4000) Create stream\nI0404 14:31:53.759190 3028 log.go:172] (0xc000a20630) (0xc0008f4000) Stream added, broadcasting: 3\nI0404 14:31:53.760103 3028 log.go:172] (0xc000a20630) Reply frame received for 3\nI0404 14:31:53.760146 3028 log.go:172] (0xc000a20630) (0xc00053fcc0) Create stream\nI0404 14:31:53.760155 3028 log.go:172] (0xc000a20630) (0xc00053fcc0) Stream added, broadcasting: 5\nI0404 14:31:53.760883 3028 log.go:172] (0xc000a20630) Reply frame received for 5\nI0404 14:31:53.825089 3028 log.go:172] (0xc000a20630) Data frame received for 3\nI0404 14:31:53.825227 3028 log.go:172] (0xc0008f4000) (3) Data frame handling\nI0404 14:31:53.825252 3028 log.go:172] (0xc0008f4000) (3) Data frame sent\nI0404 14:31:53.825263 3028 log.go:172] (0xc000a20630) Data frame received for 3\nI0404 14:31:53.825273 3028 log.go:172] (0xc0008f4000) (3) Data frame handling\nI0404 14:31:53.829473 3028 log.go:172] (0xc000a20630) Data frame received for 5\nI0404 14:31:53.829507 3028 log.go:172] (0xc00053fcc0) (5) Data frame handling\nI0404 14:31:53.829516 3028 log.go:172] (0xc00053fcc0) (5) Data frame sent\nI0404 14:31:53.829523 3028 log.go:172] (0xc000a20630) Data frame received for 5\nI0404 14:31:53.829528 3028 log.go:172] (0xc00053fcc0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0404 14:31:53.833264 3028 log.go:172] (0xc000a20630) Data frame received for 1\nI0404 14:31:53.833280 3028 log.go:172] (0xc000702c80) (1) Data frame handling\nI0404 14:31:53.833288 3028 log.go:172] (0xc000702c80) (1) Data frame sent\nI0404 14:31:53.833299 3028 log.go:172] (0xc000a20630) (0xc000702c80) Stream removed, broadcasting: 1\nI0404 14:31:53.833444 3028 log.go:172] (0xc000a20630) Go away received\nI0404 14:31:53.833546 3028 log.go:172] (0xc000a20630) (0xc000702c80) Stream removed, broadcasting: 1\nI0404 14:31:53.833564 3028 log.go:172] (0xc000a20630) (0xc0008f4000) Stream removed, broadcasting: 3\nI0404 14:31:53.833572 3028 log.go:172] (0xc000a20630) (0xc00053fcc0) Stream removed, broadcasting: 5\n"
Apr 4 14:31:53.837: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 4 14:31:53.837: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Apr 4 14:31:53.837: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 4 14:32:13.850: INFO: Deleting all statefulset in ns statefulset-1397
Apr 4 14:32:13.854: INFO: Scaling statefulset ss to 0
Apr 4 14:32:13.863: INFO: Waiting for statefulset status.replicas updated to 0
Apr 4 14:32:13.866: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:32:13.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1397" for this suite.
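The suite brings each stateful pod back to Ready by exec'ing the same `mv` into ss-0, ss-1 and ss-2 before scaling to zero, and the command is wrapped in `|| true` so that re-running it (when the file has already been moved) still exits 0. A minimal local sketch of that idempotency, using a throwaway directory instead of an nginx container (paths are illustrative):

```shell
# Mimic the pod's filesystem with a scratch directory (illustrative paths).
mkdir -p /tmp/demo/html
touch /tmp/demo/index.html

# First run: the file exists, so mv succeeds and prints the rename.
mv -v /tmp/demo/index.html /tmp/demo/html/ || true

# Second run: mv fails (source is gone), but '|| true' keeps the overall
# exit code 0, which is what lets the e2e helper treat the command as safe
# to repeat against every pod.
mv -v /tmp/demo/index.html /tmp/demo/html/ || true
echo "exit code: $?"
```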
Apr 4 14:32:19.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:32:20.006: INFO: namespace statefulset-1397 deletion completed in 6.095154893s

• [SLOW TEST:88.351 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:32:20.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 4 14:32:20.090: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c2b5fa7a-5f09-42d7-a6a1-00fd737c616a" in namespace "projected-7615" to be "success or failure"
Apr 4 14:32:20.099: INFO: Pod "downwardapi-volume-c2b5fa7a-5f09-42d7-a6a1-00fd737c616a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.21658ms
Apr 4 14:32:22.103: INFO: Pod "downwardapi-volume-c2b5fa7a-5f09-42d7-a6a1-00fd737c616a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01322396s
Apr 4 14:32:24.107: INFO: Pod "downwardapi-volume-c2b5fa7a-5f09-42d7-a6a1-00fd737c616a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017194192s
STEP: Saw pod success
Apr 4 14:32:24.107: INFO: Pod "downwardapi-volume-c2b5fa7a-5f09-42d7-a6a1-00fd737c616a" satisfied condition "success or failure"
Apr 4 14:32:24.110: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-c2b5fa7a-5f09-42d7-a6a1-00fd737c616a container client-container:
STEP: delete the pod
Apr 4 14:32:24.173: INFO: Waiting for pod downwardapi-volume-c2b5fa7a-5f09-42d7-a6a1-00fd737c616a to disappear
Apr 4 14:32:24.183: INFO: Pod downwardapi-volume-c2b5fa7a-5f09-42d7-a6a1-00fd737c616a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:32:24.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7615" for this suite.
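The pod in this spec reads `limits.memory` through a projected downwardAPI volume; because the container sets no memory limit, the kubelet substitutes the node's allocatable memory. A minimal sketch of such a pod (name and image are illustrative, not the suite's exact manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # deliberately no resources.limits.memory
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```

With no limit set, the mounted file reports the node-level default, which is what the test asserts against.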
Apr 4 14:32:30.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:32:30.272: INFO: namespace projected-7615 deletion completed in 6.084906786s

• [SLOW TEST:10.265 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:32:30.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 4 14:32:30.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-7740'
Apr 4 14:32:30.420: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 4 14:32:30.420: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Apr 4 14:32:32.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-7740'
Apr 4 14:32:32.582: INFO: stderr: ""
Apr 4 14:32:32.582: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:32:32.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7740" for this suite.
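The `--generator=deployment/apps.v1` form captured in the stderr above was deprecated in this release line and removed in later kubectl versions. On current kubectl, the equivalent of what the test runs is `kubectl create deployment` (shown against the test's namespace purely for illustration; adjust for your cluster):

```shell
# 1.15-era form, as run by the suite (deprecated at the time, since removed):
kubectl run e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine \
  --generator=deployment/apps.v1 --namespace=kubectl-7740

# Modern equivalent:
kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7740
```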
Apr 4 14:33:54.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:33:54.720: INFO: namespace kubectl-7740 deletion completed in 1m22.134883239s

• [SLOW TEST:84.448 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:33:54.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 4 14:33:54.780: INFO: Waiting up to 5m0s for pod "pod-8952c5a5-6a37-437b-b5db-88439b2eeae5" in namespace "emptydir-1316" to be "success or failure"
Apr 4 14:33:54.808: INFO: Pod "pod-8952c5a5-6a37-437b-b5db-88439b2eeae5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.358739ms
Apr 4 14:33:56.812: INFO: Pod "pod-8952c5a5-6a37-437b-b5db-88439b2eeae5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032408105s
Apr 4 14:33:58.816: INFO: Pod "pod-8952c5a5-6a37-437b-b5db-88439b2eeae5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036431524s
STEP: Saw pod success
Apr 4 14:33:58.816: INFO: Pod "pod-8952c5a5-6a37-437b-b5db-88439b2eeae5" satisfied condition "success or failure"
Apr 4 14:33:58.819: INFO: Trying to get logs from node iruya-worker2 pod pod-8952c5a5-6a37-437b-b5db-88439b2eeae5 container test-container:
STEP: delete the pod
Apr 4 14:33:58.870: INFO: Waiting for pod pod-8952c5a5-6a37-437b-b5db-88439b2eeae5 to disappear
Apr 4 14:33:58.898: INFO: Pod pod-8952c5a5-6a37-437b-b5db-88439b2eeae5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:33:58.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1316" for this suite.
Apr 4 14:34:04.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:34:04.988: INFO: namespace emptydir-1316 deletion completed in 6.084615645s

• [SLOW TEST:10.267 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:34:04.989: INFO: >>> kubeConfig:
/root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 4 14:34:09.616: INFO: Successfully updated pod "annotationupdate62869a1e-506e-4253-8f16-8f401bc9da08"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:34:11.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6201" for this suite.
Apr 4 14:34:33.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:34:33.806: INFO: namespace projected-6201 deletion completed in 22.106412745s

• [SLOW TEST:28.817 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:34:33.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 4 14:34:33.900: INFO: Waiting up to 5m0s for pod "pod-34704d9e-5819-4892-9e30-fe448ea736f6" in namespace "emptydir-6869" to be "success or failure"
Apr 4 14:34:33.903: INFO: Pod "pod-34704d9e-5819-4892-9e30-fe448ea736f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.798049ms
Apr 4 14:34:35.907: INFO: Pod "pod-34704d9e-5819-4892-9e30-fe448ea736f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00695384s
Apr 4 14:34:37.912: INFO: Pod "pod-34704d9e-5819-4892-9e30-fe448ea736f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011206011s
STEP: Saw pod success
Apr 4 14:34:37.912: INFO: Pod "pod-34704d9e-5819-4892-9e30-fe448ea736f6" satisfied condition "success or failure"
Apr 4 14:34:37.914: INFO: Trying to get logs from node iruya-worker2 pod pod-34704d9e-5819-4892-9e30-fe448ea736f6 container test-container:
STEP: delete the pod
Apr 4 14:34:37.935: INFO: Waiting for pod pod-34704d9e-5819-4892-9e30-fe448ea736f6 to disappear
Apr 4 14:34:37.939: INFO: Pod pod-34704d9e-5819-4892-9e30-fe448ea736f6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:34:37.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6869" for this suite.
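Each of these EmptyDir specs creates a short-lived pod that writes a file into an `emptyDir` volume with a given mode and verifies the mode and contents from inside the container. A sketch of the shape of such a pod (names, image and command are illustrative; the real suite uses its own mounttest image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Write a file, force the mode under test, then print it back.
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                     # "default" medium = node-local disk; medium: Memory would use tmpfs
```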
Apr 4 14:34:43.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:34:44.066: INFO: namespace emptydir-6869 deletion completed in 6.123949919s

• [SLOW TEST:10.259 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:34:44.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 4 14:34:44.127: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4da10cfc-354a-4387-a61d-16b66b429420" in namespace "downward-api-5091" to be "success or failure"
Apr 4 14:34:44.160: INFO: Pod "downwardapi-volume-4da10cfc-354a-4387-a61d-16b66b429420": Phase="Pending", Reason="", readiness=false. Elapsed: 33.215249ms
Apr 4 14:34:46.164: INFO: Pod "downwardapi-volume-4da10cfc-354a-4387-a61d-16b66b429420": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03698148s
Apr 4 14:34:48.168: INFO: Pod "downwardapi-volume-4da10cfc-354a-4387-a61d-16b66b429420": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041095459s
STEP: Saw pod success
Apr 4 14:34:48.168: INFO: Pod "downwardapi-volume-4da10cfc-354a-4387-a61d-16b66b429420" satisfied condition "success or failure"
Apr 4 14:34:48.172: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4da10cfc-354a-4387-a61d-16b66b429420 container client-container:
STEP: delete the pod
Apr 4 14:34:48.193: INFO: Waiting for pod downwardapi-volume-4da10cfc-354a-4387-a61d-16b66b429420 to disappear
Apr 4 14:34:48.197: INFO: Pod downwardapi-volume-4da10cfc-354a-4387-a61d-16b66b429420 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:34:48.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5091" for this suite.
Apr 4 14:34:54.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:34:54.292: INFO: namespace downward-api-5091 deletion completed in 6.092105699s

• [SLOW TEST:10.226 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:34:54.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-decf82cd-4fb0-4493-8ab3-dbfb1347bbd9
Apr 4 14:34:54.413: INFO: Pod name my-hostname-basic-decf82cd-4fb0-4493-8ab3-dbfb1347bbd9: Found 0 pods out of 1
Apr 4 14:34:59.420: INFO: Pod name my-hostname-basic-decf82cd-4fb0-4493-8ab3-dbfb1347bbd9: Found 1 pods out of 1
Apr 4 14:34:59.420: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-decf82cd-4fb0-4493-8ab3-dbfb1347bbd9" are running
Apr 4 14:34:59.423: INFO: Pod "my-hostname-basic-decf82cd-4fb0-4493-8ab3-dbfb1347bbd9-h9z4f" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-04 14:34:54 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-04 14:34:57 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-04 14:34:57 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-04 14:34:54 +0000 UTC Reason: Message:}])
Apr 4 14:34:59.423: INFO: Trying to dial the pod
Apr 4 14:35:04.435: INFO: Controller my-hostname-basic-decf82cd-4fb0-4493-8ab3-dbfb1347bbd9: Got expected result from replica 1 [my-hostname-basic-decf82cd-4fb0-4493-8ab3-dbfb1347bbd9-h9z4f]: "my-hostname-basic-decf82cd-4fb0-4493-8ab3-dbfb1347bbd9-h9z4f", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:35:04.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2389" for this suite.
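The ReplicationController test above creates an RC whose replica serves its own pod name over HTTP, then dials each replica and expects the hostname back. The manifest is roughly shaped like this (name is illustrative; the image and port follow the classic serve_hostname conventions and are stated here as an assumption rather than the suite's exact values):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic            # illustrative name
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: k8s.gcr.io/serve_hostname   # assumption: image that replies with its pod name
        ports:
        - containerPort: 9376
```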
Apr 4 14:35:10.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:35:10.533: INFO: namespace replication-controller-2389 deletion completed in 6.094053348s

• [SLOW TEST:16.241 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 4 14:35:10.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 4 14:35:10.614: INFO: Waiting up to 5m0s for pod "pod-f9480cc9-0f7c-4bca-a517-5fc8a4f636bb" in namespace "emptydir-2078" to be "success or failure"
Apr 4 14:35:10.619: INFO: Pod "pod-f9480cc9-0f7c-4bca-a517-5fc8a4f636bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.572166ms
Apr 4 14:35:12.623: INFO: Pod "pod-f9480cc9-0f7c-4bca-a517-5fc8a4f636bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008493248s
Apr 4 14:35:14.627: INFO: Pod "pod-f9480cc9-0f7c-4bca-a517-5fc8a4f636bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012786178s
STEP: Saw pod success
Apr 4 14:35:14.627: INFO: Pod "pod-f9480cc9-0f7c-4bca-a517-5fc8a4f636bb" satisfied condition "success or failure"
Apr 4 14:35:14.631: INFO: Trying to get logs from node iruya-worker pod pod-f9480cc9-0f7c-4bca-a517-5fc8a4f636bb container test-container:
STEP: delete the pod
Apr 4 14:35:14.648: INFO: Waiting for pod pod-f9480cc9-0f7c-4bca-a517-5fc8a4f636bb to disappear
Apr 4 14:35:14.653: INFO: Pod pod-f9480cc9-0f7c-4bca-a517-5fc8a4f636bb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 4 14:35:14.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2078" for this suite.
Apr 4 14:35:20.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 4 14:35:20.779: INFO: namespace emptydir-2078 deletion completed in 6.123435754s

• [SLOW TEST:10.246 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Apr 4 14:35:20.780: INFO: Running AfterSuite actions on all nodes
Apr 4 14:35:20.780: INFO: Running AfterSuite actions on node 1
Apr 4 14:35:20.780: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 5978.933 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS
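For reference, a run like the one logged here (conformance-tagged specs only, randomized order, single Ginkgo node) is typically launched with the `e2e.test` binary built from the same source tree. The flags below reflect the common 1.15-era invocation and are an assumption; exact flags may differ for your build:

```shell
# Run only the [Conformance]-tagged specs against an existing cluster;
# --provider=skeleton disables cloud-provider-specific behavior.
export KUBECONFIG=/root/.kube/config
./e2e.test --ginkgo.focus='\[Conformance\]' --provider=skeleton
```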