I0407 12:55:38.885855 6 e2e.go:243] Starting e2e run "32c6632e-30c8-403d-ba4a-6086075e4cf4" on Ginkgo node 1 Running Suite: Kubernetes e2e suite =================================== Random Seed: 1586264138 - Will randomize all specs Will run 215 of 4412 specs Apr 7 12:55:39.070: INFO: >>> kubeConfig: /root/.kube/config Apr 7 12:55:39.073: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Apr 7 12:55:39.098: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Apr 7 12:55:39.124: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Apr 7 12:55:39.124: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 7 12:55:39.124: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Apr 7 12:55:39.131: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Apr 7 12:55:39.131: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Apr 7 12:55:39.131: INFO: e2e test version: v1.15.11 Apr 7 12:55:39.132: INFO: kube-apiserver version: v1.15.7 SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 12:55:39.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred Apr 7 12:55:39.210: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
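The SchedulerPredicates case that follows applies a random `kubernetes.io/e2e-*` label to a node and relaunches a pod whose `nodeSelector` must match it. A minimal sketch of such a pod manifest, with a hypothetical label key/value standing in for the randomly generated one:

```python
# Sketch of the kind of pod the NodeSelector conformance test relaunches
# after labeling a node. The label key/value below are hypothetical
# stand-ins for the random kubernetes.io/e2e-* label the test generates.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "with-labels"},
    "spec": {
        "containers": [
            {"name": "with-labels", "image": "k8s.gcr.io/pause:3.1"}
        ],
        # The predicate under test: the pod may only schedule onto a
        # node carrying this exact label.
        "nodeSelector": {"kubernetes.io/e2e-example": "42"},
    },
}
assert pod["spec"]["nodeSelector"] == {"kubernetes.io/e2e-example": "42"}
```

The test passes when the scheduler places the relaunched pod on the labeled node, then removes the label and verifies it is gone.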
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Apr 7 12:55:39.211: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 7 12:55:39.218: INFO: Waiting for terminating namespaces to be deleted... Apr 7 12:55:39.220: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Apr 7 12:55:39.226: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 7 12:55:39.226: INFO: Container kube-proxy ready: true, restart count 0 Apr 7 12:55:39.226: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 7 12:55:39.226: INFO: Container kindnet-cni ready: true, restart count 0 Apr 7 12:55:39.226: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Apr 7 12:55:39.232: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Apr 7 12:55:39.232: INFO: Container kube-proxy ready: true, restart count 0 Apr 7 12:55:39.232: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Apr 7 12:55:39.232: INFO: Container kindnet-cni ready: true, restart count 0 Apr 7 12:55:39.232: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Apr 7 12:55:39.232: INFO: Container coredns ready: true, restart count 0 Apr 7 12:55:39.232: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Apr 7 12:55:39.232: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-4efd15d9-fa1f-4dbc-a76a-1fea54c3e1e0 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-4efd15d9-fa1f-4dbc-a76a-1fea54c3e1e0 off the node iruya-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-4efd15d9-fa1f-4dbc-a76a-1fea54c3e1e0 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 12:55:47.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4909" for this suite. Apr 7 12:56:05.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 12:56:05.485: INFO: namespace sched-pred-4909 deletion completed in 18.090136751s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:26.354 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 12:56:05.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 7 12:56:05.571: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9af7d7c1-9498-4115-8b71-65f27b1bd4cf" in namespace "downward-api-493" to be "success or failure" Apr 7 12:56:05.590: INFO: Pod "downwardapi-volume-9af7d7c1-9498-4115-8b71-65f27b1bd4cf": Phase="Pending", Reason="", readiness=false. Elapsed: 19.537248ms Apr 7 12:56:07.594: INFO: Pod "downwardapi-volume-9af7d7c1-9498-4115-8b71-65f27b1bd4cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023597293s Apr 7 12:56:09.599: INFO: Pod "downwardapi-volume-9af7d7c1-9498-4115-8b71-65f27b1bd4cf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02820062s STEP: Saw pod success Apr 7 12:56:09.599: INFO: Pod "downwardapi-volume-9af7d7c1-9498-4115-8b71-65f27b1bd4cf" satisfied condition "success or failure" Apr 7 12:56:09.602: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-9af7d7c1-9498-4115-8b71-65f27b1bd4cf container client-container: STEP: delete the pod Apr 7 12:56:09.627: INFO: Waiting for pod downwardapi-volume-9af7d7c1-9498-4115-8b71-65f27b1bd4cf to disappear Apr 7 12:56:09.637: INFO: Pod downwardapi-volume-9af7d7c1-9498-4115-8b71-65f27b1bd4cf no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 12:56:09.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-493" for this suite. Apr 7 12:56:15.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 12:56:15.745: INFO: namespace downward-api-493 deletion completed in 6.104378888s • [SLOW TEST:10.259 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 12:56:15.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 7 12:56:23.883: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 7 12:56:23.933: INFO: Pod pod-with-prestop-http-hook still exists Apr 7 12:56:25.933: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 7 12:56:25.937: INFO: Pod pod-with-prestop-http-hook still exists Apr 7 12:56:27.933: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 7 12:56:27.937: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 12:56:27.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1847" for this suite. 
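The preStop case above creates a pod whose container carries an HTTP `preStop` lifecycle hook, deletes the pod, and then checks that the handler pod received the hook request. A minimal sketch of the hooked container, assuming a hypothetical handler address and path (the real test targets the handler pod it created earlier):

```python
# Sketch of a pod with a preStop HTTP lifecycle hook, as exercised above.
# The handler host, port, and path are hypothetical placeholders.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-prestop-http-hook"},
    "spec": {
        "containers": [{
            "name": "pod-with-prestop-http-hook",
            "image": "k8s.gcr.io/pause:3.1",
            "lifecycle": {
                # The kubelet issues this GET before terminating the
                # container; the test then inspects the handler's state
                # to confirm the request arrived.
                "preStop": {
                    "httpGet": {
                        "path": "/echo?msg=prestop",
                        "port": 8080,
                        "host": "10.244.1.70",  # hypothetical handler IP
                    }
                },
            },
        }],
    },
}
```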
Apr 7 12:56:49.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 12:56:50.075: INFO: namespace container-lifecycle-hook-1847 deletion completed in 22.127384725s • [SLOW TEST:34.330 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 12:56:50.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-8b461032-a97d-46ee-a137-d05a9feed9f0 STEP: Creating a pod to test consume configMaps Apr 7 12:56:50.143: INFO: Waiting up to 5m0s for pod "pod-configmaps-e5125e39-2d4e-46fe-86e8-92cadb97b18d" in namespace "configmap-4787" to be "success or failure" Apr 7 12:56:50.179: INFO: Pod "pod-configmaps-e5125e39-2d4e-46fe-86e8-92cadb97b18d": Phase="Pending", Reason="", 
readiness=false. Elapsed: 35.380883ms Apr 7 12:56:52.183: INFO: Pod "pod-configmaps-e5125e39-2d4e-46fe-86e8-92cadb97b18d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039583842s Apr 7 12:56:54.187: INFO: Pod "pod-configmaps-e5125e39-2d4e-46fe-86e8-92cadb97b18d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044031653s STEP: Saw pod success Apr 7 12:56:54.187: INFO: Pod "pod-configmaps-e5125e39-2d4e-46fe-86e8-92cadb97b18d" satisfied condition "success or failure" Apr 7 12:56:54.190: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-e5125e39-2d4e-46fe-86e8-92cadb97b18d container configmap-volume-test: STEP: delete the pod Apr 7 12:56:54.215: INFO: Waiting for pod pod-configmaps-e5125e39-2d4e-46fe-86e8-92cadb97b18d to disappear Apr 7 12:56:54.219: INFO: Pod pod-configmaps-e5125e39-2d4e-46fe-86e8-92cadb97b18d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 12:56:54.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4787" for this suite. 
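The ConfigMap case above ("mappings and Item mode set") mounts a ConfigMap volume where an individual item is remapped to a path and given a per-item file mode. A minimal sketch of that volume definition, with illustrative key/path names:

```python
# Sketch of a ConfigMap volume with an item mapping and per-item mode,
# matching the "mappings and Item mode set" case above. The ConfigMap
# name, key, and path are illustrative.
pod_volume = {
    "name": "configmap-volume",
    "configMap": {
        "name": "configmap-test-volume-map",
        "items": [{
            "key": "data-2",           # key inside the ConfigMap
            "path": "path/to/data-2",  # file path within the mount
            "mode": 0o400,             # per-item file mode (under test)
        }],
    },
}
assert oct(pod_volume["configMap"]["items"][0]["mode"]) == "0o400"
```

The pod's test container then reads the mounted file and verifies both its content and its mode bits.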
Apr 7 12:57:00.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 12:57:00.314: INFO: namespace configmap-4787 deletion completed in 6.092207859s • [SLOW TEST:10.239 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 12:57:00.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-dd426dab-22b4-4c4e-a0c9-08d248e88279 in namespace container-probe-3229 Apr 7 12:57:04.386: INFO: Started pod liveness-dd426dab-22b4-4c4e-a0c9-08d248e88279 in namespace container-probe-3229 STEP: checking the pod's current state and verifying that restartCount is present Apr 7 12:57:04.388: INFO: Initial restart count of pod liveness-dd426dab-22b4-4c4e-a0c9-08d248e88279 is 0 Apr 7 12:57:18.421: 
INFO: Restart count of pod container-probe-3229/liveness-dd426dab-22b4-4c4e-a0c9-08d248e88279 is now 1 (14.032763717s elapsed) Apr 7 12:57:38.467: INFO: Restart count of pod container-probe-3229/liveness-dd426dab-22b4-4c4e-a0c9-08d248e88279 is now 2 (34.078073223s elapsed) Apr 7 12:57:58.509: INFO: Restart count of pod container-probe-3229/liveness-dd426dab-22b4-4c4e-a0c9-08d248e88279 is now 3 (54.120246627s elapsed) Apr 7 12:58:18.551: INFO: Restart count of pod container-probe-3229/liveness-dd426dab-22b4-4c4e-a0c9-08d248e88279 is now 4 (1m14.16290018s elapsed) Apr 7 12:59:22.715: INFO: Restart count of pod container-probe-3229/liveness-dd426dab-22b4-4c4e-a0c9-08d248e88279 is now 5 (2m18.326752507s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 12:59:22.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3229" for this suite. 
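The restart counts logged above (1 through 5, with kubelet backoff stretching the intervals) come from a liveness probe that keeps failing. A sketch of a container of that shape, assuming an illustrative image and failure mechanism rather than the test's exact probe:

```python
# Sketch of a container whose liveness probe starts failing, producing
# the monotonically increasing restart count seen above. The image and
# command are illustrative, not the test's exact configuration.
container = {
    "name": "liveness",
    "image": "busybox",
    # Create the probed file, then remove it so later probes fail:
    "args": ["/bin/sh", "-c",
             "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"],
    "livenessProbe": {
        "exec": {"command": ["cat", "/tmp/health"]},
        "initialDelaySeconds": 5,
        "failureThreshold": 1,
    },
}
```

Each probe failure past the threshold triggers a container restart, and the kubelet's exponential backoff explains the growing gap between restarts 4 and 5 in the log.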
Apr 7 12:59:28.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 12:59:28.888: INFO: namespace container-probe-3229 deletion completed in 6.120134253s • [SLOW TEST:148.574 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 12:59:28.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Apr 7 12:59:28.971: INFO: Waiting up to 5m0s for pod "var-expansion-e6609a49-0a1a-4cd3-bc5d-63b6f16968cc" in namespace "var-expansion-979" to be "success or failure" Apr 7 12:59:28.973: INFO: Pod "var-expansion-e6609a49-0a1a-4cd3-bc5d-63b6f16968cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106035ms Apr 7 12:59:30.978: INFO: Pod "var-expansion-e6609a49-0a1a-4cd3-bc5d-63b6f16968cc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006608859s Apr 7 12:59:32.982: INFO: Pod "var-expansion-e6609a49-0a1a-4cd3-bc5d-63b6f16968cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011092774s STEP: Saw pod success Apr 7 12:59:32.982: INFO: Pod "var-expansion-e6609a49-0a1a-4cd3-bc5d-63b6f16968cc" satisfied condition "success or failure" Apr 7 12:59:32.986: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-e6609a49-0a1a-4cd3-bc5d-63b6f16968cc container dapi-container: STEP: delete the pod Apr 7 12:59:33.005: INFO: Waiting for pod var-expansion-e6609a49-0a1a-4cd3-bc5d-63b6f16968cc to disappear Apr 7 12:59:33.019: INFO: Pod var-expansion-e6609a49-0a1a-4cd3-bc5d-63b6f16968cc no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 12:59:33.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-979" for this suite. Apr 7 12:59:39.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 12:59:39.114: INFO: namespace var-expansion-979 deletion completed in 6.091489709s • [SLOW TEST:10.225 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 12:59:39.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 7 12:59:47.229: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 7 12:59:47.234: INFO: Pod pod-with-poststart-exec-hook still exists Apr 7 12:59:49.235: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 7 12:59:49.239: INFO: Pod pod-with-poststart-exec-hook still exists Apr 7 12:59:51.235: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 7 12:59:51.239: INFO: Pod pod-with-poststart-exec-hook still exists Apr 7 12:59:53.235: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 7 12:59:53.239: INFO: Pod pod-with-poststart-exec-hook still exists Apr 7 12:59:55.235: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 7 12:59:55.239: INFO: Pod pod-with-poststart-exec-hook still exists Apr 7 12:59:57.235: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 7 12:59:57.238: INFO: Pod pod-with-poststart-exec-hook still exists Apr 7 12:59:59.235: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 7 12:59:59.239: INFO: Pod pod-with-poststart-exec-hook still exists Apr 7 13:00:01.235: INFO: Waiting for pod 
pod-with-poststart-exec-hook to disappear Apr 7 13:00:01.238: INFO: Pod pod-with-poststart-exec-hook still exists Apr 7 13:00:03.235: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 7 13:00:03.238: INFO: Pod pod-with-poststart-exec-hook still exists Apr 7 13:00:05.235: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 7 13:00:05.239: INFO: Pod pod-with-poststart-exec-hook still exists Apr 7 13:00:07.235: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 7 13:00:07.239: INFO: Pod pod-with-poststart-exec-hook still exists Apr 7 13:00:09.235: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 7 13:00:09.238: INFO: Pod pod-with-poststart-exec-hook still exists Apr 7 13:00:11.235: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 7 13:00:11.239: INFO: Pod pod-with-poststart-exec-hook still exists Apr 7 13:00:13.235: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 7 13:00:13.239: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:00:13.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6541" for this suite. 
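The postStart case above attaches an exec lifecycle hook that runs inside the container right after it starts; the container is not considered running until the handler completes. A sketch with an illustrative hook command (the real test's hook contacts the handler pod it created earlier):

```python
# Sketch of a postStart exec lifecycle hook like the one verified above.
# The command is illustrative; the real test's hook notifies a separate
# handler pod.
container = {
    "name": "pod-with-poststart-exec-hook",
    "image": "busybox",
    "lifecycle": {
        "postStart": {
            # Runs in the container immediately after it starts; the
            # kubelet holds the container out of Running state until
            # this handler returns.
            "exec": {"command": ["sh", "-c", "echo poststart > /tmp/hook"]},
        },
    },
}
```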
Apr 7 13:00:35.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:00:35.341: INFO: namespace container-lifecycle-hook-6541 deletion completed in 22.097866735s • [SLOW TEST:56.226 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:00:35.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-2f0cb64a-e0f8-40ec-a1e4-b374895e6c7b in namespace container-probe-9338 Apr 7 13:00:39.418: INFO: Started pod busybox-2f0cb64a-e0f8-40ec-a1e4-b374895e6c7b in namespace container-probe-9338 STEP: checking the pod's current state and verifying that 
restartCount is present Apr 7 13:00:39.421: INFO: Initial restart count of pod busybox-2f0cb64a-e0f8-40ec-a1e4-b374895e6c7b is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:04:39.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9338" for this suite. Apr 7 13:04:46.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:04:46.092: INFO: namespace container-probe-9338 deletion completed in 6.120776006s • [SLOW TEST:250.751 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:04:46.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 7 13:04:50.185: INFO: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-e47b8f35-9d9c-4077-8c30-60a0236e9b50,GenerateName:,Namespace:events-8632,SelfLink:/api/v1/namespaces/events-8632/pods/send-events-e47b8f35-9d9c-4077-8c30-60a0236e9b50,UID:1ef8512a-174f-44d6-8f82-ecf027664308,ResourceVersion:4120423,Generation:0,CreationTimestamp:2020-04-07 13:04:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 165368525,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8p4lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8p4lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-8p4lm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021de1c0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0021de1e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 13:04:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 13:04:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 13:04:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 13:04:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.73,StartTime:2020-04-07 13:04:46 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-04-07 13:04:48 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://ea156ed77c4e82d33b0361860bb910a4043222153eb94ad22786c6cda41b961d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Apr 7 13:04:52.191: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 7 13:04:54.196: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:04:54.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8632" for this suite. 
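The event checks above ("Saw scheduler event" / "Saw kubelet event") query the Events API for entries about the pod from each component. A sketch of the field selectors such a query would use, with values taken from the pod dump above; the exact selector fields are an assumption about the test's implementation:

```python
# Sketch of the field selectors the events check above would use to find
# a scheduler event and a kubelet event for the pod. The selector field
# names are an assumption, not confirmed by the log.
scheduler_selector = {
    "involvedObject.kind": "Pod",
    "involvedObject.name": "send-events-e47b8f35-9d9c-4077-8c30-60a0236e9b50",
    "involvedObject.namespace": "events-8632",
    "source": "default-scheduler",   # event emitted by the scheduler
}
# Same object, but emitted by the kubelet:
kubelet_selector = dict(scheduler_selector, source="kubelet")
```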
Apr 7 13:05:32.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:05:32.347: INFO: namespace events-8632 deletion completed in 38.135182022s
• [SLOW TEST:46.255 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:05:32.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4893
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 7 13:05:32.410: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 7 13:05:58.584: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.244 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4893 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 7 13:05:58.584: INFO: >>> kubeConfig: /root/.kube/config
I0407 13:05:58.618945       6 log.go:172] (0xc0006d96b0) (0xc00177bcc0) Create stream
I0407 13:05:58.618982       6 log.go:172] (0xc0006d96b0) (0xc00177bcc0) Stream added, broadcasting: 1
I0407 13:05:58.621666       6 log.go:172] (0xc0006d96b0) Reply frame received for 1
I0407 13:05:58.621719       6 log.go:172] (0xc0006d96b0) (0xc001944aa0) Create stream
I0407 13:05:58.621738       6 log.go:172] (0xc0006d96b0) (0xc001944aa0) Stream added, broadcasting: 3
I0407 13:05:58.622838       6 log.go:172] (0xc0006d96b0) Reply frame received for 3
I0407 13:05:58.622878       6 log.go:172] (0xc0006d96b0) (0xc001816d20) Create stream
I0407 13:05:58.622890       6 log.go:172] (0xc0006d96b0) (0xc001816d20) Stream added, broadcasting: 5
I0407 13:05:58.623811       6 log.go:172] (0xc0006d96b0) Reply frame received for 5
I0407 13:05:59.689860       6 log.go:172] (0xc0006d96b0) Data frame received for 5
I0407 13:05:59.689896       6 log.go:172] (0xc001816d20) (5) Data frame handling
I0407 13:05:59.689931       6 log.go:172] (0xc0006d96b0) Data frame received for 3
I0407 13:05:59.689946       6 log.go:172] (0xc001944aa0) (3) Data frame handling
I0407 13:05:59.689963       6 log.go:172] (0xc001944aa0) (3) Data frame sent
I0407 13:05:59.689977       6 log.go:172] (0xc0006d96b0) Data frame received for 3
I0407 13:05:59.689989       6 log.go:172] (0xc001944aa0) (3) Data frame handling
I0407 13:05:59.692367       6 log.go:172] (0xc0006d96b0) Data frame received for 1
I0407 13:05:59.692414       6 log.go:172] (0xc00177bcc0) (1) Data frame handling
I0407 13:05:59.692434       6 log.go:172] (0xc00177bcc0) (1) Data frame sent
I0407 13:05:59.692459       6 log.go:172] (0xc0006d96b0) (0xc00177bcc0) Stream removed, broadcasting: 1
I0407 13:05:59.692491       6 log.go:172] (0xc0006d96b0) Go away received
I0407 13:05:59.693289       6 log.go:172] (0xc0006d96b0) (0xc00177bcc0) Stream removed, broadcasting: 1
I0407 13:05:59.693318       6 log.go:172] (0xc0006d96b0) (0xc001944aa0) Stream removed, broadcasting: 3
I0407 13:05:59.693338       6 log.go:172] (0xc0006d96b0) (0xc001816d20) Stream removed, broadcasting: 5
Apr 7 13:05:59.693: INFO: Found all expected endpoints: [netserver-0]
Apr 7 13:05:59.697: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.74 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4893 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 7 13:05:59.697: INFO: >>> kubeConfig: /root/.kube/config
I0407 13:05:59.736120       6 log.go:172] (0xc000d74fd0) (0xc001945040) Create stream
I0407 13:05:59.736155       6 log.go:172] (0xc000d74fd0) (0xc001945040) Stream added, broadcasting: 1
I0407 13:05:59.738892       6 log.go:172] (0xc000d74fd0) Reply frame received for 1
I0407 13:05:59.738941       6 log.go:172] (0xc000d74fd0) (0xc00177bea0) Create stream
I0407 13:05:59.738963       6 log.go:172] (0xc000d74fd0) (0xc00177bea0) Stream added, broadcasting: 3
I0407 13:05:59.740016       6 log.go:172] (0xc000d74fd0) Reply frame received for 3
I0407 13:05:59.740048       6 log.go:172] (0xc000d74fd0) (0xc000911180) Create stream
I0407 13:05:59.740061       6 log.go:172] (0xc000d74fd0) (0xc000911180) Stream added, broadcasting: 5
I0407 13:05:59.740959       6 log.go:172] (0xc000d74fd0) Reply frame received for 5
I0407 13:06:00.817793       6 log.go:172] (0xc000d74fd0) Data frame received for 3
I0407 13:06:00.817839       6 log.go:172] (0xc00177bea0) (3) Data frame handling
I0407 13:06:00.817871       6 log.go:172] (0xc00177bea0) (3) Data frame sent
I0407 13:06:00.818119       6 log.go:172] (0xc000d74fd0) Data frame received for 5
I0407 13:06:00.818166       6 log.go:172] (0xc000911180) (5) Data frame handling
I0407 13:06:00.818662       6 log.go:172] (0xc000d74fd0) Data frame received for 3
I0407 13:06:00.818680       6 log.go:172] (0xc00177bea0) (3) Data frame handling
I0407 13:06:00.820329       6 log.go:172] (0xc000d74fd0) Data frame received for 1
I0407 13:06:00.820355       6 log.go:172] (0xc001945040) (1) Data frame handling
I0407 13:06:00.820375       6 log.go:172] (0xc001945040) (1) Data frame sent
I0407 13:06:00.820391       6 log.go:172] (0xc000d74fd0) (0xc001945040) Stream removed, broadcasting: 1
I0407 13:06:00.820459       6 log.go:172] (0xc000d74fd0) (0xc001945040) Stream removed, broadcasting: 1
I0407 13:06:00.820469       6 log.go:172] (0xc000d74fd0) (0xc00177bea0) Stream removed, broadcasting: 3
I0407 13:06:00.820599       6 log.go:172] (0xc000d74fd0) (0xc000911180) Stream removed, broadcasting: 5
I0407 13:06:00.820719       6 log.go:172] (0xc000d74fd0) Go away received
Apr 7 13:06:00.820: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:06:00.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4893" for this suite.
Apr 7 13:06:22.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:06:22.924: INFO: namespace pod-network-test-4893 deletion completed in 22.100123445s
• [SLOW TEST:50.577 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:06:22.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-71fe3f69-7763-4df0-9f87-fa23ced5002f
STEP: Creating a pod to test consume configMaps
Apr 7 13:06:23.022: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1da54541-6cf2-4ccd-80d1-3d228aaff125" in namespace "projected-81" to be "success or failure"
Apr 7 13:06:23.043: INFO: Pod "pod-projected-configmaps-1da54541-6cf2-4ccd-80d1-3d228aaff125": Phase="Pending", Reason="", readiness=false. Elapsed: 20.756958ms
Apr 7 13:06:25.047: INFO: Pod "pod-projected-configmaps-1da54541-6cf2-4ccd-80d1-3d228aaff125": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024483321s
Apr 7 13:06:27.050: INFO: Pod "pod-projected-configmaps-1da54541-6cf2-4ccd-80d1-3d228aaff125": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02773638s
STEP: Saw pod success
Apr 7 13:06:27.050: INFO: Pod "pod-projected-configmaps-1da54541-6cf2-4ccd-80d1-3d228aaff125" satisfied condition "success or failure"
Apr 7 13:06:27.052: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-1da54541-6cf2-4ccd-80d1-3d228aaff125 container projected-configmap-volume-test:
STEP: delete the pod
Apr 7 13:06:27.070: INFO: Waiting for pod pod-projected-configmaps-1da54541-6cf2-4ccd-80d1-3d228aaff125 to disappear
Apr 7 13:06:27.074: INFO: Pod pod-projected-configmaps-1da54541-6cf2-4ccd-80d1-3d228aaff125 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:06:27.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-81" for this suite.
Apr 7 13:06:33.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:06:33.176: INFO: namespace projected-81 deletion completed in 6.09857934s
• [SLOW TEST:10.252 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:06:33.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:06:37.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9834" for this suite.
Apr 7 13:07:15.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:07:15.364: INFO: namespace kubelet-test-9834 deletion completed in 38.092334342s
• [SLOW TEST:42.187 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:07:15.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 7 13:07:15.424: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b10b38eb-b40d-49b7-993b-4740af72480b" in namespace "projected-9623" to be "success or failure"
Apr 7 13:07:15.444: INFO: Pod "downwardapi-volume-b10b38eb-b40d-49b7-993b-4740af72480b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.817035ms
Apr 7 13:07:17.458: INFO: Pod "downwardapi-volume-b10b38eb-b40d-49b7-993b-4740af72480b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033654443s
Apr 7 13:07:19.462: INFO: Pod "downwardapi-volume-b10b38eb-b40d-49b7-993b-4740af72480b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038284071s
STEP: Saw pod success
Apr 7 13:07:19.462: INFO: Pod "downwardapi-volume-b10b38eb-b40d-49b7-993b-4740af72480b" satisfied condition "success or failure"
Apr 7 13:07:19.465: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-b10b38eb-b40d-49b7-993b-4740af72480b container client-container:
STEP: delete the pod
Apr 7 13:07:19.492: INFO: Waiting for pod downwardapi-volume-b10b38eb-b40d-49b7-993b-4740af72480b to disappear
Apr 7 13:07:19.500: INFO: Pod downwardapi-volume-b10b38eb-b40d-49b7-993b-4740af72480b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:07:19.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9623" for this suite.
Apr 7 13:07:25.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:07:25.623: INFO: namespace projected-9623 deletion completed in 6.118528052s
• [SLOW TEST:10.259 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:07:25.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-9f328d05-5997-4865-aeba-4a198d4664ef
STEP: Creating a pod to test consume secrets
Apr 7 13:07:25.708: INFO: Waiting up to 5m0s for pod "pod-secrets-39228dc8-8a2c-4ff7-877e-c5e4cda8f24e" in namespace "secrets-5792" to be "success or failure"
Apr 7 13:07:25.710: INFO: Pod "pod-secrets-39228dc8-8a2c-4ff7-877e-c5e4cda8f24e": Phase="Pending", Reason="", readiness=false. Elapsed: 1.898857ms
Apr 7 13:07:27.714: INFO: Pod "pod-secrets-39228dc8-8a2c-4ff7-877e-c5e4cda8f24e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00620965s
Apr 7 13:07:29.718: INFO: Pod "pod-secrets-39228dc8-8a2c-4ff7-877e-c5e4cda8f24e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010235458s
STEP: Saw pod success
Apr 7 13:07:29.718: INFO: Pod "pod-secrets-39228dc8-8a2c-4ff7-877e-c5e4cda8f24e" satisfied condition "success or failure"
Apr 7 13:07:29.720: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-39228dc8-8a2c-4ff7-877e-c5e4cda8f24e container secret-volume-test:
STEP: delete the pod
Apr 7 13:07:29.753: INFO: Waiting for pod pod-secrets-39228dc8-8a2c-4ff7-877e-c5e4cda8f24e to disappear
Apr 7 13:07:29.769: INFO: Pod pod-secrets-39228dc8-8a2c-4ff7-877e-c5e4cda8f24e no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:07:29.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5792" for this suite.
Apr 7 13:07:35.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:07:35.864: INFO: namespace secrets-5792 deletion completed in 6.091866683s
• [SLOW TEST:10.241 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:07:35.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 7 13:07:35.917: INFO: Waiting up to 5m0s for pod "downward-api-e94c4e01-89ce-4d5e-a3d3-a74e9c012d0b" in namespace "downward-api-6223" to be "success or failure"
Apr 7 13:07:35.920: INFO: Pod "downward-api-e94c4e01-89ce-4d5e-a3d3-a74e9c012d0b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.744212ms
Apr 7 13:07:37.924: INFO: Pod "downward-api-e94c4e01-89ce-4d5e-a3d3-a74e9c012d0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007694936s
Apr 7 13:07:39.928: INFO: Pod "downward-api-e94c4e01-89ce-4d5e-a3d3-a74e9c012d0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011678558s
STEP: Saw pod success
Apr 7 13:07:39.928: INFO: Pod "downward-api-e94c4e01-89ce-4d5e-a3d3-a74e9c012d0b" satisfied condition "success or failure"
Apr 7 13:07:39.931: INFO: Trying to get logs from node iruya-worker2 pod downward-api-e94c4e01-89ce-4d5e-a3d3-a74e9c012d0b container dapi-container:
STEP: delete the pod
Apr 7 13:07:39.958: INFO: Waiting for pod downward-api-e94c4e01-89ce-4d5e-a3d3-a74e9c012d0b to disappear
Apr 7 13:07:39.978: INFO: Pod downward-api-e94c4e01-89ce-4d5e-a3d3-a74e9c012d0b no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:07:39.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6223" for this suite.
Apr 7 13:07:45.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:07:46.070: INFO: namespace downward-api-6223 deletion completed in 6.087439945s
• [SLOW TEST:10.206 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:07:46.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 7 13:07:46.126: INFO: Waiting up to 5m0s for pod "pod-9b10180b-7fde-4566-8f1f-c2c430c4e36c" in namespace "emptydir-6854" to be "success or failure"
Apr 7 13:07:46.136: INFO: Pod "pod-9b10180b-7fde-4566-8f1f-c2c430c4e36c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.879392ms
Apr 7 13:07:48.140: INFO: Pod "pod-9b10180b-7fde-4566-8f1f-c2c430c4e36c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014127466s
Apr 7 13:07:50.145: INFO: Pod "pod-9b10180b-7fde-4566-8f1f-c2c430c4e36c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018872536s
STEP: Saw pod success
Apr 7 13:07:50.145: INFO: Pod "pod-9b10180b-7fde-4566-8f1f-c2c430c4e36c" satisfied condition "success or failure"
Apr 7 13:07:50.148: INFO: Trying to get logs from node iruya-worker2 pod pod-9b10180b-7fde-4566-8f1f-c2c430c4e36c container test-container:
STEP: delete the pod
Apr 7 13:07:50.168: INFO: Waiting for pod pod-9b10180b-7fde-4566-8f1f-c2c430c4e36c to disappear
Apr 7 13:07:50.178: INFO: Pod pod-9b10180b-7fde-4566-8f1f-c2c430c4e36c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:07:50.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6854" for this suite.
Apr 7 13:07:56.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:07:56.300: INFO: namespace emptydir-6854 deletion completed in 6.119774762s
• [SLOW TEST:10.230 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:07:56.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-8f577785-7ac6-4660-a5f4-dcd296968f06
STEP: Creating secret with name secret-projected-all-test-volume-747fd9f2-3634-4091-8ae6-955fdf528526
STEP: Creating a pod to test Check all projections for projected volume plugin
Apr 7 13:07:56.383: INFO: Waiting up to 5m0s for pod "projected-volume-24798795-4a83-43d2-925d-3a1f4b1c13a8" in namespace "projected-8111" to be "success or failure"
Apr 7 13:07:56.394: INFO: Pod "projected-volume-24798795-4a83-43d2-925d-3a1f4b1c13a8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.028305ms
Apr 7 13:07:58.399: INFO: Pod "projected-volume-24798795-4a83-43d2-925d-3a1f4b1c13a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015373014s
Apr 7 13:08:00.402: INFO: Pod "projected-volume-24798795-4a83-43d2-925d-3a1f4b1c13a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019188484s
STEP: Saw pod success
Apr 7 13:08:00.403: INFO: Pod "projected-volume-24798795-4a83-43d2-925d-3a1f4b1c13a8" satisfied condition "success or failure"
Apr 7 13:08:00.406: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-24798795-4a83-43d2-925d-3a1f4b1c13a8 container projected-all-volume-test:
STEP: delete the pod
Apr 7 13:08:00.424: INFO: Waiting for pod projected-volume-24798795-4a83-43d2-925d-3a1f4b1c13a8 to disappear
Apr 7 13:08:00.429: INFO: Pod projected-volume-24798795-4a83-43d2-925d-3a1f4b1c13a8 no longer exists
[AfterEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:08:00.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8111" for this suite.
Apr 7 13:08:06.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:08:06.553: INFO: namespace projected-8111 deletion completed in 6.101163782s
• [SLOW TEST:10.252 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:08:06.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Apr 7 13:08:06.627: INFO: Waiting up to 5m0s for pod "var-expansion-2224cb7b-3ab0-444b-a75a-0b4af482d4b1" in namespace "var-expansion-5313" to be "success or failure"
Apr 7 13:08:06.633: INFO: Pod "var-expansion-2224cb7b-3ab0-444b-a75a-0b4af482d4b1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.857836ms
Apr 7 13:08:08.650: INFO: Pod "var-expansion-2224cb7b-3ab0-444b-a75a-0b4af482d4b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022576756s
Apr 7 13:08:10.654: INFO: Pod "var-expansion-2224cb7b-3ab0-444b-a75a-0b4af482d4b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026717017s
STEP: Saw pod success
Apr 7 13:08:10.654: INFO: Pod "var-expansion-2224cb7b-3ab0-444b-a75a-0b4af482d4b1" satisfied condition "success or failure"
Apr 7 13:08:10.657: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-2224cb7b-3ab0-444b-a75a-0b4af482d4b1 container dapi-container:
STEP: delete the pod
Apr 7 13:08:10.693: INFO: Waiting for pod var-expansion-2224cb7b-3ab0-444b-a75a-0b4af482d4b1 to disappear
Apr 7 13:08:10.699: INFO: Pod var-expansion-2224cb7b-3ab0-444b-a75a-0b4af482d4b1 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:08:10.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5313" for this suite.
Apr 7 13:08:16.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:08:16.815: INFO: namespace var-expansion-5313 deletion completed in 6.112860341s
• [SLOW TEST:10.261 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:08:16.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Apr 7 13:08:16.873: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3541,SelfLink:/api/v1/namespaces/watch-3541/configmaps/e2e-watch-test-configmap-a,UID:b7df872c-ef10-4fa4-93d7-2093aff77060,ResourceVersion:4121102,Generation:0,CreationTimestamp:2020-04-07 13:08:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 7 13:08:16.873: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3541,SelfLink:/api/v1/namespaces/watch-3541/configmaps/e2e-watch-test-configmap-a,UID:b7df872c-ef10-4fa4-93d7-2093aff77060,ResourceVersion:4121102,Generation:0,CreationTimestamp:2020-04-07 13:08:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Apr 7 13:08:26.882: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3541,SelfLink:/api/v1/namespaces/watch-3541/configmaps/e2e-watch-test-configmap-a,UID:b7df872c-ef10-4fa4-93d7-2093aff77060,ResourceVersion:4121123,Generation:0,CreationTimestamp:2020-04-07 13:08:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Apr 7 13:08:26.882: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3541,SelfLink:/api/v1/namespaces/watch-3541/configmaps/e2e-watch-test-configmap-a,UID:b7df872c-ef10-4fa4-93d7-2093aff77060,ResourceVersion:4121123,Generation:0,CreationTimestamp:2020-04-07 13:08:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Apr 7 13:08:36.891: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3541,SelfLink:/api/v1/namespaces/watch-3541/configmaps/e2e-watch-test-configmap-a,UID:b7df872c-ef10-4fa4-93d7-2093aff77060,ResourceVersion:4121144,Generation:0,CreationTimestamp:2020-04-07 13:08:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 7 13:08:36.891: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3541,SelfLink:/api/v1/namespaces/watch-3541/configmaps/e2e-watch-test-configmap-a,UID:b7df872c-ef10-4fa4-93d7-2093aff77060,ResourceVersion:4121144,Generation:0,CreationTimestamp:2020-04-07 13:08:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Apr 7 13:08:46.898: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3541,SelfLink:/api/v1/namespaces/watch-3541/configmaps/e2e-watch-test-configmap-a,UID:b7df872c-ef10-4fa4-93d7-2093aff77060,ResourceVersion:4121164,Generation:0,CreationTimestamp:2020-04-07 13:08:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 7 13:08:46.898: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3541,SelfLink:/api/v1/namespaces/watch-3541/configmaps/e2e-watch-test-configmap-a,UID:b7df872c-ef10-4fa4-93d7-2093aff77060,ResourceVersion:4121164,Generation:0,CreationTimestamp:2020-04-07 13:08:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Apr 7 13:08:56.906: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3541,SelfLink:/api/v1/namespaces/watch-3541/configmaps/e2e-watch-test-configmap-b,UID:24dcb6b1-8783-4872-a7ea-2d6f649dd04a,ResourceVersion:4121185,Generation:0,CreationTimestamp:2020-04-07 13:08:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 7 13:08:56.906: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3541,SelfLink:/api/v1/namespaces/watch-3541/configmaps/e2e-watch-test-configmap-b,UID:24dcb6b1-8783-4872-a7ea-2d6f649dd04a,ResourceVersion:4121185,Generation:0,CreationTimestamp:2020-04-07 13:08:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Apr 7 13:09:06.912: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3541,SelfLink:/api/v1/namespaces/watch-3541/configmaps/e2e-watch-test-configmap-b,UID:24dcb6b1-8783-4872-a7ea-2d6f649dd04a,ResourceVersion:4121205,Generation:0,CreationTimestamp:2020-04-07 13:08:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 7 13:09:06.912: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3541,SelfLink:/api/v1/namespaces/watch-3541/configmaps/e2e-watch-test-configmap-b,UID:24dcb6b1-8783-4872-a7ea-2d6f649dd04a,ResourceVersion:4121205,Generation:0,CreationTimestamp:2020-04-07 13:08:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:09:16.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3541" for this suite. 
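The ADDED/MODIFIED/DELETED events above are driven by plain labeled ConfigMaps. Reconstructed from the names and fields visible in the log (anything not shown there, such as exact data quoting, is assumed), configmap A at its final observed state looks roughly like:

```yaml
# Sketch reconstructed from the watch events logged above; the test
# increments the "mutation" value once per MODIFIED event.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  namespace: watch-3541
  labels:
    watch-this-configmap: multiple-watchers-A   # the label the watchers select on
data:
  mutation: "2"
```

The watchers filter on the `watch-this-configmap` label value, which is why configmap B (labeled `multiple-watchers-B`) notifies a different set of watchers later in the spec.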
Apr 7 13:09:22.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:09:23.055: INFO: namespace watch-3541 deletion completed in 6.137206672s
• [SLOW TEST:66.240 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:09:23.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-l2g4
STEP: Creating a pod to test atomic-volume-subpath
Apr 7 13:09:23.127: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-l2g4" in namespace "subpath-1271" to be "success or failure"
Apr 7 13:09:23.131: INFO: Pod "pod-subpath-test-projected-l2g4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.805225ms
Apr 7 13:09:25.135: INFO: Pod "pod-subpath-test-projected-l2g4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007467124s
Apr 7 13:09:27.139: INFO: Pod "pod-subpath-test-projected-l2g4": Phase="Running", Reason="", readiness=true. Elapsed: 4.011926697s
Apr 7 13:09:29.143: INFO: Pod "pod-subpath-test-projected-l2g4": Phase="Running", Reason="", readiness=true. Elapsed: 6.016108891s
Apr 7 13:09:31.147: INFO: Pod "pod-subpath-test-projected-l2g4": Phase="Running", Reason="", readiness=true. Elapsed: 8.019857946s
Apr 7 13:09:33.151: INFO: Pod "pod-subpath-test-projected-l2g4": Phase="Running", Reason="", readiness=true. Elapsed: 10.024253626s
Apr 7 13:09:35.156: INFO: Pod "pod-subpath-test-projected-l2g4": Phase="Running", Reason="", readiness=true. Elapsed: 12.028748007s
Apr 7 13:09:37.160: INFO: Pod "pod-subpath-test-projected-l2g4": Phase="Running", Reason="", readiness=true. Elapsed: 14.033145746s
Apr 7 13:09:39.164: INFO: Pod "pod-subpath-test-projected-l2g4": Phase="Running", Reason="", readiness=true. Elapsed: 16.036816493s
Apr 7 13:09:41.168: INFO: Pod "pod-subpath-test-projected-l2g4": Phase="Running", Reason="", readiness=true. Elapsed: 18.040659969s
Apr 7 13:09:43.172: INFO: Pod "pod-subpath-test-projected-l2g4": Phase="Running", Reason="", readiness=true. Elapsed: 20.045045421s
Apr 7 13:09:45.176: INFO: Pod "pod-subpath-test-projected-l2g4": Phase="Running", Reason="", readiness=true. Elapsed: 22.049345162s
Apr 7 13:09:47.181: INFO: Pod "pod-subpath-test-projected-l2g4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.053704967s
STEP: Saw pod success
Apr 7 13:09:47.181: INFO: Pod "pod-subpath-test-projected-l2g4" satisfied condition "success or failure"
Apr 7 13:09:47.184: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-l2g4 container test-container-subpath-projected-l2g4:
STEP: delete the pod
Apr 7 13:09:47.240: INFO: Waiting for pod pod-subpath-test-projected-l2g4 to disappear
Apr 7 13:09:47.246: INFO: Pod pod-subpath-test-projected-l2g4 no longer exists
STEP: Deleting pod pod-subpath-test-projected-l2g4
Apr 7 13:09:47.246: INFO: Deleting pod "pod-subpath-test-projected-l2g4" in namespace "subpath-1271"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:09:47.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1271" for this suite.
Apr 7 13:09:53.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:09:53.336: INFO: namespace subpath-1271 deletion completed in 6.084141271s
• [SLOW TEST:30.281 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:09:53.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 7 13:09:53.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:09:57.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-274" for this suite.
Apr 7 13:10:35.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:10:35.560: INFO: namespace pods-274 deletion completed in 38.092175583s
• [SLOW TEST:42.223 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:10:35.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a
default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 7 13:10:35.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-8226' Apr 7 13:10:37.702: INFO: stderr: "" Apr 7 13:10:37.703: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Apr 7 13:10:42.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-8226 -o json' Apr 7 13:10:42.838: INFO: stderr: "" Apr 7 13:10:42.838: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-07T13:10:37Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-8226\",\n \"resourceVersion\": \"4121458\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-8226/pods/e2e-test-nginx-pod\",\n \"uid\": \"81d2d136-769e-4e4f-8d3f-674853cb47da\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": 
\"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-2fjg8\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-2fjg8\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-2fjg8\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-07T13:10:37Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-07T13:10:40Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-07T13:10:40Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-07T13:10:37Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://b52e90f985faa25927a60886cbcc1f027f696b42d701331043985dcbb4329148\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": 
\"2020-04-07T13:10:40Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.6\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.249\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-07T13:10:37Z\"\n }\n}\n" STEP: replace the image in the pod Apr 7 13:10:42.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-8226' Apr 7 13:10:43.151: INFO: stderr: "" Apr 7 13:10:43.151: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Apr 7 13:10:43.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-8226' Apr 7 13:10:52.184: INFO: stderr: "" Apr 7 13:10:52.184: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:10:52.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8226" for this suite. 
Apr 7 13:10:58.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:10:58.283: INFO: namespace kubectl-8226 deletion completed in 6.088731962s
• [SLOW TEST:22.723 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:10:58.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Apr 7 13:10:58.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Apr 7 13:10:58.481: INFO: stderr: ""
Apr 7 13:10:58.481: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:10:58.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4122" for this suite.
Apr 7 13:11:04.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:11:04.640: INFO: namespace kubectl-4122 deletion completed in 6.152250692s
• [SLOW TEST:6.356 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:11:04.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Apr 7 13:11:09.751: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:11:10.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1960" for this suite.
Apr 7 13:11:32.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:11:32.900: INFO: namespace replicaset-1960 deletion completed in 22.105704094s
• [SLOW TEST:28.260 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:11:32.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Apr 7 13:11:32.932: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 7 13:11:32.952: INFO: Waiting for terminating namespaces to be deleted...
Apr 7 13:11:32.955: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Apr 7 13:11:32.961: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 7 13:11:32.961: INFO: Container kube-proxy ready: true, restart count 0
Apr 7 13:11:32.961: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 7 13:11:32.961: INFO: Container kindnet-cni ready: true, restart count 0
Apr 7 13:11:32.961: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Apr 7 13:11:32.966: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
Apr 7 13:11:32.966: INFO: Container kube-proxy ready: true, restart count 0
Apr 7 13:11:32.966: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
Apr 7 13:11:32.966: INFO: Container kindnet-cni ready: true, restart count 0
Apr 7 13:11:32.966: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
Apr 7 13:11:32.966: INFO: Container coredns ready: true, restart count 0
Apr 7 13:11:32.966: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
Apr 7 13:11:32.967: INFO: Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.16038bb29bf19246], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
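The FailedScheduling event above is produced by a pod whose `nodeSelector` matches no node label. A minimal sketch of such a pod (the test generates a unique, guaranteed-unmatched label; the key/value and image below are placeholders, not the test's actual values):

```yaml
# Sketch only: label key/value and image are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    example.com/nonexistent: "42"   # no node carries this label, hence "0/3 nodes are available"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```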
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:11:34.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6074" for this suite.
Apr 7 13:11:40.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:11:40.111: INFO: namespace sched-pred-6074 deletion completed in 6.100939502s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:7.211 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:11:40.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Apr 7 13:11:40.155: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:11:40.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9" for this suite.
Apr 7 13:11:46.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:11:46.341: INFO: namespace kubectl-9 deletion completed in 6.098530251s
• [SLOW TEST:6.229 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0 [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:11:46.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
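"Keep the rc around until all its pods are deleted" is foreground cascading deletion: the replication controller gets a `foregroundDeletion` finalizer and is only removed after its dependents are gone. The test sets this programmatically through the client; expressed as the `DeleteOptions` body a raw API delete would send (a sketch of the semantics, not the test's exact wire payload):

```yaml
# Sketch of the delete semantics exercised above: Foreground propagation
# blocks final removal of the RC behind deletion of its dependent pods.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground
```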
Apr 7 13:11:53.220: INFO: 0 pods remaining
Apr 7 13:11:53.220: INFO: 0 pods has nil DeletionTimestamp
Apr 7 13:11:53.220: INFO:
STEP: Gathering metrics
W0407 13:11:54.331969 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 7 13:11:54.332: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:11:54.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9057" for this suite.
Apr 7 13:12:00.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:12:00.860: INFO: namespace gc-9057 deletion completed in 6.309876211s
• [SLOW TEST:14.518 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:12:00.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4613
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-4613
STEP: Waiting until all stateful set ss
replicas will be running in namespace statefulset-4613 Apr 7 13:12:01.042: INFO: Found 0 stateful pods, waiting for 1 Apr 7 13:12:11.047: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 7 13:12:11.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4613 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 7 13:12:11.314: INFO: stderr: "I0407 13:12:11.191449 170 log.go:172] (0xc0009f06e0) (0xc000366aa0) Create stream\nI0407 13:12:11.191512 170 log.go:172] (0xc0009f06e0) (0xc000366aa0) Stream added, broadcasting: 1\nI0407 13:12:11.193996 170 log.go:172] (0xc0009f06e0) Reply frame received for 1\nI0407 13:12:11.194030 170 log.go:172] (0xc0009f06e0) (0xc000366b40) Create stream\nI0407 13:12:11.194040 170 log.go:172] (0xc0009f06e0) (0xc000366b40) Stream added, broadcasting: 3\nI0407 13:12:11.194992 170 log.go:172] (0xc0009f06e0) Reply frame received for 3\nI0407 13:12:11.195027 170 log.go:172] (0xc0009f06e0) (0xc00091e000) Create stream\nI0407 13:12:11.195039 170 log.go:172] (0xc0009f06e0) (0xc00091e000) Stream added, broadcasting: 5\nI0407 13:12:11.196063 170 log.go:172] (0xc0009f06e0) Reply frame received for 5\nI0407 13:12:11.279108 170 log.go:172] (0xc0009f06e0) Data frame received for 5\nI0407 13:12:11.279137 170 log.go:172] (0xc00091e000) (5) Data frame handling\nI0407 13:12:11.279158 170 log.go:172] (0xc00091e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0407 13:12:11.306235 170 log.go:172] (0xc0009f06e0) Data frame received for 3\nI0407 13:12:11.306264 170 log.go:172] (0xc000366b40) (3) Data frame handling\nI0407 13:12:11.306282 170 log.go:172] (0xc000366b40) (3) Data frame sent\nI0407 13:12:11.306521 170 log.go:172] (0xc0009f06e0) Data frame received for 3\nI0407 13:12:11.306543 170 log.go:172] (0xc000366b40) (3) Data frame 
handling\nI0407 13:12:11.306723 170 log.go:172] (0xc0009f06e0) Data frame received for 5\nI0407 13:12:11.306738 170 log.go:172] (0xc00091e000) (5) Data frame handling\nI0407 13:12:11.309979 170 log.go:172] (0xc0009f06e0) Data frame received for 1\nI0407 13:12:11.309997 170 log.go:172] (0xc000366aa0) (1) Data frame handling\nI0407 13:12:11.310005 170 log.go:172] (0xc000366aa0) (1) Data frame sent\nI0407 13:12:11.310016 170 log.go:172] (0xc0009f06e0) (0xc000366aa0) Stream removed, broadcasting: 1\nI0407 13:12:11.310076 170 log.go:172] (0xc0009f06e0) Go away received\nI0407 13:12:11.310311 170 log.go:172] (0xc0009f06e0) (0xc000366aa0) Stream removed, broadcasting: 1\nI0407 13:12:11.310326 170 log.go:172] (0xc0009f06e0) (0xc000366b40) Stream removed, broadcasting: 3\nI0407 13:12:11.310333 170 log.go:172] (0xc0009f06e0) (0xc00091e000) Stream removed, broadcasting: 5\n" Apr 7 13:12:11.314: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 7 13:12:11.314: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 7 13:12:11.318: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 7 13:12:21.322: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 7 13:12:21.322: INFO: Waiting for statefulset status.replicas updated to 0 Apr 7 13:12:21.337: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999655s Apr 7 13:12:22.342: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.994490276s Apr 7 13:12:23.347: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.990192498s Apr 7 13:12:24.352: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.985245472s Apr 7 13:12:25.356: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.980213907s Apr 7 13:12:26.361: INFO: Verifying statefulset ss doesn't scale past 1 for another 
4.976165764s Apr 7 13:12:27.366: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.970830271s Apr 7 13:12:28.371: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.965550649s Apr 7 13:12:29.376: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.96050242s Apr 7 13:12:30.381: INFO: Verifying statefulset ss doesn't scale past 1 for another 955.640218ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4613 Apr 7 13:12:31.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4613 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 7 13:12:31.607: INFO: stderr: "I0407 13:12:31.522889 191 log.go:172] (0xc0006fab00) (0xc0005fe6e0) Create stream\nI0407 13:12:31.522938 191 log.go:172] (0xc0006fab00) (0xc0005fe6e0) Stream added, broadcasting: 1\nI0407 13:12:31.525641 191 log.go:172] (0xc0006fab00) Reply frame received for 1\nI0407 13:12:31.525692 191 log.go:172] (0xc0006fab00) (0xc000948000) Create stream\nI0407 13:12:31.525708 191 log.go:172] (0xc0006fab00) (0xc000948000) Stream added, broadcasting: 3\nI0407 13:12:31.526886 191 log.go:172] (0xc0006fab00) Reply frame received for 3\nI0407 13:12:31.526922 191 log.go:172] (0xc0006fab00) (0xc0005fe780) Create stream\nI0407 13:12:31.526933 191 log.go:172] (0xc0006fab00) (0xc0005fe780) Stream added, broadcasting: 5\nI0407 13:12:31.528184 191 log.go:172] (0xc0006fab00) Reply frame received for 5\nI0407 13:12:31.600068 191 log.go:172] (0xc0006fab00) Data frame received for 5\nI0407 13:12:31.600109 191 log.go:172] (0xc0005fe780) (5) Data frame handling\nI0407 13:12:31.600123 191 log.go:172] (0xc0005fe780) (5) Data frame sent\nI0407 13:12:31.600134 191 log.go:172] (0xc0006fab00) Data frame received for 5\nI0407 13:12:31.600144 191 log.go:172] (0xc0005fe780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0407 
13:12:31.600170 191 log.go:172] (0xc0006fab00) Data frame received for 3\nI0407 13:12:31.600192 191 log.go:172] (0xc000948000) (3) Data frame handling\nI0407 13:12:31.600235 191 log.go:172] (0xc000948000) (3) Data frame sent\nI0407 13:12:31.600284 191 log.go:172] (0xc0006fab00) Data frame received for 3\nI0407 13:12:31.600305 191 log.go:172] (0xc000948000) (3) Data frame handling\nI0407 13:12:31.602002 191 log.go:172] (0xc0006fab00) Data frame received for 1\nI0407 13:12:31.602041 191 log.go:172] (0xc0005fe6e0) (1) Data frame handling\nI0407 13:12:31.602085 191 log.go:172] (0xc0005fe6e0) (1) Data frame sent\nI0407 13:12:31.602211 191 log.go:172] (0xc0006fab00) (0xc0005fe6e0) Stream removed, broadcasting: 1\nI0407 13:12:31.602271 191 log.go:172] (0xc0006fab00) Go away received\nI0407 13:12:31.602723 191 log.go:172] (0xc0006fab00) (0xc0005fe6e0) Stream removed, broadcasting: 1\nI0407 13:12:31.602759 191 log.go:172] (0xc0006fab00) (0xc000948000) Stream removed, broadcasting: 3\nI0407 13:12:31.602775 191 log.go:172] (0xc0006fab00) (0xc0005fe780) Stream removed, broadcasting: 5\n" Apr 7 13:12:31.607: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 7 13:12:31.607: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 7 13:12:31.610: INFO: Found 1 stateful pods, waiting for 3 Apr 7 13:12:41.630: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 7 13:12:41.630: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 7 13:12:41.630: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 7 13:12:41.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4613 ss-0 -- /bin/sh -x -c mv -v 
/usr/share/nginx/html/index.html /tmp/ || true' Apr 7 13:12:41.864: INFO: stderr: "I0407 13:12:41.766075 213 log.go:172] (0xc00096a370) (0xc00055c820) Create stream\nI0407 13:12:41.766127 213 log.go:172] (0xc00096a370) (0xc00055c820) Stream added, broadcasting: 1\nI0407 13:12:41.769044 213 log.go:172] (0xc00096a370) Reply frame received for 1\nI0407 13:12:41.769086 213 log.go:172] (0xc00096a370) (0xc00055c000) Create stream\nI0407 13:12:41.769100 213 log.go:172] (0xc00096a370) (0xc00055c000) Stream added, broadcasting: 3\nI0407 13:12:41.770191 213 log.go:172] (0xc00096a370) Reply frame received for 3\nI0407 13:12:41.770232 213 log.go:172] (0xc00096a370) (0xc0005f41e0) Create stream\nI0407 13:12:41.770242 213 log.go:172] (0xc00096a370) (0xc0005f41e0) Stream added, broadcasting: 5\nI0407 13:12:41.771051 213 log.go:172] (0xc00096a370) Reply frame received for 5\nI0407 13:12:41.858043 213 log.go:172] (0xc00096a370) Data frame received for 5\nI0407 13:12:41.858104 213 log.go:172] (0xc0005f41e0) (5) Data frame handling\nI0407 13:12:41.858129 213 log.go:172] (0xc0005f41e0) (5) Data frame sent\nI0407 13:12:41.858158 213 log.go:172] (0xc00096a370) Data frame received for 5\nI0407 13:12:41.858178 213 log.go:172] (0xc0005f41e0) (5) Data frame handling\nI0407 13:12:41.858196 213 log.go:172] (0xc00096a370) Data frame received for 3\nI0407 13:12:41.858208 213 log.go:172] (0xc00055c000) (3) Data frame handling\nI0407 13:12:41.858228 213 log.go:172] (0xc00055c000) (3) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0407 13:12:41.858254 213 log.go:172] (0xc00096a370) Data frame received for 3\nI0407 13:12:41.858341 213 log.go:172] (0xc00055c000) (3) Data frame handling\nI0407 13:12:41.859532 213 log.go:172] (0xc00096a370) Data frame received for 1\nI0407 13:12:41.859566 213 log.go:172] (0xc00055c820) (1) Data frame handling\nI0407 13:12:41.859583 213 log.go:172] (0xc00055c820) (1) Data frame sent\nI0407 13:12:41.859598 213 log.go:172] (0xc00096a370) (0xc00055c820) 
Stream removed, broadcasting: 1\nI0407 13:12:41.859626 213 log.go:172] (0xc00096a370) Go away received\nI0407 13:12:41.859995 213 log.go:172] (0xc00096a370) (0xc00055c820) Stream removed, broadcasting: 1\nI0407 13:12:41.860022 213 log.go:172] (0xc00096a370) (0xc00055c000) Stream removed, broadcasting: 3\nI0407 13:12:41.860041 213 log.go:172] (0xc00096a370) (0xc0005f41e0) Stream removed, broadcasting: 5\n" Apr 7 13:12:41.865: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 7 13:12:41.865: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 7 13:12:41.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4613 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 7 13:12:42.105: INFO: stderr: "I0407 13:12:41.992771 234 log.go:172] (0xc000116fd0) (0xc000200aa0) Create stream\nI0407 13:12:41.992839 234 log.go:172] (0xc000116fd0) (0xc000200aa0) Stream added, broadcasting: 1\nI0407 13:12:41.997514 234 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0407 13:12:41.997571 234 log.go:172] (0xc000116fd0) (0xc0002001e0) Create stream\nI0407 13:12:41.997586 234 log.go:172] (0xc000116fd0) (0xc0002001e0) Stream added, broadcasting: 3\nI0407 13:12:41.998858 234 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0407 13:12:41.998902 234 log.go:172] (0xc000116fd0) (0xc000200280) Create stream\nI0407 13:12:41.998936 234 log.go:172] (0xc000116fd0) (0xc000200280) Stream added, broadcasting: 5\nI0407 13:12:42.000031 234 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0407 13:12:42.064636 234 log.go:172] (0xc000116fd0) Data frame received for 5\nI0407 13:12:42.064662 234 log.go:172] (0xc000200280) (5) Data frame handling\nI0407 13:12:42.064677 234 log.go:172] (0xc000200280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0407 13:12:42.096628 234 log.go:172] 
(0xc000116fd0) Data frame received for 3\nI0407 13:12:42.096659 234 log.go:172] (0xc0002001e0) (3) Data frame handling\nI0407 13:12:42.096674 234 log.go:172] (0xc0002001e0) (3) Data frame sent\nI0407 13:12:42.096883 234 log.go:172] (0xc000116fd0) Data frame received for 5\nI0407 13:12:42.096995 234 log.go:172] (0xc000200280) (5) Data frame handling\nI0407 13:12:42.097097 234 log.go:172] (0xc000116fd0) Data frame received for 3\nI0407 13:12:42.097181 234 log.go:172] (0xc0002001e0) (3) Data frame handling\nI0407 13:12:42.099693 234 log.go:172] (0xc000116fd0) Data frame received for 1\nI0407 13:12:42.099718 234 log.go:172] (0xc000200aa0) (1) Data frame handling\nI0407 13:12:42.099746 234 log.go:172] (0xc000200aa0) (1) Data frame sent\nI0407 13:12:42.099771 234 log.go:172] (0xc000116fd0) (0xc000200aa0) Stream removed, broadcasting: 1\nI0407 13:12:42.099797 234 log.go:172] (0xc000116fd0) Go away received\nI0407 13:12:42.100294 234 log.go:172] (0xc000116fd0) (0xc000200aa0) Stream removed, broadcasting: 1\nI0407 13:12:42.100318 234 log.go:172] (0xc000116fd0) (0xc0002001e0) Stream removed, broadcasting: 3\nI0407 13:12:42.100330 234 log.go:172] (0xc000116fd0) (0xc000200280) Stream removed, broadcasting: 5\n" Apr 7 13:12:42.105: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 7 13:12:42.105: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 7 13:12:42.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4613 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 7 13:12:42.344: INFO: stderr: "I0407 13:12:42.236470 254 log.go:172] (0xc000a7a420) (0xc0005ac6e0) Create stream\nI0407 13:12:42.236527 254 log.go:172] (0xc000a7a420) (0xc0005ac6e0) Stream added, broadcasting: 1\nI0407 13:12:42.239675 254 log.go:172] (0xc000a7a420) Reply frame received for 1\nI0407 13:12:42.239715 254 
log.go:172] (0xc000a7a420) (0xc00010e460) Create stream\nI0407 13:12:42.239723 254 log.go:172] (0xc000a7a420) (0xc00010e460) Stream added, broadcasting: 3\nI0407 13:12:42.240483 254 log.go:172] (0xc000a7a420) Reply frame received for 3\nI0407 13:12:42.240514 254 log.go:172] (0xc000a7a420) (0xc0005ac000) Create stream\nI0407 13:12:42.240524 254 log.go:172] (0xc000a7a420) (0xc0005ac000) Stream added, broadcasting: 5\nI0407 13:12:42.241262 254 log.go:172] (0xc000a7a420) Reply frame received for 5\nI0407 13:12:42.313519 254 log.go:172] (0xc000a7a420) Data frame received for 5\nI0407 13:12:42.313562 254 log.go:172] (0xc0005ac000) (5) Data frame handling\nI0407 13:12:42.313583 254 log.go:172] (0xc0005ac000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0407 13:12:42.336678 254 log.go:172] (0xc000a7a420) Data frame received for 3\nI0407 13:12:42.336785 254 log.go:172] (0xc00010e460) (3) Data frame handling\nI0407 13:12:42.336817 254 log.go:172] (0xc00010e460) (3) Data frame sent\nI0407 13:12:42.336874 254 log.go:172] (0xc000a7a420) Data frame received for 3\nI0407 13:12:42.336932 254 log.go:172] (0xc00010e460) (3) Data frame handling\nI0407 13:12:42.336965 254 log.go:172] (0xc000a7a420) Data frame received for 5\nI0407 13:12:42.336983 254 log.go:172] (0xc0005ac000) (5) Data frame handling\nI0407 13:12:42.338981 254 log.go:172] (0xc000a7a420) Data frame received for 1\nI0407 13:12:42.339017 254 log.go:172] (0xc0005ac6e0) (1) Data frame handling\nI0407 13:12:42.339042 254 log.go:172] (0xc0005ac6e0) (1) Data frame sent\nI0407 13:12:42.339078 254 log.go:172] (0xc000a7a420) (0xc0005ac6e0) Stream removed, broadcasting: 1\nI0407 13:12:42.339109 254 log.go:172] (0xc000a7a420) Go away received\nI0407 13:12:42.339559 254 log.go:172] (0xc000a7a420) (0xc0005ac6e0) Stream removed, broadcasting: 1\nI0407 13:12:42.339586 254 log.go:172] (0xc000a7a420) (0xc00010e460) Stream removed, broadcasting: 3\nI0407 13:12:42.339598 254 log.go:172] (0xc000a7a420) 
(0xc0005ac000) Stream removed, broadcasting: 5\n" Apr 7 13:12:42.344: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 7 13:12:42.344: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 7 13:12:42.344: INFO: Waiting for statefulset status.replicas updated to 0 Apr 7 13:12:42.347: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 7 13:12:52.356: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 7 13:12:52.356: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 7 13:12:52.356: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 7 13:12:52.384: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999165s Apr 7 13:12:53.390: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.97870032s Apr 7 13:12:54.396: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.972979254s Apr 7 13:12:55.401: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.96743865s Apr 7 13:12:56.407: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.961926366s Apr 7 13:12:57.413: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.955585555s Apr 7 13:12:58.418: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.950241962s Apr 7 13:12:59.423: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.944816603s Apr 7 13:13:00.429: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.939772896s Apr 7 13:13:01.433: INFO: Verifying statefulset ss doesn't scale past 3 for another 934.573376ms STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-4613 Apr 7 13:13:02.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-4613 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 7 13:13:02.670: INFO: stderr: "I0407 13:13:02.567567 274 log.go:172] (0xc000a56420) (0xc00062eb40) Create stream\nI0407 13:13:02.567621 274 log.go:172] (0xc000a56420) (0xc00062eb40) Stream added, broadcasting: 1\nI0407 13:13:02.572000 274 log.go:172] (0xc000a56420) Reply frame received for 1\nI0407 13:13:02.572042 274 log.go:172] (0xc000a56420) (0xc00062e320) Create stream\nI0407 13:13:02.572054 274 log.go:172] (0xc000a56420) (0xc00062e320) Stream added, broadcasting: 3\nI0407 13:13:02.572964 274 log.go:172] (0xc000a56420) Reply frame received for 3\nI0407 13:13:02.573018 274 log.go:172] (0xc000a56420) (0xc00018a000) Create stream\nI0407 13:13:02.573041 274 log.go:172] (0xc000a56420) (0xc00018a000) Stream added, broadcasting: 5\nI0407 13:13:02.574446 274 log.go:172] (0xc000a56420) Reply frame received for 5\nI0407 13:13:02.656240 274 log.go:172] (0xc000a56420) Data frame received for 5\nI0407 13:13:02.656272 274 log.go:172] (0xc000a56420) Data frame received for 3\nI0407 13:13:02.656302 274 log.go:172] (0xc00062e320) (3) Data frame handling\nI0407 13:13:02.656313 274 log.go:172] (0xc00062e320) (3) Data frame sent\nI0407 13:13:02.656322 274 log.go:172] (0xc000a56420) Data frame received for 3\nI0407 13:13:02.656329 274 log.go:172] (0xc00062e320) (3) Data frame handling\nI0407 13:13:02.656369 274 log.go:172] (0xc00018a000) (5) Data frame handling\nI0407 13:13:02.656438 274 log.go:172] (0xc00018a000) (5) Data frame sent\nI0407 13:13:02.656471 274 log.go:172] (0xc000a56420) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0407 13:13:02.656491 274 log.go:172] (0xc00018a000) (5) Data frame handling\nI0407 13:13:02.666748 274 log.go:172] (0xc000a56420) Data frame received for 1\nI0407 13:13:02.666774 274 log.go:172] (0xc00062eb40) (1) Data frame handling\nI0407 13:13:02.666786 274 log.go:172] (0xc00062eb40) (1) Data frame sent\nI0407 
13:13:02.666801 274 log.go:172] (0xc000a56420) (0xc00062eb40) Stream removed, broadcasting: 1\nI0407 13:13:02.666818 274 log.go:172] (0xc000a56420) Go away received\nI0407 13:13:02.667293 274 log.go:172] (0xc000a56420) (0xc00062eb40) Stream removed, broadcasting: 1\nI0407 13:13:02.667331 274 log.go:172] (0xc000a56420) (0xc00062e320) Stream removed, broadcasting: 3\nI0407 13:13:02.667351 274 log.go:172] (0xc000a56420) (0xc00018a000) Stream removed, broadcasting: 5\n" Apr 7 13:13:02.670: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 7 13:13:02.670: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 7 13:13:02.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4613 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 7 13:13:02.859: INFO: stderr: "I0407 13:13:02.789675 294 log.go:172] (0xc0008d0420) (0xc0006ce6e0) Create stream\nI0407 13:13:02.789728 294 log.go:172] (0xc0008d0420) (0xc0006ce6e0) Stream added, broadcasting: 1\nI0407 13:13:02.791944 294 log.go:172] (0xc0008d0420) Reply frame received for 1\nI0407 13:13:02.791985 294 log.go:172] (0xc0008d0420) (0xc0005d6140) Create stream\nI0407 13:13:02.791996 294 log.go:172] (0xc0008d0420) (0xc0005d6140) Stream added, broadcasting: 3\nI0407 13:13:02.793449 294 log.go:172] (0xc0008d0420) Reply frame received for 3\nI0407 13:13:02.793502 294 log.go:172] (0xc0008d0420) (0xc00094c000) Create stream\nI0407 13:13:02.793528 294 log.go:172] (0xc0008d0420) (0xc00094c000) Stream added, broadcasting: 5\nI0407 13:13:02.794607 294 log.go:172] (0xc0008d0420) Reply frame received for 5\nI0407 13:13:02.852933 294 log.go:172] (0xc0008d0420) Data frame received for 5\nI0407 13:13:02.852973 294 log.go:172] (0xc00094c000) (5) Data frame handling\nI0407 13:13:02.852987 294 log.go:172] (0xc00094c000) (5) Data frame sent\nI0407 13:13:02.852999 
294 log.go:172] (0xc0008d0420) Data frame received for 5\nI0407 13:13:02.853008 294 log.go:172] (0xc00094c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0407 13:13:02.853039 294 log.go:172] (0xc0008d0420) Data frame received for 3\nI0407 13:13:02.853073 294 log.go:172] (0xc0005d6140) (3) Data frame handling\nI0407 13:13:02.853104 294 log.go:172] (0xc0005d6140) (3) Data frame sent\nI0407 13:13:02.853291 294 log.go:172] (0xc0008d0420) Data frame received for 3\nI0407 13:13:02.853322 294 log.go:172] (0xc0005d6140) (3) Data frame handling\nI0407 13:13:02.855281 294 log.go:172] (0xc0008d0420) Data frame received for 1\nI0407 13:13:02.855300 294 log.go:172] (0xc0006ce6e0) (1) Data frame handling\nI0407 13:13:02.855310 294 log.go:172] (0xc0006ce6e0) (1) Data frame sent\nI0407 13:13:02.855321 294 log.go:172] (0xc0008d0420) (0xc0006ce6e0) Stream removed, broadcasting: 1\nI0407 13:13:02.855337 294 log.go:172] (0xc0008d0420) Go away received\nI0407 13:13:02.855734 294 log.go:172] (0xc0008d0420) (0xc0006ce6e0) Stream removed, broadcasting: 1\nI0407 13:13:02.855757 294 log.go:172] (0xc0008d0420) (0xc0005d6140) Stream removed, broadcasting: 3\nI0407 13:13:02.855769 294 log.go:172] (0xc0008d0420) (0xc00094c000) Stream removed, broadcasting: 5\n" Apr 7 13:13:02.859: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 7 13:13:02.860: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 7 13:13:02.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4613 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 7 13:13:03.073: INFO: stderr: "I0407 13:13:03.003158 313 log.go:172] (0xc000620a50) (0xc0003b6820) Create stream\nI0407 13:13:03.003242 313 log.go:172] (0xc000620a50) (0xc0003b6820) Stream added, broadcasting: 1\nI0407 13:13:03.005810 313 log.go:172] 
(0xc000620a50) Reply frame received for 1\nI0407 13:13:03.005848 313 log.go:172] (0xc000620a50) (0xc000922000) Create stream\nI0407 13:13:03.005859 313 log.go:172] (0xc000620a50) (0xc000922000) Stream added, broadcasting: 3\nI0407 13:13:03.006662 313 log.go:172] (0xc000620a50) Reply frame received for 3\nI0407 13:13:03.006684 313 log.go:172] (0xc000620a50) (0xc0003b68c0) Create stream\nI0407 13:13:03.006691 313 log.go:172] (0xc000620a50) (0xc0003b68c0) Stream added, broadcasting: 5\nI0407 13:13:03.007736 313 log.go:172] (0xc000620a50) Reply frame received for 5\nI0407 13:13:03.065373 313 log.go:172] (0xc000620a50) Data frame received for 3\nI0407 13:13:03.065415 313 log.go:172] (0xc000922000) (3) Data frame handling\nI0407 13:13:03.065437 313 log.go:172] (0xc000922000) (3) Data frame sent\nI0407 13:13:03.065455 313 log.go:172] (0xc000620a50) Data frame received for 3\nI0407 13:13:03.065475 313 log.go:172] (0xc000922000) (3) Data frame handling\nI0407 13:13:03.065522 313 log.go:172] (0xc000620a50) Data frame received for 5\nI0407 13:13:03.065563 313 log.go:172] (0xc0003b68c0) (5) Data frame handling\nI0407 13:13:03.065595 313 log.go:172] (0xc0003b68c0) (5) Data frame sent\nI0407 13:13:03.065609 313 log.go:172] (0xc000620a50) Data frame received for 5\nI0407 13:13:03.065620 313 log.go:172] (0xc0003b68c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0407 13:13:03.067124 313 log.go:172] (0xc000620a50) Data frame received for 1\nI0407 13:13:03.067169 313 log.go:172] (0xc0003b6820) (1) Data frame handling\nI0407 13:13:03.067201 313 log.go:172] (0xc0003b6820) (1) Data frame sent\nI0407 13:13:03.067228 313 log.go:172] (0xc000620a50) (0xc0003b6820) Stream removed, broadcasting: 1\nI0407 13:13:03.067289 313 log.go:172] (0xc000620a50) Go away received\nI0407 13:13:03.067736 313 log.go:172] (0xc000620a50) (0xc0003b6820) Stream removed, broadcasting: 1\nI0407 13:13:03.067770 313 log.go:172] (0xc000620a50) (0xc000922000) Stream removed, broadcasting: 
3\nI0407 13:13:03.067790 313 log.go:172] (0xc000620a50) (0xc0003b68c0) Stream removed, broadcasting: 5\n" Apr 7 13:13:03.073: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 7 13:13:03.073: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 7 13:13:03.073: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 7 13:13:23.101: INFO: Deleting all statefulset in ns statefulset-4613 Apr 7 13:13:23.108: INFO: Scaling statefulset ss to 0 Apr 7 13:13:23.116: INFO: Waiting for statefulset status.replicas updated to 0 Apr 7 13:13:23.118: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:13:23.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4613" for this suite. 
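Editor's note: every `kubectl exec` above runs the same `mv -v … || true` one-liner inside a pod to break or restore the file that (apparently) backs the pod's readiness probe, which is how the test forces scaling to halt and resume. A minimal local sketch of the idiom itself, no cluster required (the temp paths are stand-ins for the nginx webroot):

```shell
# Demonstrates the `mv -v ... || true` idiom from the exec calls above:
# -v prints the "'src' -> 'dst'" line seen in the captured stdout, and
# || true keeps the exit status 0 even if the source file is already gone,
# so a repeated exec cannot fail the test step.
tmp=$(mktemp -d)
mkdir -p "$tmp/html"
echo ok > "$tmp/html/index.html"

# First move succeeds and prints the rename, exactly as in the log.
mv -v "$tmp/html/index.html" "$tmp/" || true

# Second move fails (source is gone), but || true absorbs the error.
mv -v "$tmp/html/index.html" "$tmp/" || true
echo "exit=$?"
```

In the test, moving `index.html` out of `/usr/share/nginx/html/` makes the pod report Ready=false; moving it back flips the pod to Ready=true so the halted scale operation can proceed.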
Apr 7 13:13:29.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:13:29.274: INFO: namespace statefulset-4613 deletion completed in 6.09826241s • [SLOW TEST:88.414 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:13:29.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 7 13:13:29.429: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 13:13:29.448: INFO: Number of nodes with available pods: 0 Apr 7 13:13:29.448: INFO: Node iruya-worker is running more than one daemon pod Apr 7 13:13:30.453: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 13:13:30.456: INFO: Number of nodes with available pods: 0 Apr 7 13:13:30.456: INFO: Node iruya-worker is running more than one daemon pod Apr 7 13:13:31.465: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 13:13:31.468: INFO: Number of nodes with available pods: 0 Apr 7 13:13:31.468: INFO: Node iruya-worker is running more than one daemon pod Apr 7 13:13:32.452: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 13:13:32.456: INFO: Number of nodes with available pods: 0 Apr 7 13:13:32.456: INFO: Node iruya-worker is running more than one daemon pod Apr 7 13:13:33.453: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 13:13:33.457: INFO: Number of nodes with available pods: 1 Apr 7 13:13:33.457: INFO: Node iruya-worker is running more than one daemon pod Apr 7 13:13:34.452: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 13:13:34.455: INFO: Number of nodes with available pods: 2 Apr 7 13:13:34.455: INFO: Number of running nodes: 
2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Apr 7 13:13:34.483: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 13:13:34.511: INFO: Number of nodes with available pods: 2 Apr 7 13:13:34.511: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5965, will wait for the garbage collector to delete the pods Apr 7 13:13:35.591: INFO: Deleting DaemonSet.extensions daemon-set took: 6.91143ms Apr 7 13:13:35.692: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.303709ms Apr 7 13:13:39.295: INFO: Number of nodes with available pods: 0 Apr 7 13:13:39.295: INFO: Number of running nodes: 0, number of available pods: 0 Apr 7 13:13:39.300: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5965/daemonsets","resourceVersion":"4122316"},"items":null} Apr 7 13:13:39.302: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5965/pods","resourceVersion":"4122316"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:13:39.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5965" for this suite. 
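Editor's note: the "revived" check above can be approximated by hand against a live cluster by deleting one daemon pod and watching the DaemonSet controller replace it. A hedged sketch (the namespace matches the log; `kubectl` access to a real cluster is assumed, so the function is defined but not invoked here):

```shell
# Illustrative only: deleting a daemon pod approximates the test's
# "pod failed" case; the DaemonSet controller must create a replacement.
reproduce_revival() {
  ns=daemonsets-5965
  # The namespace is dedicated to the DaemonSet, so any pod in it will do.
  pod=$(kubectl -n "$ns" get pods -o name | head -n 1)
  kubectl -n "$ns" delete "$pod"
  # Watch the replacement pod appear and return to READY 1/1.
  kubectl -n "$ns" get pods -w
}
# reproduce_revival   # uncomment when pointed at a real cluster
```

Note also why the control-plane node is skipped in the log: the daemon pods carry no toleration for the `node-role.kubernetes.io/master:NoSchedule` taint, so the DaemonSet only targets the two worker nodes.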
Apr 7 13:13:45.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:13:45.399: INFO: namespace daemonsets-5965 deletion completed in 6.086195069s
• [SLOW TEST:16.125 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:13:45.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 7 13:13:45.444: INFO: Waiting up to 5m0s for pod "pod-6481dc42-299d-4e81-afe6-5eb75ec45c6c" in namespace "emptydir-5219" to be "success or failure"
Apr 7 13:13:45.455: INFO: Pod "pod-6481dc42-299d-4e81-afe6-5eb75ec45c6c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.21372ms
Apr 7 13:13:47.460: INFO: Pod "pod-6481dc42-299d-4e81-afe6-5eb75ec45c6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01543074s
Apr 7 13:13:49.464: INFO: Pod "pod-6481dc42-299d-4e81-afe6-5eb75ec45c6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019680125s
STEP: Saw pod success
Apr 7 13:13:49.464: INFO: Pod "pod-6481dc42-299d-4e81-afe6-5eb75ec45c6c" satisfied condition "success or failure"
Apr 7 13:13:49.467: INFO: Trying to get logs from node iruya-worker pod pod-6481dc42-299d-4e81-afe6-5eb75ec45c6c container test-container:
STEP: delete the pod
Apr 7 13:13:49.487: INFO: Waiting for pod pod-6481dc42-299d-4e81-afe6-5eb75ec45c6c to disappear
Apr 7 13:13:49.516: INFO: Pod pod-6481dc42-299d-4e81-afe6-5eb75ec45c6c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:13:49.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5219" for this suite.
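The (root,0644,tmpfs) case above mounts a memory-backed emptyDir and verifies a file's permission bits. A minimal hand-rolled equivalent looks like the following; the real test uses the framework's mounttest image, so the busybox command here is an illustrative stand-in:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox               # stand-in for the e2e mounttest image
    command: ["sh", "-c"]
    args:
    - touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir, as in the test name
```

The pod runs to completion and the framework treats Phase="Succeeded" as the "success or failure" condition being satisfied, as seen in the log.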
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0407 13:14:26.279468       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 7 13:14:26.279: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:14:26.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2734" for this suite.
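The orphaning behaviour verified above can be reproduced by deleting a Deployment with `deleteOptions.propagationPolicy=Orphan` (on a v1.15-era kubectl the flag is `--cascade=false`; newer clients spell it `--cascade=orphan`). An illustrative Deployment for trying this by hand; the name and image are placeholders, not the test's own objects:

```yaml
# Deleting this Deployment with propagationPolicy=Orphan removes only the
# Deployment object; its ReplicaSet (and pods) are left behind, which is
# exactly what the test waits 30 seconds to confirm the garbage collector
# does not delete by mistake.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orphan-demo               # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orphan-demo
  template:
    metadata:
      labels:
        app: orphan-demo
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1   # placeholder image
```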
Apr 7 13:14:32.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:14:32.387: INFO: namespace gc-2734 deletion completed in 6.10462212s
• [SLOW TEST:36.757 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:14:32.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-d6ea51ca-545c-4f25-9988-c7d3b3960439
STEP: Creating a pod to test consume secrets
Apr 7 13:14:32.615: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f8cdadbb-c26b-46e1-84c0-01cc86064830" in namespace "projected-8461" to be "success or failure"
Apr 7 13:14:32.634: INFO: Pod "pod-projected-secrets-f8cdadbb-c26b-46e1-84c0-01cc86064830": Phase="Pending", Reason="", readiness=false. Elapsed: 18.779739ms
Apr 7 13:14:34.638: INFO: Pod "pod-projected-secrets-f8cdadbb-c26b-46e1-84c0-01cc86064830": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022834561s
Apr 7 13:14:36.642: INFO: Pod "pod-projected-secrets-f8cdadbb-c26b-46e1-84c0-01cc86064830": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027419534s
STEP: Saw pod success
Apr 7 13:14:36.642: INFO: Pod "pod-projected-secrets-f8cdadbb-c26b-46e1-84c0-01cc86064830" satisfied condition "success or failure"
Apr 7 13:14:36.645: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-f8cdadbb-c26b-46e1-84c0-01cc86064830 container projected-secret-volume-test:
STEP: delete the pod
Apr 7 13:14:36.729: INFO: Waiting for pod pod-projected-secrets-f8cdadbb-c26b-46e1-84c0-01cc86064830 to disappear
Apr 7 13:14:36.744: INFO: Pod pod-projected-secrets-f8cdadbb-c26b-46e1-84c0-01cc86064830 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:14:36.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8461" for this suite.
Apr 7 13:14:42.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:14:42.857: INFO: namespace projected-8461 deletion completed in 6.093900056s
• [SLOW TEST:10.470 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:14:42.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 7 13:14:42.930: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Apr 7 13:14:45.011: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
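The quota interplay above (a quota of two pods, an RC asking for more, then scaling down to clear the failure condition) can be sketched as follows. The object names come from the log; the replica count and image are illustrative:

```yaml
# Quota that allows only two pods, as the test creates.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"
---
# RC asking for more pods than the quota allows; the rejected pod causes
# the RC to surface a ReplicaFailure condition until it is scaled down to 2.
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                     # illustrative count exceeding the quota
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1   # placeholder image
```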
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:14:46.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4496" for this suite.
Apr 7 13:14:52.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:14:52.382: INFO: namespace replication-controller-4496 deletion completed in 6.234709923s
• [SLOW TEST:9.524 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:14:52.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 7 13:14:52.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Apr 7 13:14:52.602: INFO: stderr: ""
Apr 7 13:14:52.602: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-04-05T10:39:42Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:14:52.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6816" for this suite.
Apr 7 13:14:58.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:14:58.693: INFO: namespace kubectl-6816 deletion completed in 6.087053565s
• [SLOW TEST:6.311 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:14:58.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:15:28.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3905" for this suite.
Apr 7 13:15:34.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:15:34.279: INFO: namespace container-runtime-3905 deletion completed in 6.087813846s
• [SLOW TEST:35.585 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:15:34.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Apr 7 13:15:34.352: INFO: Waiting up to 5m0s for pod "client-containers-c286075b-e885-491a-9d76-da33e2b289a0" in namespace "containers-6996" to be "success or failure"
Apr 7 13:15:34.362: INFO: Pod "client-containers-c286075b-e885-491a-9d76-da33e2b289a0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.867287ms
Apr 7 13:15:36.368: INFO: Pod "client-containers-c286075b-e885-491a-9d76-da33e2b289a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015759253s
Apr 7 13:15:38.373: INFO: Pod "client-containers-c286075b-e885-491a-9d76-da33e2b289a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020168571s
STEP: Saw pod success
Apr 7 13:15:38.373: INFO: Pod "client-containers-c286075b-e885-491a-9d76-da33e2b289a0" satisfied condition "success or failure"
Apr 7 13:15:38.376: INFO: Trying to get logs from node iruya-worker2 pod client-containers-c286075b-e885-491a-9d76-da33e2b289a0 container test-container:
STEP: delete the pod
Apr 7 13:15:38.410: INFO: Waiting for pod client-containers-c286075b-e885-491a-9d76-da33e2b289a0 to disappear
Apr 7 13:15:38.422: INFO: Pod client-containers-c286075b-e885-491a-9d76-da33e2b289a0 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:15:38.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6996" for this suite.
Apr 7 13:15:44.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:15:44.509: INFO: namespace containers-6996 deletion completed in 6.082922121s
• [SLOW TEST:10.229 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:15:44.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:15:44.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4878" for this suite.
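The QOS-class test above only submits a pod and checks the `status.qosClass` the apiserver derives from its resource settings. For example, equal requests and limits yield the Guaranteed class; the sketch below is illustrative and may differ from the test's actual pod spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo                  # illustrative name
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1   # placeholder image
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:
        cpu: 100m                 # requests == limits => qosClass: Guaranteed
        memory: 64Mi
```

Requests below limits would instead produce Burstable, and no resources at all BestEffort; the test verifies whichever class its spec implies before deleting the pod.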
Apr 7 13:16:06.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:16:06.738: INFO: namespace pods-4878 deletion completed in 22.144356432s
• [SLOW TEST:22.229 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:16:06.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 7 13:16:10.851: INFO: Waiting up to 5m0s for pod "client-envvars-1c2b9839-e2fe-45f8-beeb-e9856aaa6a12" in namespace "pods-1155" to be "success or failure"
Apr 7 13:16:10.890: INFO: Pod "client-envvars-1c2b9839-e2fe-45f8-beeb-e9856aaa6a12": Phase="Pending", Reason="", readiness=false. Elapsed: 39.25498ms
Apr 7 13:16:12.903: INFO: Pod "client-envvars-1c2b9839-e2fe-45f8-beeb-e9856aaa6a12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051848897s
Apr 7 13:16:14.906: INFO: Pod "client-envvars-1c2b9839-e2fe-45f8-beeb-e9856aaa6a12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055234596s
STEP: Saw pod success
Apr 7 13:16:14.906: INFO: Pod "client-envvars-1c2b9839-e2fe-45f8-beeb-e9856aaa6a12" satisfied condition "success or failure"
Apr 7 13:16:14.908: INFO: Trying to get logs from node iruya-worker pod client-envvars-1c2b9839-e2fe-45f8-beeb-e9856aaa6a12 container env3cont:
STEP: delete the pod
Apr 7 13:16:14.968: INFO: Waiting for pod client-envvars-1c2b9839-e2fe-45f8-beeb-e9856aaa6a12 to disappear
Apr 7 13:16:14.976: INFO: Pod client-envvars-1c2b9839-e2fe-45f8-beeb-e9856aaa6a12 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:16:14.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1155" for this suite.
Apr 7 13:16:52.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:16:53.099: INFO: namespace pods-1155 deletion completed in 38.118867738s
• [SLOW TEST:46.360 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:16:53.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-0bf94d8f-e5b3-4cb1-acdf-a3c8037cfce6
STEP: Creating secret with name s-test-opt-upd-226267bb-338c-42f3-9fc3-75385e918c37
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-0bf94d8f-e5b3-4cb1-acdf-a3c8037cfce6
STEP: Updating secret s-test-opt-upd-226267bb-338c-42f3-9fc3-75385e918c37
STEP: Creating secret with name s-test-opt-create-79fe056b-9f0e-4254-95db-45088ca8e0f8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:18:05.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7295" for this suite.
Apr 7 13:18:27.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:18:27.670: INFO: namespace projected-7295 deletion completed in 22.085260489s
• [SLOW TEST:94.571 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:18:27.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Apr 7 13:18:27.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-347'
Apr 7 13:18:27.965: INFO: stderr: ""
Apr 7 13:18:27.965: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
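The optional-updates test above mounts secrets through a projected volume with `optional: true`, so that deleting one source secret does not break the mount while updates to the others still propagate. A hand-written sketch of such a volume, using the secret names from the log (container image is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo     # illustrative name
spec:
  containers:
  - name: projected-secret-volume-test
    image: k8s.gcr.io/pause:3.1   # placeholder image
    volumeMounts:
    - name: projected-secrets
      mountPath: /etc/projected
  volumes:
  - name: projected-secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del-0bf94d8f-e5b3-4cb1-acdf-a3c8037cfce6
          optional: true          # mount survives deletion of this secret
      - secret:
          name: s-test-opt-upd-226267bb-338c-42f3-9fc3-75385e918c37
          optional: true
```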
Apr 7 13:18:27.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-347'
Apr 7 13:18:28.060: INFO: stderr: ""
Apr 7 13:18:28.060: INFO: stdout: "update-demo-nautilus-8hllr update-demo-nautilus-rsr94 "
Apr 7 13:18:28.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hllr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-347'
Apr 7 13:18:28.150: INFO: stderr: ""
Apr 7 13:18:28.150: INFO: stdout: ""
Apr 7 13:18:28.150: INFO: update-demo-nautilus-8hllr is created but not running
Apr 7 13:18:33.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-347'
Apr 7 13:18:33.252: INFO: stderr: ""
Apr 7 13:18:33.252: INFO: stdout: "update-demo-nautilus-8hllr update-demo-nautilus-rsr94 "
Apr 7 13:18:33.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hllr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-347'
Apr 7 13:18:33.341: INFO: stderr: ""
Apr 7 13:18:33.341: INFO: stdout: "true"
Apr 7 13:18:33.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hllr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-347'
Apr 7 13:18:33.433: INFO: stderr: ""
Apr 7 13:18:33.433: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 7 13:18:33.433: INFO: validating pod update-demo-nautilus-8hllr
Apr 7 13:18:33.437: INFO: got data: { "image": "nautilus.jpg" }
Apr 7 13:18:33.437: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 7 13:18:33.437: INFO: update-demo-nautilus-8hllr is verified up and running
Apr 7 13:18:33.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rsr94 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-347'
Apr 7 13:18:33.526: INFO: stderr: ""
Apr 7 13:18:33.526: INFO: stdout: "true"
Apr 7 13:18:33.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rsr94 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-347'
Apr 7 13:18:33.614: INFO: stderr: ""
Apr 7 13:18:33.614: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 7 13:18:33.614: INFO: validating pod update-demo-nautilus-rsr94
Apr 7 13:18:33.618: INFO: got data: { "image": "nautilus.jpg" }
Apr 7 13:18:33.618: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 7 13:18:33.618: INFO: update-demo-nautilus-rsr94 is verified up and running
STEP: scaling down the replication controller
Apr 7 13:18:33.620: INFO: scanned /root for discovery docs:
Apr 7 13:18:33.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-347'
Apr 7 13:18:34.753: INFO: stderr: ""
Apr 7 13:18:34.753: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 7 13:18:34.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-347'
Apr 7 13:18:34.843: INFO: stderr: ""
Apr 7 13:18:34.843: INFO: stdout: "update-demo-nautilus-8hllr update-demo-nautilus-rsr94 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Apr 7 13:18:39.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-347'
Apr 7 13:18:39.950: INFO: stderr: ""
Apr 7 13:18:39.951: INFO: stdout: "update-demo-nautilus-8hllr update-demo-nautilus-rsr94 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Apr 7 13:18:44.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-347'
Apr 7 13:18:45.044: INFO: stderr: ""
Apr 7 13:18:45.044: INFO: stdout: "update-demo-nautilus-8hllr "
Apr 7 13:18:45.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hllr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-347'
Apr 7 13:18:45.136: INFO: stderr: ""
Apr 7 13:18:45.136: INFO: stdout: "true"
Apr 7 13:18:45.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hllr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-347'
Apr 7 13:18:45.228: INFO: stderr: ""
Apr 7 13:18:45.228: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 7 13:18:45.228: INFO: validating pod update-demo-nautilus-8hllr
Apr 7 13:18:45.230: INFO: got data: { "image": "nautilus.jpg" }
Apr 7 13:18:45.231: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 7 13:18:45.231: INFO: update-demo-nautilus-8hllr is verified up and running
STEP: scaling up the replication controller
Apr 7 13:18:45.232: INFO: scanned /root for discovery docs:
Apr 7 13:18:45.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-347'
Apr 7 13:18:46.430: INFO: stderr: ""
Apr 7 13:18:46.430: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 7 13:18:46.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-347'
Apr 7 13:18:46.531: INFO: stderr: ""
Apr 7 13:18:46.531: INFO: stdout: "update-demo-nautilus-8hllr update-demo-nautilus-x8v2n "
Apr 7 13:18:46.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hllr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-347'
Apr 7 13:18:46.622: INFO: stderr: ""
Apr 7 13:18:46.622: INFO: stdout: "true"
Apr 7 13:18:46.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hllr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-347'
Apr 7 13:18:46.738: INFO: stderr: ""
Apr 7 13:18:46.738: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 7 13:18:46.738: INFO: validating pod update-demo-nautilus-8hllr
Apr 7 13:18:46.741: INFO: got data: { "image": "nautilus.jpg" }
Apr 7 13:18:46.741: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 7 13:18:46.742: INFO: update-demo-nautilus-8hllr is verified up and running
Apr 7 13:18:46.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x8v2n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-347'
Apr 7 13:18:46.831: INFO: stderr: ""
Apr 7 13:18:46.831: INFO: stdout: ""
Apr 7 13:18:46.831: INFO: update-demo-nautilus-x8v2n is created but not running
Apr 7 13:18:51.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-347'
Apr 7 13:18:51.931: INFO: stderr: ""
Apr 7 13:18:51.931: INFO: stdout: "update-demo-nautilus-8hllr update-demo-nautilus-x8v2n "
Apr 7 13:18:51.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hllr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists .
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-347' Apr 7 13:18:52.019: INFO: stderr: "" Apr 7 13:18:52.019: INFO: stdout: "true" Apr 7 13:18:52.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hllr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-347' Apr 7 13:18:52.110: INFO: stderr: "" Apr 7 13:18:52.110: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 7 13:18:52.110: INFO: validating pod update-demo-nautilus-8hllr Apr 7 13:18:52.112: INFO: got data: { "image": "nautilus.jpg" } Apr 7 13:18:52.112: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 7 13:18:52.112: INFO: update-demo-nautilus-8hllr is verified up and running Apr 7 13:18:52.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x8v2n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-347' Apr 7 13:18:52.210: INFO: stderr: "" Apr 7 13:18:52.210: INFO: stdout: "true" Apr 7 13:18:52.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x8v2n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-347' Apr 7 13:18:52.314: INFO: stderr: "" Apr 7 13:18:52.314: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 7 13:18:52.314: INFO: validating pod update-demo-nautilus-x8v2n Apr 7 13:18:52.318: INFO: got data: { "image": "nautilus.jpg" } Apr 7 13:18:52.318: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 7 13:18:52.318: INFO: update-demo-nautilus-x8v2n is verified up and running
STEP: using delete to clean up resources
Apr 7 13:18:52.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-347'
Apr 7 13:18:52.419: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 7 13:18:52.419: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 7 13:18:52.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-347'
Apr 7 13:18:52.517: INFO: stderr: "No resources found.\n"
Apr 7 13:18:52.517: INFO: stdout: ""
Apr 7 13:18:52.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-347 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 7 13:18:52.635: INFO: stderr: ""
Apr 7 13:18:52.635: INFO: stdout: "update-demo-nautilus-8hllr\nupdate-demo-nautilus-x8v2n\n"
Apr 7 13:18:53.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-347'
Apr 7 13:18:53.235: INFO: stderr: "No resources found.\n"
Apr 7 13:18:53.236: INFO: stdout: ""
Apr 7 13:18:53.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-347 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 7 13:18:53.463: INFO: stderr: ""
Apr 7 13:18:53.463: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:18:53.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-347" for this suite.
Apr 7 13:19:15.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:19:15.559: INFO: namespace kubectl-347 deletion completed in 22.089039626s
• [SLOW TEST:47.888 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:19:15.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0407 13:19:25.641414 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 7 13:19:25.641: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:19:25.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1102" for this suite.
Apr 7 13:19:31.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:19:31.777: INFO: namespace gc-1102 deletion completed in 6.131861232s
• [SLOW TEST:16.218 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:19:31.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Apr 7 13:19:31.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4357'
Apr 7 13:19:32.101: INFO: stderr: ""
Apr 7 13:19:32.101: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Apr 7 13:19:33.104: INFO: Selector matched 1 pods for map[app:redis]
Apr 7 13:19:33.104: INFO: Found 0 / 1
Apr 7 13:19:34.105: INFO: Selector matched 1 pods for map[app:redis]
Apr 7 13:19:34.105: INFO: Found 0 / 1
Apr 7 13:19:35.105: INFO: Selector matched 1 pods for map[app:redis]
Apr 7 13:19:35.105: INFO: Found 1 / 1
Apr 7 13:19:35.105: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 7 13:19:35.108: INFO: Selector matched 1 pods for map[app:redis]
Apr 7 13:19:35.108: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
Apr 7 13:19:35.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ntwb9 redis-master --namespace=kubectl-4357'
Apr 7 13:19:35.228: INFO: stderr: ""
Apr 7 13:19:35.228: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 07 Apr 13:19:34.381 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 Apr 13:19:34.381 # Server started, Redis version 3.2.12\n1:M 07 Apr 13:19:34.381 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 Apr 13:19:34.381 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Apr 7 13:19:35.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ntwb9 redis-master --namespace=kubectl-4357 --tail=1'
Apr 7 13:19:35.335: INFO: stderr: ""
Apr 7 13:19:35.335: INFO: stdout: "1:M 07 Apr 13:19:34.381 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Apr 7 13:19:35.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ntwb9 redis-master --namespace=kubectl-4357 --limit-bytes=1'
Apr 7 13:19:35.431: INFO: stderr: ""
Apr 7 13:19:35.431: INFO: stdout: " "
STEP: exposing timestamps
Apr 7 13:19:35.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ntwb9 redis-master --namespace=kubectl-4357 --tail=1 --timestamps'
Apr 7 13:19:35.531: INFO: stderr: ""
Apr 7 13:19:35.532: INFO: stdout: "2020-04-07T13:19:34.381629327Z 1:M 07 Apr 13:19:34.381 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Apr 7 13:19:38.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ntwb9 redis-master --namespace=kubectl-4357 --since=1s'
Apr 7 13:19:38.154: INFO: stderr: ""
Apr 7 13:19:38.154: INFO: stdout: ""
Apr 7 13:19:38.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ntwb9 redis-master --namespace=kubectl-4357 --since=24h'
Apr 7 13:19:38.249: INFO: stderr: ""
Apr 7 13:19:38.249: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 07 Apr 13:19:34.381 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 Apr 13:19:34.381 # Server started, Redis version 3.2.12\n1:M 07 Apr 13:19:34.381 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 Apr 13:19:34.381 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Apr 7 13:19:38.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4357'
Apr 7 13:19:38.357: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 7 13:19:38.357: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Apr 7 13:19:38.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-4357'
Apr 7 13:19:38.467: INFO: stderr: "No resources found.\n"
Apr 7 13:19:38.467: INFO: stdout: ""
Apr 7 13:19:38.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-4357 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 7 13:19:38.584: INFO: stderr: ""
Apr 7 13:19:38.584: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:19:38.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4357" for this suite.
Apr 7 13:20:00.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:20:00.689: INFO: namespace kubectl-4357 deletion completed in 22.100460356s
• [SLOW TEST:28.912 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:20:00.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-686129dd-1fcc-47d4-82ca-02d93fb62979
STEP: Creating a pod to test consume configMaps
Apr 7 13:20:00.781: INFO: Waiting up to 5m0s for pod "pod-configmaps-75392dc0-e928-4235-b53e-fe5345e868ce" in namespace "configmap-3429" to be "success or failure"
Apr 7 13:20:00.788: INFO: Pod "pod-configmaps-75392dc0-e928-4235-b53e-fe5345e868ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.347936ms
Apr 7 13:20:02.792: INFO: Pod "pod-configmaps-75392dc0-e928-4235-b53e-fe5345e868ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010704032s
Apr 7 13:20:04.797: INFO: Pod "pod-configmaps-75392dc0-e928-4235-b53e-fe5345e868ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01530443s
STEP: Saw pod success
Apr 7 13:20:04.797: INFO: Pod "pod-configmaps-75392dc0-e928-4235-b53e-fe5345e868ce" satisfied condition "success or failure"
Apr 7 13:20:04.800: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-75392dc0-e928-4235-b53e-fe5345e868ce container configmap-volume-test:
STEP: delete the pod
Apr 7 13:20:04.819: INFO: Waiting for pod pod-configmaps-75392dc0-e928-4235-b53e-fe5345e868ce to disappear
Apr 7 13:20:04.822: INFO: Pod pod-configmaps-75392dc0-e928-4235-b53e-fe5345e868ce no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:20:04.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3429" for this suite.
Apr 7 13:20:10.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:20:10.950: INFO: namespace configmap-3429 deletion completed in 6.125886353s
• [SLOW TEST:10.261 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:20:10.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:20:16.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5472" for this suite.
Apr 7 13:20:38.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:20:38.133: INFO: namespace replication-controller-5472 deletion completed in 22.09729943s
• [SLOW TEST:27.181 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:20:38.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 7 13:20:38.237: INFO: Waiting up to 5m0s for pod "downward-api-1b6c2a4d-2bf2-463e-a131-a24f48d82842" in namespace "downward-api-2289" to be "success or failure"
Apr 7 13:20:38.240: INFO: Pod "downward-api-1b6c2a4d-2bf2-463e-a131-a24f48d82842": Phase="Pending", Reason="", readiness=false. Elapsed: 2.533947ms
Apr 7 13:20:40.243: INFO: Pod "downward-api-1b6c2a4d-2bf2-463e-a131-a24f48d82842": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006138253s
Apr 7 13:20:42.248: INFO: Pod "downward-api-1b6c2a4d-2bf2-463e-a131-a24f48d82842": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010919102s
STEP: Saw pod success
Apr 7 13:20:42.248: INFO: Pod "downward-api-1b6c2a4d-2bf2-463e-a131-a24f48d82842" satisfied condition "success or failure"
Apr 7 13:20:42.252: INFO: Trying to get logs from node iruya-worker pod downward-api-1b6c2a4d-2bf2-463e-a131-a24f48d82842 container dapi-container:
STEP: delete the pod
Apr 7 13:20:42.280: INFO: Waiting for pod downward-api-1b6c2a4d-2bf2-463e-a131-a24f48d82842 to disappear
Apr 7 13:20:42.291: INFO: Pod downward-api-1b6c2a4d-2bf2-463e-a131-a24f48d82842 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:20:42.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2289" for this suite.
Apr 7 13:20:48.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:20:48.387: INFO: namespace downward-api-2289 deletion completed in 6.092930807s
• [SLOW TEST:10.254 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:20:48.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-45c7d86c-d07d-45ba-8ebc-7cae704ae91a in namespace container-probe-8557
Apr 7 13:20:52.469: INFO: Started pod busybox-45c7d86c-d07d-45ba-8ebc-7cae704ae91a in namespace container-probe-8557
STEP: checking the pod's current state and verifying that restartCount is present
Apr 7 13:20:52.472: INFO: Initial restart count of pod busybox-45c7d86c-d07d-45ba-8ebc-7cae704ae91a is 0
Apr 7 13:21:44.586: INFO: Restart count of pod container-probe-8557/busybox-45c7d86c-d07d-45ba-8ebc-7cae704ae91a is now 1 (52.11339131s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:21:44.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8557" for this suite.
Apr 7 13:21:50.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:21:50.719: INFO: namespace container-probe-8557 deletion completed in 6.109287723s
• [SLOW TEST:62.331 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:21:50.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 7 13:21:55.356: INFO: Successfully updated pod "labelsupdate3b4360ba-ccc5-48c1-a876-8310c84d07f0"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:21:57.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8995" for this suite.
Apr 7 13:22:19.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:22:19.470: INFO: namespace downward-api-8995 deletion completed in 22.085936188s
• [SLOW TEST:28.751 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:22:19.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 7 13:22:19.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3455'
Apr 7 13:22:21.906: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 7 13:22:21.906: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: rolling-update to same image controller
Apr 7 13:22:21.935: INFO: scanned /root for discovery docs:
Apr 7 13:22:21.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3455'
Apr 7 13:22:37.812: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Apr 7 13:22:37.812: INFO: stdout: "Created e2e-test-nginx-rc-ccf6065b8551ef04e32870791b9c90f6\nScaling up e2e-test-nginx-rc-ccf6065b8551ef04e32870791b9c90f6 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-ccf6065b8551ef04e32870791b9c90f6 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-ccf6065b8551ef04e32870791b9c90f6 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Apr 7 13:22:37.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3455'
Apr 7 13:22:37.913: INFO: stderr: ""
Apr 7 13:22:37.913: INFO: stdout: "e2e-test-nginx-rc-ccf6065b8551ef04e32870791b9c90f6-2bk29 "
Apr 7 13:22:37.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ccf6065b8551ef04e32870791b9c90f6-2bk29 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3455'
Apr 7 13:22:38.000: INFO: stderr: ""
Apr 7 13:22:38.000: INFO: stdout: "true"
Apr 7 13:22:38.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ccf6065b8551ef04e32870791b9c90f6-2bk29 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3455' Apr 7 13:22:38.098: INFO: stderr: "" Apr 7 13:22:38.098: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Apr 7 13:22:38.098: INFO: e2e-test-nginx-rc-ccf6065b8551ef04e32870791b9c90f6-2bk29 is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Apr 7 13:22:38.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3455' Apr 7 13:22:38.211: INFO: stderr: "" Apr 7 13:22:38.211: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:22:38.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3455" for this suite. 
Apr 7 13:22:44.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:22:44.329: INFO: namespace kubectl-3455 deletion completed in 6.098071725s
• [SLOW TEST:24.859 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:22:44.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 7 13:22:44.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-2154'
Apr 7 
13:22:44.560: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 7 13:22:44.560: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Apr 7 13:22:46.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-2154' Apr 7 13:22:46.786: INFO: stderr: "" Apr 7 13:22:46.786: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:22:46.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2154" for this suite. 
Apr 7 13:24:46.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:24:46.919: INFO: namespace kubectl-2154 deletion completed in 2m0.122290392s
• [SLOW TEST:122.590 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:24:46.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Apr 7 13:24:46.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1897 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Apr 7 13:24:50.846: INFO: stderr: "kubectl run 
--generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0407 13:24:50.768654 1312 log.go:172] (0xc000a0c210) (0xc00069e280) Create stream\nI0407 13:24:50.768738 1312 log.go:172] (0xc000a0c210) (0xc00069e280) Stream added, broadcasting: 1\nI0407 13:24:50.771696 1312 log.go:172] (0xc000a0c210) Reply frame received for 1\nI0407 13:24:50.771746 1312 log.go:172] (0xc000a0c210) (0xc0003aa000) Create stream\nI0407 13:24:50.771760 1312 log.go:172] (0xc000a0c210) (0xc0003aa000) Stream added, broadcasting: 3\nI0407 13:24:50.772801 1312 log.go:172] (0xc000a0c210) Reply frame received for 3\nI0407 13:24:50.772855 1312 log.go:172] (0xc000a0c210) (0xc00069e320) Create stream\nI0407 13:24:50.772869 1312 log.go:172] (0xc000a0c210) (0xc00069e320) Stream added, broadcasting: 5\nI0407 13:24:50.774072 1312 log.go:172] (0xc000a0c210) Reply frame received for 5\nI0407 13:24:50.774136 1312 log.go:172] (0xc000a0c210) (0xc00069e3c0) Create stream\nI0407 13:24:50.774165 1312 log.go:172] (0xc000a0c210) (0xc00069e3c0) Stream added, broadcasting: 7\nI0407 13:24:50.775275 1312 log.go:172] (0xc000a0c210) Reply frame received for 7\nI0407 13:24:50.775488 1312 log.go:172] (0xc0003aa000) (3) Writing data frame\nI0407 13:24:50.775652 1312 log.go:172] (0xc0003aa000) (3) Writing data frame\nI0407 13:24:50.776514 1312 log.go:172] (0xc000a0c210) Data frame received for 5\nI0407 13:24:50.776539 1312 log.go:172] (0xc00069e320) (5) Data frame handling\nI0407 13:24:50.776556 1312 log.go:172] (0xc00069e320) (5) Data frame sent\nI0407 13:24:50.777461 1312 log.go:172] (0xc000a0c210) Data frame received for 5\nI0407 13:24:50.777481 1312 log.go:172] (0xc00069e320) (5) Data frame handling\nI0407 13:24:50.777504 1312 log.go:172] (0xc00069e320) (5) Data frame sent\nI0407 13:24:50.823029 1312 log.go:172] (0xc000a0c210) Data frame received for 5\nI0407 13:24:50.823082 1312 
log.go:172] (0xc00069e320) (5) Data frame handling\nI0407 13:24:50.823110 1312 log.go:172] (0xc000a0c210) Data frame received for 7\nI0407 13:24:50.823125 1312 log.go:172] (0xc00069e3c0) (7) Data frame handling\nI0407 13:24:50.823236 1312 log.go:172] (0xc000a0c210) Data frame received for 1\nI0407 13:24:50.823273 1312 log.go:172] (0xc00069e280) (1) Data frame handling\nI0407 13:24:50.823301 1312 log.go:172] (0xc00069e280) (1) Data frame sent\nI0407 13:24:50.823343 1312 log.go:172] (0xc000a0c210) (0xc00069e280) Stream removed, broadcasting: 1\nI0407 13:24:50.823429 1312 log.go:172] (0xc000a0c210) (0xc00069e280) Stream removed, broadcasting: 1\nI0407 13:24:50.823462 1312 log.go:172] (0xc000a0c210) (0xc0003aa000) Stream removed, broadcasting: 3\nI0407 13:24:50.823499 1312 log.go:172] (0xc000a0c210) (0xc00069e320) Stream removed, broadcasting: 5\nI0407 13:24:50.823888 1312 log.go:172] (0xc000a0c210) (0xc00069e3c0) Stream removed, broadcasting: 7\nI0407 13:24:50.823998 1312 log.go:172] (0xc000a0c210) Go away received\n" Apr 7 13:24:50.846: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:24:52.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1897" for this suite. 
Apr 7 13:24:58.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:24:58.952: INFO: namespace kubectl-1897 deletion completed in 6.096714544s
• [SLOW TEST:12.033 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:24:58.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 7 13:24:58.979: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Apr 7 13:24:59.014: INFO: Pod name sample-pod: Found 0 pods out of 1
Apr 7 13:25:04.018: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 7 13:25:04.018: INFO: Creating deployment "test-rolling-update-deployment"
Apr 7 13:25:04.022: INFO: Ensuring deployment 
"test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 7 13:25:04.029: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 7 13:25:06.036: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 7 13:25:06.039: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721862704, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721862704, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721862704, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721862704, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 7 13:25:08.044: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 7 13:25:08.054: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-2558,SelfLink:/apis/apps/v1/namespaces/deployment-2558/deployments/test-rolling-update-deployment,UID:12bd9ffc-6cb2-45a3-ab84-b450dd6818a2,ResourceVersion:4124547,Generation:1,CreationTimestamp:2020-04-07 13:25:04 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-07 13:25:04 +0000 UTC 2020-04-07 13:25:04 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-07 13:25:06 +0000 UTC 2020-04-07 13:25:04 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 7 13:25:08.058: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-2558,SelfLink:/apis/apps/v1/namespaces/deployment-2558/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:8e87fb66-0a44-4a5a-9e6b-46ddf095f273,ResourceVersion:4124536,Generation:1,CreationTimestamp:2020-04-07 13:25:04 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 12bd9ffc-6cb2-45a3-ab84-b450dd6818a2 0xc0021deb37 0xc0021deb38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 7 13:25:08.058: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 7 13:25:08.058: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-2558,SelfLink:/apis/apps/v1/namespaces/deployment-2558/replicasets/test-rolling-update-controller,UID:a1570a6b-21d6-4248-9c97-2e1b9c2d442b,ResourceVersion:4124546,Generation:2,CreationTimestamp:2020-04-07 13:24:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 12bd9ffc-6cb2-45a3-ab84-b450dd6818a2 0xc0021dea67 0xc0021dea68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 7 13:25:08.061: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-w74tz" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-w74tz,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-2558,SelfLink:/api/v1/namespaces/deployment-2558/pods/test-rolling-update-deployment-79f6b9d75c-w74tz,UID:896fc73c-e3ac-4662-914f-26efbff3b253,ResourceVersion:4124535,Generation:0,CreationTimestamp:2020-04-07 13:25:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 8e87fb66-0a44-4a5a-9e6b-46ddf095f273 0xc0021df7d7 0xc0021df7d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c74fx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c74fx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-c74fx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021df8b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021df8d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 13:25:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 13:25:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 13:25:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 13:25:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.21,StartTime:2020-04-07 13:25:04 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-07 13:25:06 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://147f74a8335473b0d651a0650cb9ee5c72a0229d12e8359f01963e75e1155520}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:25:08.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-2558" for this suite.
Apr 7 13:25:14.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:25:14.171: INFO: namespace deployment-2558 deletion completed in 6.107392707s
• [SLOW TEST:15.218 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:25:14.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6108.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6108.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6108.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6108.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6108.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6108.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 7 13:25:21.511: INFO: DNS probes using dns-6108/dns-test-ba8065f7-1fb2-4696-a2b2-9492685232fb succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:25:21.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6108" for this suite. 
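The wheezy/jessie probe scripts above retry each lookup up to 600 times, writing an OK marker once a name resolves. A minimal Python sketch of that retry loop, with a stub resolver standing in for `getent`/`dig` (the `records` dict is hypothetical, not this cluster's real DNS data):

```python
import time

def probe(lookup, names, attempts=600, delay=0):
    """Retry each lookup until every name succeeds, recording which
    names resolved (mirrors the per-name OK markers in the probe pods)."""
    ok = set()
    for _ in range(attempts):
        for name in names:
            if name not in ok and lookup(name):
                ok.add(name)
        if ok == set(names):
            break
        time.sleep(delay)
    return ok

# Stub resolver in place of cluster DNS; values are placeholders.
records = {"dns-querier-1": "10.244.2.21"}
print(probe(records.get, ["dns-querier-1"]))
```

The real probes run `getent`/`dig` inside the test pods and the harness reads the `/results` files back; the sketch only shows the retry-until-resolved structure.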
Apr 7 13:25:27.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:25:27.683: INFO: namespace dns-6108 deletion completed in 6.133841633s
• [SLOW TEST:13.511 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:25:27.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:25:33.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-861" for this suite.
Apr 7 13:25:39.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:25:39.967: INFO: namespace namespaces-861 deletion completed in 6.086447935s STEP: Destroying namespace "nsdeletetest-6742" for this suite. Apr 7 13:25:39.969: INFO: Namespace nsdeletetest-6742 was already deleted STEP: Destroying namespace "nsdeletetest-287" for this suite. Apr 7 13:25:46.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:25:46.138: INFO: namespace nsdeletetest-287 deletion completed in 6.16914539s • [SLOW TEST:18.455 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:25:46.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-5457 STEP: waiting up to 3m0s for service multi-endpoint-test in 
namespace services-5457 to expose endpoints map[] Apr 7 13:25:46.276: INFO: successfully validated that service multi-endpoint-test in namespace services-5457 exposes endpoints map[] (24.019341ms elapsed) STEP: Creating pod pod1 in namespace services-5457 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5457 to expose endpoints map[pod1:[100]] Apr 7 13:25:50.346: INFO: successfully validated that service multi-endpoint-test in namespace services-5457 exposes endpoints map[pod1:[100]] (4.039620221s elapsed) STEP: Creating pod pod2 in namespace services-5457 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5457 to expose endpoints map[pod1:[100] pod2:[101]] Apr 7 13:25:54.412: INFO: successfully validated that service multi-endpoint-test in namespace services-5457 exposes endpoints map[pod1:[100] pod2:[101]] (4.062377392s elapsed) STEP: Deleting pod pod1 in namespace services-5457 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5457 to expose endpoints map[pod2:[101]] Apr 7 13:25:55.444: INFO: successfully validated that service multi-endpoint-test in namespace services-5457 exposes endpoints map[pod2:[101]] (1.027059958s elapsed) STEP: Deleting pod pod2 in namespace services-5457 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5457 to expose endpoints map[] Apr 7 13:25:56.461: INFO: successfully validated that service multi-endpoint-test in namespace services-5457 exposes endpoints map[] (1.011739428s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:25:56.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5457" for this suite. 
Apr 7 13:26:18.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:26:18.595: INFO: namespace services-5457 deletion completed in 22.08948698s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:32.456 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:26:18.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Apr 7 13:26:18.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5701' Apr 7 13:26:18.905: INFO: stderr: "" Apr 7 13:26:18.906: INFO: stdout: "pod/pause created\n" Apr 7 13:26:18.906: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 7 13:26:18.906: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5701" to be "running and ready" Apr 7 
13:26:18.908: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.811922ms Apr 7 13:26:20.912: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006474297s Apr 7 13:26:22.917: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.011127526s Apr 7 13:26:22.917: INFO: Pod "pause" satisfied condition "running and ready" Apr 7 13:26:22.917: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Apr 7 13:26:22.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5701' Apr 7 13:26:23.011: INFO: stderr: "" Apr 7 13:26:23.011: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 7 13:26:23.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5701' Apr 7 13:26:23.112: INFO: stderr: "" Apr 7 13:26:23.112: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 7 13:26:23.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5701' Apr 7 13:26:23.211: INFO: stderr: "" Apr 7 13:26:23.211: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 7 13:26:23.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5701' Apr 7 13:26:23.303: INFO: stderr: "" Apr 7 13:26:23.303: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 
Running 0 5s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Apr 7 13:26:23.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5701' Apr 7 13:26:23.447: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 7 13:26:23.447: INFO: stdout: "pod \"pause\" force deleted\n" Apr 7 13:26:23.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5701' Apr 7 13:26:23.544: INFO: stderr: "No resources found.\n" Apr 7 13:26:23.544: INFO: stdout: "" Apr 7 13:26:23.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5701 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 7 13:26:23.687: INFO: stderr: "" Apr 7 13:26:23.687: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:26:23.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5701" for this suite. 
Apr 7 13:26:29.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:26:29.847: INFO: namespace kubectl-5701 deletion completed in 6.121686802s • [SLOW TEST:11.252 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:26:29.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 7 13:26:29.923: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ce99478f-575f-44fd-872a-b16ea4f8fea0" in namespace "projected-5149" to be "success or failure" Apr 7 13:26:29.940: INFO: Pod 
"downwardapi-volume-ce99478f-575f-44fd-872a-b16ea4f8fea0": Phase="Pending", Reason="", readiness=false. Elapsed: 17.451055ms Apr 7 13:26:31.946: INFO: Pod "downwardapi-volume-ce99478f-575f-44fd-872a-b16ea4f8fea0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023035142s Apr 7 13:26:33.950: INFO: Pod "downwardapi-volume-ce99478f-575f-44fd-872a-b16ea4f8fea0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027265377s STEP: Saw pod success Apr 7 13:26:33.950: INFO: Pod "downwardapi-volume-ce99478f-575f-44fd-872a-b16ea4f8fea0" satisfied condition "success or failure" Apr 7 13:26:33.954: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-ce99478f-575f-44fd-872a-b16ea4f8fea0 container client-container: STEP: delete the pod Apr 7 13:26:33.989: INFO: Waiting for pod downwardapi-volume-ce99478f-575f-44fd-872a-b16ea4f8fea0 to disappear Apr 7 13:26:34.017: INFO: Pod downwardapi-volume-ce99478f-575f-44fd-872a-b16ea4f8fea0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:26:34.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5149" for this suite. 
Apr 7 13:26:40.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:26:40.112: INFO: namespace projected-5149 deletion completed in 6.090192191s • [SLOW TEST:10.264 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:26:40.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 7 13:26:48.226: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 7 13:26:48.245: INFO: Pod pod-with-poststart-http-hook still exists Apr 7 13:26:50.246: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 7 13:26:50.250: INFO: Pod pod-with-poststart-http-hook still exists Apr 7 13:26:52.246: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 7 13:26:52.269: INFO: Pod pod-with-poststart-http-hook still exists Apr 7 13:26:54.246: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 7 13:26:54.250: INFO: Pod pod-with-poststart-http-hook still exists Apr 7 13:26:56.246: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 7 13:26:56.250: INFO: Pod pod-with-poststart-http-hook still exists Apr 7 13:26:58.246: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 7 13:26:58.250: INFO: Pod pod-with-poststart-http-hook still exists Apr 7 13:27:00.246: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 7 13:27:00.251: INFO: Pod pod-with-poststart-http-hook still exists Apr 7 13:27:02.246: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 7 13:27:02.250: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:27:02.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4967" for this suite. 
Apr 7 13:27:24.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:27:24.351: INFO: namespace container-lifecycle-hook-4967 deletion completed in 22.096264577s • [SLOW TEST:44.239 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:27:24.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 7 13:27:24.412: INFO: Waiting up to 5m0s for pod "pod-1c48fef4-90a8-474e-83e8-b587c5619003" in namespace "emptydir-4419" to be "success or failure" Apr 7 13:27:24.424: INFO: Pod "pod-1c48fef4-90a8-474e-83e8-b587c5619003": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.958609ms Apr 7 13:27:26.427: INFO: Pod "pod-1c48fef4-90a8-474e-83e8-b587c5619003": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015730618s Apr 7 13:27:28.432: INFO: Pod "pod-1c48fef4-90a8-474e-83e8-b587c5619003": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020215985s STEP: Saw pod success Apr 7 13:27:28.432: INFO: Pod "pod-1c48fef4-90a8-474e-83e8-b587c5619003" satisfied condition "success or failure" Apr 7 13:27:28.435: INFO: Trying to get logs from node iruya-worker2 pod pod-1c48fef4-90a8-474e-83e8-b587c5619003 container test-container: STEP: delete the pod Apr 7 13:27:28.454: INFO: Waiting for pod pod-1c48fef4-90a8-474e-83e8-b587c5619003 to disappear Apr 7 13:27:28.473: INFO: Pod pod-1c48fef4-90a8-474e-83e8-b587c5619003 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:27:28.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4419" for this suite. 
Apr 7 13:27:34.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:27:34.589: INFO: namespace emptydir-4419 deletion completed in 6.11361687s • [SLOW TEST:10.238 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:27:34.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 7 13:27:34.665: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9782,SelfLink:/api/v1/namespaces/watch-9782/configmaps/e2e-watch-test-watch-closed,UID:51241b48-dc56-42a8-ae5e-6aa80820da72,ResourceVersion:4125108,Generation:0,CreationTimestamp:2020-04-07 13:27:34 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 7 13:27:34.665: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9782,SelfLink:/api/v1/namespaces/watch-9782/configmaps/e2e-watch-test-watch-closed,UID:51241b48-dc56-42a8-ae5e-6aa80820da72,ResourceVersion:4125109,Generation:0,CreationTimestamp:2020-04-07 13:27:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 7 13:27:34.675: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9782,SelfLink:/api/v1/namespaces/watch-9782/configmaps/e2e-watch-test-watch-closed,UID:51241b48-dc56-42a8-ae5e-6aa80820da72,ResourceVersion:4125110,Generation:0,CreationTimestamp:2020-04-07 13:27:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 7 
13:27:34.675: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9782,SelfLink:/api/v1/namespaces/watch-9782/configmaps/e2e-watch-test-watch-closed,UID:51241b48-dc56-42a8-ae5e-6aa80820da72,ResourceVersion:4125111,Generation:0,CreationTimestamp:2020-04-07 13:27:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:27:34.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9782" for this suite. Apr 7 13:27:40.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:27:40.769: INFO: namespace watch-9782 deletion completed in 6.089274921s • [SLOW TEST:6.180 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 
13:27:40.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 7 13:27:40.841: INFO: Waiting up to 5m0s for pod "pod-ec79cbb4-259f-4663-9f3a-7884aed28ad3" in namespace "emptydir-6716" to be "success or failure" Apr 7 13:27:40.848: INFO: Pod "pod-ec79cbb4-259f-4663-9f3a-7884aed28ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.211703ms Apr 7 13:27:42.851: INFO: Pod "pod-ec79cbb4-259f-4663-9f3a-7884aed28ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010091318s Apr 7 13:27:44.856: INFO: Pod "pod-ec79cbb4-259f-4663-9f3a-7884aed28ad3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014813226s STEP: Saw pod success Apr 7 13:27:44.856: INFO: Pod "pod-ec79cbb4-259f-4663-9f3a-7884aed28ad3" satisfied condition "success or failure" Apr 7 13:27:44.860: INFO: Trying to get logs from node iruya-worker2 pod pod-ec79cbb4-259f-4663-9f3a-7884aed28ad3 container test-container: STEP: delete the pod Apr 7 13:27:44.894: INFO: Waiting for pod pod-ec79cbb4-259f-4663-9f3a-7884aed28ad3 to disappear Apr 7 13:27:44.898: INFO: Pod pod-ec79cbb4-259f-4663-9f3a-7884aed28ad3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:27:44.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6716" for this suite. 
Apr 7 13:27:50.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:27:50.986: INFO: namespace emptydir-6716 deletion completed in 6.085335484s • [SLOW TEST:10.216 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:27:50.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 7 13:27:51.072: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 7 13:27:56.076: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 7 13:27:56.077: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 7 13:27:56.103: INFO: Deployment "test-cleanup-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-2881,SelfLink:/apis/apps/v1/namespaces/deployment-2881/deployments/test-cleanup-deployment,UID:d34a10cc-e68e-4946-a8a2-65394fb1e022,ResourceVersion:4125195,Generation:1,CreationTimestamp:2020-04-07 13:27:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Apr 7 13:27:56.118: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-2881,SelfLink:/apis/apps/v1/namespaces/deployment-2881/replicasets/test-cleanup-deployment-55bbcbc84c,UID:cd3153bb-457e-4604-927f-df0417364adc,ResourceVersion:4125197,Generation:1,CreationTimestamp:2020-04-07 13:27:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 
d34a10cc-e68e-4946-a8a2-65394fb1e022 0xc002f96277 0xc002f96278}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 7 13:27:56.118: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 7 13:27:56.118: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-2881,SelfLink:/apis/apps/v1/namespaces/deployment-2881/replicasets/test-cleanup-controller,UID:c79d952a-cd1e-48d7-878c-62bf1f54ee8c,ResourceVersion:4125196,Generation:1,CreationTimestamp:2020-04-07 13:27:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment d34a10cc-e68e-4946-a8a2-65394fb1e022 0xc002f961a7 0xc002f961a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 7 13:27:56.154: INFO: Pod "test-cleanup-controller-jgrm7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-jgrm7,GenerateName:test-cleanup-controller-,Namespace:deployment-2881,SelfLink:/api/v1/namespaces/deployment-2881/pods/test-cleanup-controller-jgrm7,UID:d9e52d89-b51e-40f8-a24c-fb344b1ddb82,ResourceVersion:4125190,Generation:0,CreationTimestamp:2020-04-07 13:27:51 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller c79d952a-cd1e-48d7-878c-62bf1f54ee8c 0xc0026705f7 0xc0026705f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wgclw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wgclw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wgclw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002670670} {node.kubernetes.io/unreachable Exists NoExecute 0xc002670690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 13:27:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 13:27:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 13:27:54 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-04-07 13:27:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.27,StartTime:2020-04-07 13:27:51 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-07 13:27:53 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0bde7795a2a9b04a2f37787811bff273a6d12a7c807c4811bdc98e094545c3e4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 13:27:56.154: INFO: Pod "test-cleanup-deployment-55bbcbc84c-mg4f7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-mg4f7,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-2881,SelfLink:/api/v1/namespaces/deployment-2881/pods/test-cleanup-deployment-55bbcbc84c-mg4f7,UID:afdd0d95-065f-4428-8637-1415022b80d5,ResourceVersion:4125202,Generation:0,CreationTimestamp:2020-04-07 13:27:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c cd3153bb-457e-4604-927f-df0417364adc 0xc002670777 0xc002670778}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wgclw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wgclw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-wgclw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026707f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002670810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 13:27:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:27:56.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2881" for this suite. 
Apr 7 13:28:02.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:28:02.353: INFO: namespace deployment-2881 deletion completed in 6.140165333s • [SLOW TEST:11.366 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:28:02.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Apr 7 13:28:02.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5418' Apr 7 13:28:02.662: INFO: stderr: "" Apr 7 13:28:02.662: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Apr 7 13:28:03.667: INFO: Selector matched 1 pods for map[app:redis] Apr 7 13:28:03.667: INFO: Found 0 / 1 Apr 7 13:28:04.667: INFO: Selector matched 1 pods for map[app:redis] Apr 7 13:28:04.667: INFO: Found 0 / 1 Apr 7 13:28:05.667: INFO: Selector matched 1 pods for map[app:redis] Apr 7 13:28:05.667: INFO: Found 1 / 1 Apr 7 13:28:05.667: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Apr 7 13:28:05.671: INFO: Selector matched 1 pods for map[app:redis] Apr 7 13:28:05.671: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 7 13:28:05.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-sfbxj --namespace=kubectl-5418 -p {"metadata":{"annotations":{"x":"y"}}}' Apr 7 13:28:05.772: INFO: stderr: "" Apr 7 13:28:05.772: INFO: stdout: "pod/redis-master-sfbxj patched\n" STEP: checking annotations Apr 7 13:28:05.775: INFO: Selector matched 1 pods for map[app:redis] Apr 7 13:28:05.775: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:28:05.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5418" for this suite. 
Apr 7 13:28:27.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:28:27.873: INFO: namespace kubectl-5418 deletion completed in 22.094115204s • [SLOW TEST:25.519 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:28:27.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 7 13:28:27.955: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:28:33.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6381" for this 
suite. Apr 7 13:28:56.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:28:56.094: INFO: namespace init-container-6381 deletion completed in 22.128449148s • [SLOW TEST:28.221 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:28:56.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-92a9115c-9ccd-40c4-9af0-25a445d07170 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-92a9115c-9ccd-40c4-9af0-25a445d07170 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:29:02.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2791" for this suite. 
Apr 7 13:29:24.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:29:24.323: INFO: namespace configmap-2791 deletion completed in 22.106099508s • [SLOW TEST:28.229 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:29:24.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-0644e1eb-e058-489b-9888-1d0cdd28a8cf STEP: Creating a pod to test consume secrets Apr 7 13:29:24.508: INFO: Waiting up to 5m0s for pod "pod-secrets-63f23f38-9079-411b-8fbb-d95bbf57c1ee" in namespace "secrets-7036" to be "success or failure" Apr 7 13:29:24.535: INFO: Pod "pod-secrets-63f23f38-9079-411b-8fbb-d95bbf57c1ee": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.17365ms Apr 7 13:29:26.542: INFO: Pod "pod-secrets-63f23f38-9079-411b-8fbb-d95bbf57c1ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033210466s Apr 7 13:29:28.546: INFO: Pod "pod-secrets-63f23f38-9079-411b-8fbb-d95bbf57c1ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037980846s STEP: Saw pod success Apr 7 13:29:28.547: INFO: Pod "pod-secrets-63f23f38-9079-411b-8fbb-d95bbf57c1ee" satisfied condition "success or failure" Apr 7 13:29:28.550: INFO: Trying to get logs from node iruya-worker pod pod-secrets-63f23f38-9079-411b-8fbb-d95bbf57c1ee container secret-volume-test: STEP: delete the pod Apr 7 13:29:28.577: INFO: Waiting for pod pod-secrets-63f23f38-9079-411b-8fbb-d95bbf57c1ee to disappear Apr 7 13:29:28.607: INFO: Pod pod-secrets-63f23f38-9079-411b-8fbb-d95bbf57c1ee no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:29:28.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7036" for this suite. Apr 7 13:29:34.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:29:34.704: INFO: namespace secrets-7036 deletion completed in 6.090595475s STEP: Destroying namespace "secret-namespace-6406" for this suite. 
Apr 7 13:29:40.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:29:40.826: INFO: namespace secret-namespace-6406 deletion completed in 6.121904241s • [SLOW TEST:16.503 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:29:40.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 7 13:29:40.882: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:29:45.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5425" for this suite. 
Apr 7 13:30:23.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:30:23.091: INFO: namespace pods-5425 deletion completed in 38.087200776s • [SLOW TEST:42.263 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:30:23.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Apr 7 13:30:23.168: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6124" to be "success or failure" Apr 7 13:30:23.178: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.787496ms Apr 7 13:30:25.182: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013633136s Apr 7 13:30:27.186: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.01805661s Apr 7 13:30:29.190: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022175068s STEP: Saw pod success Apr 7 13:30:29.190: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Apr 7 13:30:29.193: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Apr 7 13:30:29.223: INFO: Waiting for pod pod-host-path-test to disappear Apr 7 13:30:29.237: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:30:29.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-6124" for this suite. Apr 7 13:30:35.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:30:35.349: INFO: namespace hostpath-6124 deletion completed in 6.10788375s • [SLOW TEST:12.258 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:30:35.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned 
in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 7 13:30:35.410: INFO: Waiting up to 5m0s for pod "pod-44203555-e57f-4c54-bdce-9dd971e8bc85" in namespace "emptydir-6727" to be "success or failure" Apr 7 13:30:35.413: INFO: Pod "pod-44203555-e57f-4c54-bdce-9dd971e8bc85": Phase="Pending", Reason="", readiness=false. Elapsed: 3.318298ms Apr 7 13:30:37.530: INFO: Pod "pod-44203555-e57f-4c54-bdce-9dd971e8bc85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120273721s Apr 7 13:30:39.534: INFO: Pod "pod-44203555-e57f-4c54-bdce-9dd971e8bc85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.124221728s STEP: Saw pod success Apr 7 13:30:39.534: INFO: Pod "pod-44203555-e57f-4c54-bdce-9dd971e8bc85" satisfied condition "success or failure" Apr 7 13:30:39.537: INFO: Trying to get logs from node iruya-worker pod pod-44203555-e57f-4c54-bdce-9dd971e8bc85 container test-container: STEP: delete the pod Apr 7 13:30:39.578: INFO: Waiting for pod pod-44203555-e57f-4c54-bdce-9dd971e8bc85 to disappear Apr 7 13:30:39.580: INFO: Pod pod-44203555-e57f-4c54-bdce-9dd971e8bc85 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:30:39.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6727" for this suite. 
Apr 7 13:30:45.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:30:45.669: INFO: namespace emptydir-6727 deletion completed in 6.083098459s • [SLOW TEST:10.320 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:30:45.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 7 13:30:50.259: INFO: Successfully updated pod "labelsupdatef5782a12-c2c0-47b0-b422-8fe8237aa303" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:30:52.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-789" for this suite. 
Apr 7 13:31:14.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:31:14.390: INFO: namespace projected-789 deletion completed in 22.107260033s
• [SLOW TEST:28.720 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:31:14.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-58bedef7-e2c9-43fc-9fbe-916c1575eef0
STEP: Creating a pod to test consume secrets
Apr 7 13:31:14.476: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-36307431-a4af-4d0f-b217-9a6df6a3ddbd" in namespace "projected-8791" to be "success or failure"
Apr 7 13:31:14.483: INFO: Pod "pod-projected-secrets-36307431-a4af-4d0f-b217-9a6df6a3ddbd": Phase="Pending", Reason="", readiness=false. Elapsed: 7.027848ms
Apr 7 13:31:16.488: INFO: Pod "pod-projected-secrets-36307431-a4af-4d0f-b217-9a6df6a3ddbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011321724s
Apr 7 13:31:18.492: INFO: Pod "pod-projected-secrets-36307431-a4af-4d0f-b217-9a6df6a3ddbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015824724s
STEP: Saw pod success
Apr 7 13:31:18.492: INFO: Pod "pod-projected-secrets-36307431-a4af-4d0f-b217-9a6df6a3ddbd" satisfied condition "success or failure"
Apr 7 13:31:18.495: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-36307431-a4af-4d0f-b217-9a6df6a3ddbd container projected-secret-volume-test:
STEP: delete the pod
Apr 7 13:31:18.517: INFO: Waiting for pod pod-projected-secrets-36307431-a4af-4d0f-b217-9a6df6a3ddbd to disappear
Apr 7 13:31:18.535: INFO: Pod pod-projected-secrets-36307431-a4af-4d0f-b217-9a6df6a3ddbd no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:31:18.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8791" for this suite.
Apr 7 13:31:24.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:31:24.638: INFO: namespace projected-8791 deletion completed in 6.100093657s
• [SLOW TEST:10.248 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:31:24.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 7 13:31:29.249: INFO: Successfully updated pod "pod-update-activedeadlineseconds-a95a2911-cb68-41d9-9cf7-63e7374e9001"
Apr 7 13:31:29.249: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-a95a2911-cb68-41d9-9cf7-63e7374e9001" in namespace "pods-5573" to be "terminated due to deadline exceeded"
Apr 7 13:31:29.264: INFO: Pod "pod-update-activedeadlineseconds-a95a2911-cb68-41d9-9cf7-63e7374e9001": Phase="Running", Reason="", readiness=true. Elapsed: 14.709811ms
Apr 7 13:31:31.267: INFO: Pod "pod-update-activedeadlineseconds-a95a2911-cb68-41d9-9cf7-63e7374e9001": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.017813167s
Apr 7 13:31:31.267: INFO: Pod "pod-update-activedeadlineseconds-a95a2911-cb68-41d9-9cf7-63e7374e9001" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:31:31.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5573" for this suite.
Apr 7 13:31:37.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:31:37.409: INFO: namespace pods-5573 deletion completed in 6.139834448s
• [SLOW TEST:12.771 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:31:37.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 7 13:31:37.486: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:31:38.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8395" for this suite.
Apr 7 13:31:44.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:31:44.642: INFO: namespace custom-resource-definition-8395 deletion completed in 6.103215129s
• [SLOW TEST:7.232 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Simple CustomResourceDefinition
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:31:44.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Apr 7 13:31:44.713: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix609210014/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:31:44.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-926" for this suite.
Apr 7 13:31:50.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:31:50.886: INFO: namespace kubectl-926 deletion completed in 6.093391618s
• [SLOW TEST:6.244 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:31:50.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Apr 7 13:31:50.950: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:31:58.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9459" for this suite.
Apr 7 13:32:04.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:32:04.706: INFO: namespace init-container-9459 deletion completed in 6.096090431s
• [SLOW TEST:13.819 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:32:04.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-xj9mq in namespace proxy-1085
I0407 13:32:04.780878 6 runners.go:180] Created replication controller with name: proxy-service-xj9mq, namespace: proxy-1085, replica count: 1
I0407 13:32:05.831311 6 runners.go:180] proxy-service-xj9mq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0407 13:32:06.831498 6 runners.go:180] proxy-service-xj9mq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0407 13:32:07.831682 6 runners.go:180] proxy-service-xj9mq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0407 13:32:08.831913 6 runners.go:180] proxy-service-xj9mq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0407 13:32:09.832176 6 runners.go:180] proxy-service-xj9mq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0407 13:32:10.832386 6 runners.go:180] proxy-service-xj9mq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0407 13:32:11.832664 6 runners.go:180] proxy-service-xj9mq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0407 13:32:12.832897 6 runners.go:180] proxy-service-xj9mq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0407 13:32:13.833102 6 runners.go:180] proxy-service-xj9mq Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 7 13:32:13.836: INFO: setup took 9.080371667s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Apr 7 13:32:13.844: INFO: (0) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz/proxy/: test (200; 7.238034ms)
Apr 7 13:32:13.844: INFO: (0) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 7.332814ms)
Apr 7 13:32:13.844: INFO: (0) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 7.367423ms)
Apr 7 13:32:13.844: INFO: (0) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 7.571769ms)
Apr 7 13:32:13.844: INFO: (0) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname2/proxy/: bar (200; 7.596564ms)
Apr 7 13:32:13.844: INFO: (0) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 8.084002ms)
Apr 7 13:32:13.845: INFO: (0) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:1080/proxy/: ... (200; 7.979988ms)
Apr 7 13:32:13.845: INFO: (0) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname1/proxy/: foo (200; 8.622491ms)
Apr 7 13:32:13.845: INFO: (0) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname2/proxy/: bar (200; 8.581933ms)
Apr 7 13:32:13.845: INFO: (0) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:1080/proxy/: test<... (200; 8.589323ms)
Apr 7 13:32:13.847: INFO: (0) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname1/proxy/: foo (200; 10.02726ms)
Apr 7 13:32:13.851: INFO: (0) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:460/proxy/: tls baz (200; 14.157142ms)
Apr 7 13:32:13.851: INFO: (0) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:443/proxy/: test (200; 27.381343ms)
Apr 7 13:32:13.880: INFO: (1) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:1080/proxy/: ... (200; 27.318102ms)
Apr 7 13:32:13.880: INFO: (1) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 27.34582ms)
Apr 7 13:32:13.880: INFO: (1) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:462/proxy/: tls qux (200; 27.435516ms)
Apr 7 13:32:13.880: INFO: (1) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:443/proxy/: test<... (200; 27.547416ms)
Apr 7 13:32:13.880: INFO: (1) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 27.989139ms)
Apr 7 13:32:13.880: INFO: (1) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 27.978441ms)
Apr 7 13:32:13.880: INFO: (1) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 28.12469ms)
Apr 7 13:32:13.880: INFO: (1) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:460/proxy/: tls baz (200; 28.106063ms)
Apr 7 13:32:13.880: INFO: (1) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname2/proxy/: bar (200; 28.220534ms)
Apr 7 13:32:13.881: INFO: (1) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname1/proxy/: tls baz (200; 28.474187ms)
Apr 7 13:32:13.881: INFO: (1) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname2/proxy/: bar (200; 28.667155ms)
Apr 7 13:32:13.881: INFO: (1) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname1/proxy/: foo (200; 28.736052ms)
Apr 7 13:32:13.881: INFO: (1) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname2/proxy/: tls qux (200; 29.0494ms)
Apr 7 13:32:13.881: INFO: (1) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname1/proxy/: foo (200; 29.280495ms)
Apr 7 13:32:13.887: INFO: (2) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:460/proxy/: tls baz (200; 5.33728ms)
Apr 7 13:32:13.888: INFO: (2) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname1/proxy/: foo (200; 6.397439ms)
Apr 7 13:32:13.888: INFO: (2) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:1080/proxy/: ... (200; 6.423427ms)
Apr 7 13:32:13.889: INFO: (2) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:1080/proxy/: test<... (200; 6.847388ms)
Apr 7 13:32:13.889: INFO: (2) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz/proxy/: test (200; 7.506809ms)
Apr 7 13:32:13.889: INFO: (2) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname1/proxy/: foo (200; 7.716271ms)
Apr 7 13:32:13.889: INFO: (2) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname1/proxy/: tls baz (200; 7.638567ms)
Apr 7 13:32:13.889: INFO: (2) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 7.578466ms)
Apr 7 13:32:13.889: INFO: (2) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname2/proxy/: bar (200; 7.640096ms)
Apr 7 13:32:13.889: INFO: (2) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:462/proxy/: tls qux (200; 7.680541ms)
Apr 7 13:32:13.889: INFO: (2) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname2/proxy/: bar (200; 7.733046ms)
Apr 7 13:32:13.889: INFO: (2) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname2/proxy/: tls qux (200; 7.61454ms)
Apr 7 13:32:13.889: INFO: (2) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 7.665852ms)
Apr 7 13:32:13.889: INFO: (2) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 7.948559ms)
Apr 7 13:32:13.889: INFO: (2) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 7.904941ms)
Apr 7 13:32:13.889: INFO: (2) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:443/proxy/: ... (200; 4.918758ms)
Apr 7 13:32:13.895: INFO: (3) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 4.995998ms)
Apr 7 13:32:13.895: INFO: (3) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz/proxy/: test (200; 5.046791ms)
Apr 7 13:32:13.895: INFO: (3) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname1/proxy/: foo (200; 5.111302ms)
Apr 7 13:32:13.895: INFO: (3) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname2/proxy/: bar (200; 5.303788ms)
Apr 7 13:32:13.895: INFO: (3) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname2/proxy/: tls qux (200; 5.303731ms)
Apr 7 13:32:13.895: INFO: (3) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 5.404458ms)
Apr 7 13:32:13.895: INFO: (3) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname1/proxy/: tls baz (200; 5.355314ms)
Apr 7 13:32:13.895: INFO: (3) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:462/proxy/: tls qux (200; 5.549304ms)
Apr 7 13:32:13.895: INFO: (3) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:460/proxy/: tls baz (200; 5.388439ms)
Apr 7 13:32:13.895: INFO: (3) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname2/proxy/: bar (200; 5.792309ms)
Apr 7 13:32:13.895: INFO: (3) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 5.819684ms)
Apr 7 13:32:13.895: INFO: (3) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 5.748717ms)
Apr 7 13:32:13.895: INFO: (3) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:1080/proxy/: test<... (200; 5.833933ms)
Apr 7 13:32:13.896: INFO: (3) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:443/proxy/: test<... (200; 3.964993ms)
Apr 7 13:32:13.900: INFO: (4) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:443/proxy/: test (200; 5.05051ms)
Apr 7 13:32:13.901: INFO: (4) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname1/proxy/: foo (200; 5.439523ms)
Apr 7 13:32:13.901: INFO: (4) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname2/proxy/: bar (200; 5.316906ms)
Apr 7 13:32:13.901: INFO: (4) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 5.407029ms)
Apr 7 13:32:13.901: INFO: (4) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 5.473775ms)
Apr 7 13:32:13.901: INFO: (4) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 5.466294ms)
Apr 7 13:32:13.901: INFO: (4) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname2/proxy/: tls qux (200; 5.412238ms)
Apr 7 13:32:13.902: INFO: (4) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:1080/proxy/: ... (200; 5.882388ms)
Apr 7 13:32:13.902: INFO: (4) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname2/proxy/: bar (200; 5.831316ms)
Apr 7 13:32:13.902: INFO: (4) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname1/proxy/: tls baz (200; 6.406297ms)
Apr 7 13:32:13.906: INFO: (5) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 3.588994ms)
Apr 7 13:32:13.906: INFO: (5) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:460/proxy/: tls baz (200; 3.601162ms)
Apr 7 13:32:13.906: INFO: (5) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:1080/proxy/: test<... (200; 3.611278ms)
Apr 7 13:32:13.906: INFO: (5) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 3.689853ms)
Apr 7 13:32:13.907: INFO: (5) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz/proxy/: test (200; 5.051637ms)
Apr 7 13:32:13.907: INFO: (5) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:462/proxy/: tls qux (200; 5.197282ms)
Apr 7 13:32:13.916: INFO: (5) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 13.542609ms)
Apr 7 13:32:13.920: INFO: (5) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 17.216366ms)
Apr 7 13:32:13.920: INFO: (5) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:443/proxy/: ... (200; 18.749265ms)
Apr 7 13:32:13.921: INFO: (5) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname1/proxy/: foo (200; 18.757295ms)
Apr 7 13:32:13.921: INFO: (5) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname2/proxy/: bar (200; 18.833445ms)
Apr 7 13:32:13.921: INFO: (5) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname1/proxy/: tls baz (200; 18.918674ms)
Apr 7 13:32:13.927: INFO: (6) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 4.848359ms)
Apr 7 13:32:13.927: INFO: (6) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz/proxy/: test (200; 4.659881ms)
Apr 7 13:32:13.927: INFO: (6) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 5.082481ms)
Apr 7 13:32:13.927: INFO: (6) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 4.631524ms)
Apr 7 13:32:13.927: INFO: (6) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:443/proxy/: ... (200; 5.115737ms)
Apr 7 13:32:13.927: INFO: (6) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:460/proxy/: tls baz (200; 5.393426ms)
Apr 7 13:32:13.927: INFO: (6) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:1080/proxy/: test<... (200; 4.670607ms)
Apr 7 13:32:13.928: INFO: (6) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname2/proxy/: bar (200; 5.596373ms)
Apr 7 13:32:13.928: INFO: (6) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname1/proxy/: foo (200; 6.470766ms)
Apr 7 13:32:13.929: INFO: (6) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname2/proxy/: bar (200; 5.873693ms)
Apr 7 13:32:13.929: INFO: (6) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname1/proxy/: foo (200; 6.008106ms)
Apr 7 13:32:13.929: INFO: (6) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname2/proxy/: tls qux (200; 7.242647ms)
Apr 7 13:32:13.929: INFO: (6) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 7.197576ms)
Apr 7 13:32:13.929: INFO: (6) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname1/proxy/: tls baz (200; 7.665201ms)
Apr 7 13:32:13.932: INFO: (7) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 2.777556ms)
Apr 7 13:32:13.932: INFO: (7) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:1080/proxy/: ... (200; 2.817032ms)
Apr 7 13:32:13.932: INFO: (7) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:460/proxy/: tls baz (200; 2.772446ms)
Apr 7 13:32:13.933: INFO: (7) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 4.39069ms)
Apr 7 13:32:13.934: INFO: (7) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:1080/proxy/: test<... (200; 4.463205ms)
Apr 7 13:32:13.935: INFO: (7) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz/proxy/: test (200; 5.608396ms)
Apr 7 13:32:13.935: INFO: (7) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname2/proxy/: bar (200; 5.721242ms)
Apr 7 13:32:13.935: INFO: (7) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname1/proxy/: tls baz (200; 5.844556ms)
Apr 7 13:32:13.935: INFO: (7) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname2/proxy/: bar (200; 5.900396ms)
Apr 7 13:32:13.937: INFO: (7) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname2/proxy/: tls qux (200; 7.432434ms)
Apr 7 13:32:13.937: INFO: (7) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname1/proxy/: foo (200; 7.49995ms)
Apr 7 13:32:13.937: INFO: (7) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:462/proxy/: tls qux (200; 7.433071ms)
Apr 7 13:32:13.937: INFO: (7) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 7.463843ms)
Apr 7 13:32:13.937: INFO: (7) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 7.495271ms)
Apr 7 13:32:13.937: INFO: (7) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:443/proxy/: test<... (200; 5.701669ms)
Apr 7 13:32:13.943: INFO: (8) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz/proxy/: test (200; 5.719824ms)
Apr 7 13:32:13.942: INFO: (8) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname2/proxy/: bar (200; 5.716071ms)
Apr 7 13:32:13.943: INFO: (8) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 5.889003ms)
Apr 7 13:32:13.942: INFO: (8) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname1/proxy/: tls baz (200; 5.672019ms)
Apr 7 13:32:13.943: INFO: (8) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:460/proxy/: tls baz (200; 5.660748ms)
Apr 7 13:32:13.943: INFO: (8) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:443/proxy/: ... (200; 5.88373ms)
Apr 7 13:32:13.943: INFO: (8) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 5.719342ms)
Apr 7 13:32:13.943: INFO: (8) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 5.794735ms)
Apr 7 13:32:13.943: INFO: (8) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname2/proxy/: bar (200; 5.797747ms)
Apr 7 13:32:13.946: INFO: (9) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:1080/proxy/: test<... (200; 2.894778ms)
Apr 7 13:32:13.948: INFO: (9) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:443/proxy/: ... (200; 4.626844ms)
Apr 7 13:32:13.948: INFO: (9) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 4.630846ms)
Apr 7 13:32:13.948: INFO: (9) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 4.672893ms)
Apr 7 13:32:13.948: INFO: (9) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 4.888463ms)
Apr 7 13:32:13.948: INFO: (9) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz/proxy/: test (200; 5.041901ms)
Apr 7 13:32:13.948: INFO: (9) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:462/proxy/: tls qux (200; 5.1008ms)
Apr 7 13:32:13.948: INFO: (9) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 5.200494ms)
Apr 7 13:32:13.948: INFO: (9) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname2/proxy/: bar (200; 5.556157ms)
Apr 7 13:32:13.948: INFO: (9) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:460/proxy/: tls baz (200; 5.644101ms)
Apr 7 13:32:13.949: INFO: (9) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname2/proxy/: tls qux (200; 6.02213ms)
Apr 7 13:32:13.949: INFO: (9) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname1/proxy/: tls baz (200; 6.042625ms)
Apr 7 13:32:13.949: INFO: (9) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname1/proxy/: foo (200; 6.12154ms)
Apr 7 13:32:13.950: INFO: (9) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname2/proxy/: bar (200; 6.76976ms)
Apr 7 13:32:13.950: INFO: (9) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname1/proxy/: foo (200; 6.671309ms)
Apr 7 13:32:13.952: INFO: (10) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 2.550005ms)
Apr 7 13:32:13.952: INFO: (10) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 2.766768ms)
Apr 7 13:32:13.953: INFO: (10) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz/proxy/: test (200; 3.51002ms)
Apr 7 13:32:13.953: INFO: (10) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:1080/proxy/: test<... (200; 3.454532ms)
Apr 7 13:32:13.955: INFO: (10) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:460/proxy/: tls baz (200; 5.152496ms)
Apr 7 13:32:13.956: INFO: (10) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:443/proxy/: ... (200; 6.174913ms)
Apr 7 13:32:13.956: INFO: (10) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname2/proxy/: tls qux (200; 6.162759ms)
Apr 7 13:32:13.956: INFO: (10) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname1/proxy/: foo (200; 6.233076ms)
Apr 7 13:32:13.956: INFO: (10) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 6.235119ms)
Apr 7 13:32:13.956: INFO: (10) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname1/proxy/: tls baz (200; 6.352168ms)
Apr 7 13:32:13.956: INFO: (10) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname2/proxy/: bar (200; 6.460533ms)
Apr 7 13:32:13.956: INFO: (10) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 6.562517ms)
Apr 7 13:32:13.957: INFO: (10) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname2/proxy/: bar (200; 6.924557ms)
Apr 7 13:32:13.960: INFO: (11) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 2.906116ms)
Apr 7 13:32:13.960: INFO: (11) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 3.223679ms)
Apr 7 13:32:13.960: INFO: (11) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:1080/proxy/: test<... (200; 3.200695ms)
Apr 7 13:32:13.960: INFO: (11) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:1080/proxy/: ... (200; 3.327866ms)
Apr 7 13:32:13.960: INFO: (11) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz/proxy/: test (200; 3.286582ms)
Apr 7 13:32:13.960: INFO: (11) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:460/proxy/: tls baz (200; 3.295444ms)
Apr 7 13:32:13.961: INFO: (11) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:443/proxy/: test (200; 2.592336ms)
Apr 7 13:32:13.966: INFO: (12) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 4.280713ms)
Apr 7 13:32:13.966: INFO: (12) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:1080/proxy/: ... (200; 4.323413ms)
Apr 7 13:32:13.966: INFO: (12) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:1080/proxy/: test<... (200; 4.292444ms)
Apr 7 13:32:13.966: INFO: (12) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:462/proxy/: tls qux (200; 4.449431ms)
Apr 7 13:32:13.966: INFO: (12) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 4.443709ms)
Apr 7 13:32:13.966: INFO: (12) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 4.494438ms)
Apr 7 13:32:13.966: INFO: (12) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 4.555161ms)
Apr 7 13:32:13.967: INFO: (12) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname2/proxy/: bar (200; 5.608604ms)
Apr 7 13:32:13.967: INFO: (12) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname2/proxy/: bar (200; 5.869622ms)
Apr 7 13:32:13.968: INFO: (12) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname2/proxy/: tls qux (200; 5.889256ms)
Apr 7 13:32:13.968: INFO: (12) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname1/proxy/: tls baz (200; 6.031202ms)
Apr 7 13:32:13.968: INFO: (12) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname1/proxy/: foo (200; 5.978783ms)
Apr 7 13:32:13.968: INFO: (12) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname1/proxy/: foo (200; 5.991479ms)
Apr 7 13:32:13.970: INFO: (13) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 2.417777ms)
Apr 7 13:32:13.970: INFO: (13) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:1080/proxy/: test<... (200; 2.674898ms)
Apr 7 13:32:13.970: INFO: (13) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz/proxy/: test (200; 2.737524ms)
Apr 7 13:32:13.972: INFO: (13) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:443/proxy/: ... (200; 9.495996ms)
Apr 7 13:32:13.977: INFO: (13) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname1/proxy/: tls baz (200; 9.496349ms)
Apr 7 13:32:13.978: INFO: (13) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname1/proxy/: foo (200; 10.11165ms)
Apr 7 13:32:13.978: INFO: (13) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname1/proxy/: foo (200; 10.217118ms)
Apr 7 13:32:13.982: INFO: (13) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 14.268223ms)
Apr 7 13:32:13.982: INFO: (13) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname2/proxy/: bar (200; 14.602126ms)
Apr 7 13:32:13.986: INFO: (13) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname2/proxy/: tls qux (200; 17.996203ms)
Apr 7 13:32:14.019: INFO: (14) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname1/proxy/: foo (200; 33.543731ms)
Apr 7 13:32:14.020: INFO: (14) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname1/proxy/: tls baz (200; 34.224889ms)
Apr 7 13:32:14.020: INFO: (14) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 34.256162ms)
Apr 7 13:32:14.020: INFO: (14) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:460/proxy/: tls baz
(200; 34.245273ms) Apr 7 13:32:14.020: INFO: (14) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 34.178848ms) Apr 7 13:32:14.020: INFO: (14) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 34.19229ms) Apr 7 13:32:14.020: INFO: (14) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname1/proxy/: foo (200; 34.152097ms) Apr 7 13:32:14.020: INFO: (14) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz/proxy/: test (200; 34.227319ms) Apr 7 13:32:14.020: INFO: (14) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname2/proxy/: bar (200; 34.23637ms) Apr 7 13:32:14.020: INFO: (14) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:462/proxy/: tls qux (200; 34.140441ms) Apr 7 13:32:14.020: INFO: (14) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname2/proxy/: bar (200; 34.174267ms) Apr 7 13:32:14.020: INFO: (14) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:443/proxy/: ... (200; 34.313074ms) Apr 7 13:32:14.020: INFO: (14) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:1080/proxy/: test<... (200; 34.205186ms) Apr 7 13:32:14.020: INFO: (14) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname2/proxy/: tls qux (200; 34.35695ms) Apr 7 13:32:14.020: INFO: (14) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 34.262627ms) Apr 7 13:32:14.024: INFO: (15) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname1/proxy/: foo (200; 3.479821ms) Apr 7 13:32:14.024: INFO: (15) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 3.440686ms) Apr 7 13:32:14.024: INFO: (15) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz/proxy/: test (200; 3.537008ms) Apr 7 13:32:14.024: INFO: (15) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:1080/proxy/: test<... 
(200; 4.08349ms) Apr 7 13:32:14.024: INFO: (15) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:1080/proxy/: ... (200; 4.18281ms) Apr 7 13:32:14.024: INFO: (15) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 4.075953ms) Apr 7 13:32:14.024: INFO: (15) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname2/proxy/: bar (200; 4.174887ms) Apr 7 13:32:14.024: INFO: (15) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname1/proxy/: foo (200; 4.224101ms) Apr 7 13:32:14.024: INFO: (15) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:443/proxy/: ... (200; 4.194942ms) Apr 7 13:32:14.029: INFO: (16) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:1080/proxy/: test<... (200; 4.386891ms) Apr 7 13:32:14.029: INFO: (16) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz/proxy/: test (200; 4.372689ms) Apr 7 13:32:14.029: INFO: (16) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:443/proxy/: test<... (200; 3.156997ms) Apr 7 13:32:14.035: INFO: (17) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 3.605496ms) Apr 7 13:32:14.035: INFO: (17) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname2/proxy/: bar (200; 3.400498ms) Apr 7 13:32:14.035: INFO: (17) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 3.785576ms) Apr 7 13:32:14.035: INFO: (17) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:1080/proxy/: ... 
(200; 4.203495ms) Apr 7 13:32:14.035: INFO: (17) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname1/proxy/: tls baz (200; 4.405623ms) Apr 7 13:32:14.035: INFO: (17) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:460/proxy/: tls baz (200; 4.055328ms) Apr 7 13:32:14.035: INFO: (17) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname2/proxy/: tls qux (200; 4.36662ms) Apr 7 13:32:14.035: INFO: (17) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname1/proxy/: foo (200; 3.616598ms) Apr 7 13:32:14.035: INFO: (17) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 4.191377ms) Apr 7 13:32:14.035: INFO: (17) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 3.715545ms) Apr 7 13:32:14.035: INFO: (17) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:443/proxy/: test (200; 4.036805ms) Apr 7 13:32:14.038: INFO: (18) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 2.740535ms) Apr 7 13:32:14.039: INFO: (18) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 3.721628ms) Apr 7 13:32:14.039: INFO: (18) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:462/proxy/: tls qux (200; 3.808171ms) Apr 7 13:32:14.039: INFO: (18) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:443/proxy/: test<... (200; 3.891696ms) Apr 7 13:32:14.039: INFO: (18) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:1080/proxy/: ... 
(200; 3.852288ms) Apr 7 13:32:14.039: INFO: (18) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 3.958379ms) Apr 7 13:32:14.039: INFO: (18) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz/proxy/: test (200; 4.063743ms) Apr 7 13:32:14.039: INFO: (18) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 4.181448ms) Apr 7 13:32:14.039: INFO: (18) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname1/proxy/: foo (200; 4.35597ms) Apr 7 13:32:14.040: INFO: (18) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname2/proxy/: bar (200; 4.621654ms) Apr 7 13:32:14.040: INFO: (18) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname2/proxy/: tls qux (200; 4.584086ms) Apr 7 13:32:14.040: INFO: (18) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname1/proxy/: tls baz (200; 4.69568ms) Apr 7 13:32:14.040: INFO: (18) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname2/proxy/: bar (200; 4.697203ms) Apr 7 13:32:14.040: INFO: (18) /api/v1/namespaces/proxy-1085/services/http:proxy-service-xj9mq:portname1/proxy/: foo (200; 4.722812ms) Apr 7 13:32:14.042: INFO: (19) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 1.877807ms) Apr 7 13:32:14.042: INFO: (19) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:462/proxy/: tls qux (200; 2.471898ms) Apr 7 13:32:14.044: INFO: (19) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:160/proxy/: foo (200; 3.962328ms) Apr 7 13:32:14.044: INFO: (19) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:162/proxy/: bar (200; 4.37579ms) Apr 7 13:32:14.044: INFO: (19) /api/v1/namespaces/proxy-1085/pods/proxy-service-xj9mq-jghcz:1080/proxy/: test<... 
(200; 4.54853ms) Apr 7 13:32:14.044: INFO: (19) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:460/proxy/: tls baz (200; 4.49868ms) Apr 7 13:32:14.045: INFO: (19) /api/v1/namespaces/proxy-1085/services/proxy-service-xj9mq:portname1/proxy/: foo (200; 5.206973ms) Apr 7 13:32:14.045: INFO: (19) /api/v1/namespaces/proxy-1085/pods/http:proxy-service-xj9mq-jghcz:1080/proxy/: ... (200; 5.109701ms) Apr 7 13:32:14.045: INFO: (19) /api/v1/namespaces/proxy-1085/services/https:proxy-service-xj9mq:tlsportname1/proxy/: tls baz (200; 5.537145ms) Apr 7 13:32:14.045: INFO: (19) /api/v1/namespaces/proxy-1085/pods/https:proxy-service-xj9mq-jghcz:443/proxy/: test (200; 5.592956ms) STEP: deleting ReplicationController proxy-service-xj9mq in namespace proxy-1085, will wait for the garbage collector to delete the pods Apr 7 13:32:14.103: INFO: Deleting ReplicationController proxy-service-xj9mq took: 6.0035ms Apr 7 13:32:14.403: INFO: Terminating ReplicationController proxy-service-xj9mq pods took: 300.287511ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:32:16.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1085" for this suite. 
Apr 7 13:32:22.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:32:22.496: INFO: namespace proxy-1085 deletion completed in 6.089048304s

• [SLOW TEST:17.790 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:32:22.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-9f40f770-eba4-464f-b412-ea061544823c
STEP: Creating a pod to test consume secrets
Apr 7 13:32:22.591: INFO: Waiting up to 5m0s for pod "pod-secrets-801f9e9e-81ce-428a-8251-3c82fe81e9e4" in namespace "secrets-3212" to be "success or failure"
Apr 7 13:32:22.594: INFO: Pod "pod-secrets-801f9e9e-81ce-428a-8251-3c82fe81e9e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.887139ms
Apr 7 13:32:24.599: INFO: Pod "pod-secrets-801f9e9e-81ce-428a-8251-3c82fe81e9e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007121646s
Apr 7 13:32:26.603: INFO: Pod "pod-secrets-801f9e9e-81ce-428a-8251-3c82fe81e9e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011717204s
STEP: Saw pod success
Apr 7 13:32:26.603: INFO: Pod "pod-secrets-801f9e9e-81ce-428a-8251-3c82fe81e9e4" satisfied condition "success or failure"
Apr 7 13:32:26.607: INFO: Trying to get logs from node iruya-worker pod pod-secrets-801f9e9e-81ce-428a-8251-3c82fe81e9e4 container secret-volume-test:
STEP: delete the pod
Apr 7 13:32:26.624: INFO: Waiting for pod pod-secrets-801f9e9e-81ce-428a-8251-3c82fe81e9e4 to disappear
Apr 7 13:32:26.629: INFO: Pod pod-secrets-801f9e9e-81ce-428a-8251-3c82fe81e9e4 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:32:26.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3212" for this suite.
Apr 7 13:32:32.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:32:32.767: INFO: namespace secrets-3212 deletion completed in 6.135901656s

• [SLOW TEST:10.270 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:32:32.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Apr 7 13:32:32.842: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Apr 7 13:32:41.877: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:32:41.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2761" for this suite.
Apr 7 13:32:47.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:32:48.008: INFO: namespace pods-2761 deletion completed in 6.121409274s

• [SLOW TEST:15.240 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:32:48.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 7 13:32:48.103: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e1883e0a-b887-459b-81cc-78118c88bbb7" in namespace "downward-api-9642" to be "success or failure"
Apr 7 13:32:48.110: INFO: Pod "downwardapi-volume-e1883e0a-b887-459b-81cc-78118c88bbb7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.057186ms
Apr 7 13:32:50.115: INFO: Pod "downwardapi-volume-e1883e0a-b887-459b-81cc-78118c88bbb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011233606s
Apr 7 13:32:52.119: INFO: Pod "downwardapi-volume-e1883e0a-b887-459b-81cc-78118c88bbb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015567786s
STEP: Saw pod success
Apr 7 13:32:52.119: INFO: Pod "downwardapi-volume-e1883e0a-b887-459b-81cc-78118c88bbb7" satisfied condition "success or failure"
Apr 7 13:32:52.122: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e1883e0a-b887-459b-81cc-78118c88bbb7 container client-container:
STEP: delete the pod
Apr 7 13:32:52.199: INFO: Waiting for pod downwardapi-volume-e1883e0a-b887-459b-81cc-78118c88bbb7 to disappear
Apr 7 13:32:52.210: INFO: Pod downwardapi-volume-e1883e0a-b887-459b-81cc-78118c88bbb7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:32:52.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9642" for this suite.
Apr 7 13:32:58.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:32:58.291: INFO: namespace downward-api-9642 deletion completed in 6.077168704s

• [SLOW TEST:10.283 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:32:58.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 7 13:32:58.331: INFO: Creating deployment "test-recreate-deployment"
Apr 7 13:32:58.364: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Apr 7 13:32:58.379: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Apr 7 13:33:00.386: INFO: Waiting deployment "test-recreate-deployment" to complete
Apr 7 13:33:00.388: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863178, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863178, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863178, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863178, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 7 13:33:02.392: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Apr 7 13:33:02.400: INFO: Updating deployment test-recreate-deployment
Apr 7 13:33:02.400: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Apr 7 13:33:02.714: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-973,SelfLink:/apis/apps/v1/namespaces/deployment-973/deployments/test-recreate-deployment,UID:15d6ee00-b318-439e-945a-f670667c6dcd,ResourceVersion:4126352,Generation:2,CreationTimestamp:2020-04-07 13:32:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-04-07 13:33:02 +0000 UTC 2020-04-07 13:33:02 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-04-07 13:33:02 +0000 UTC 2020-04-07 13:32:58 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Apr 7 13:33:02.766: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-973,SelfLink:/apis/apps/v1/namespaces/deployment-973/replicasets/test-recreate-deployment-5c8c9cc69d,UID:9da3be5e-fc8a-45af-a74d-e6c672d913c6,ResourceVersion:4126351,Generation:1,CreationTimestamp:2020-04-07 13:33:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 15d6ee00-b318-439e-945a-f670667c6dcd 0xc002671b37 0xc002671b38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 7 13:33:02.766: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 7 13:33:02.766: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-973,SelfLink:/apis/apps/v1/namespaces/deployment-973/replicasets/test-recreate-deployment-6df85df6b9,UID:52530044-d523-4387-a51c-fc8182d8720c,ResourceVersion:4126341,Generation:2,CreationTimestamp:2020-04-07 13:32:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 15d6ee00-b318-439e-945a-f670667c6dcd 0xc002671c27 0xc002671c28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 7 13:33:02.771: INFO: Pod "test-recreate-deployment-5c8c9cc69d-bxt95" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-bxt95,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-973,SelfLink:/api/v1/namespaces/deployment-973/pods/test-recreate-deployment-5c8c9cc69d-bxt95,UID:0535d9df-38f8-4eec-8040-d1d367ef025e,ResourceVersion:4126353,Generation:0,CreationTimestamp:2020-04-07 13:33:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 9da3be5e-fc8a-45af-a74d-e6c672d913c6 0xc002e064e7 0xc002e064e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j29ql {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j29ql,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-j29ql true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e06560} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e06580}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 13:33:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 13:33:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 13:33:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 13:33:02 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-07 13:33:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:33:02.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-973" for this suite. 
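The "test-recreate-deployment" run above exercises the Recreate strategy: note the old ReplicaSet dump reporting `Replicas:*0` while the replacement pod is still Pending. The real controller is Go; as a toy Python sketch of the ordering guarantee (old pods must be gone before new ones start), with made-up dict fields:

```python
def recreate_rollout(old_rs, new_rs):
    """Toy model of the Recreate deployment strategy: scale the old
    ReplicaSet down to zero and wait for its pods to terminate before
    scaling the new ReplicaSet up. Fields here are illustrative only."""
    old_rs["replicas"] = 0
    old_rs["pods"].clear()  # stand-in for waiting on pod termination
    assert not old_rs["pods"], "old pods must be gone before new ones start"
    new_rs["replicas"] = 1
    new_rs["pods"].append("new-pod")
    return old_rs, new_rs

old = {"name": "test-recreate-deployment-6df85df6b9", "replicas": 1, "pods": ["old-pod"]}
new = {"name": "test-recreate-deployment-5c8c9cc69d", "replicas": 0, "pods": []}
old, new = recreate_rollout(old, new)
```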
Apr 7 13:33:09.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:33:09.130: INFO: namespace deployment-973 deletion completed in 6.112143972s • [SLOW TEST:10.838 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:33:09.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-abbb4c8c-f714-45cd-94b9-e05b6e190412 STEP: Creating a pod to test consume configMaps Apr 7 13:33:09.208: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-68f43c6e-4990-4003-b9f0-4e3efe08e8a1" in namespace "projected-4069" to be "success or failure" Apr 7 13:33:09.210: INFO: Pod "pod-projected-configmaps-68f43c6e-4990-4003-b9f0-4e3efe08e8a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.311816ms Apr 7 13:33:11.215: INFO: Pod "pod-projected-configmaps-68f43c6e-4990-4003-b9f0-4e3efe08e8a1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007081489s Apr 7 13:33:13.219: INFO: Pod "pod-projected-configmaps-68f43c6e-4990-4003-b9f0-4e3efe08e8a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011037824s STEP: Saw pod success Apr 7 13:33:13.219: INFO: Pod "pod-projected-configmaps-68f43c6e-4990-4003-b9f0-4e3efe08e8a1" satisfied condition "success or failure" Apr 7 13:33:13.222: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-68f43c6e-4990-4003-b9f0-4e3efe08e8a1 container projected-configmap-volume-test: STEP: delete the pod Apr 7 13:33:13.244: INFO: Waiting for pod pod-projected-configmaps-68f43c6e-4990-4003-b9f0-4e3efe08e8a1 to disappear Apr 7 13:33:13.249: INFO: Pod pod-projected-configmaps-68f43c6e-4990-4003-b9f0-4e3efe08e8a1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:33:13.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4069" for this suite. 
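The "Waiting up to 5m0s for pod ... to be 'success or failure'" lines above show the e2e framework's poll loop: check the pod phase roughly every 2 seconds until it reaches a terminal phase or the timeout expires. A minimal Python sketch of that loop (the real framework is Go; names here are illustrative), driven by a fake status source so it runs without a cluster:

```python
import time

def wait_for_condition(get_phase, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() every `interval` seconds until it returns a
    terminal pod phase or `timeout` elapses, mirroring the 2s polls
    and 5m cap seen in the log above."""
    start = clock()
    while clock() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")

# Fake status source: Pending twice, then Succeeded, as in the log.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_condition(lambda: next(phases), sleep=lambda s: None)
```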
Apr 7 13:33:19.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:33:19.360: INFO: namespace projected-4069 deletion completed in 6.10739425s • [SLOW TEST:10.230 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:33:19.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Apr 7 13:33:19.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Apr 7 13:33:19.561: INFO: stderr: "" Apr 7 13:33:19.562: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:33:19.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3390" for this suite. 
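The api-versions conformance check above reduces to: split the `kubectl api-versions` stdout on newlines and confirm the bare core group `v1` is present. A Python sketch using a few of the group/version strings from the captured stdout (the real list above is longer):

```python
# A few of the group/version strings from the `kubectl api-versions`
# stdout captured in the log above; the full list is longer.
api_versions = [
    "admissionregistration.k8s.io/v1beta1",
    "apps/v1",
    "batch/v1",
    "networking.k8s.io/v1",
    "v1",
]

def has_core_v1(versions):
    """The check boils down to: the bare "v1" core group must appear
    among the server's advertised group/versions."""
    return "v1" in versions

ok = has_core_v1(api_versions)
```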
Apr 7 13:33:25.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:33:25.669: INFO: namespace kubectl-3390 deletion completed in 6.102402479s • [SLOW TEST:6.308 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:33:25.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 7 13:33:25.728: INFO: Waiting up to 5m0s for pod "pod-a563a4bb-81e6-4a6f-ac22-20f36fe57279" in namespace "emptydir-7369" to be "success or failure" Apr 7 13:33:25.743: INFO: Pod "pod-a563a4bb-81e6-4a6f-ac22-20f36fe57279": Phase="Pending", Reason="", readiness=false. Elapsed: 14.480703ms Apr 7 13:33:27.747: INFO: Pod "pod-a563a4bb-81e6-4a6f-ac22-20f36fe57279": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.018666584s Apr 7 13:33:29.753: INFO: Pod "pod-a563a4bb-81e6-4a6f-ac22-20f36fe57279": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02531499s STEP: Saw pod success Apr 7 13:33:29.753: INFO: Pod "pod-a563a4bb-81e6-4a6f-ac22-20f36fe57279" satisfied condition "success or failure" Apr 7 13:33:29.756: INFO: Trying to get logs from node iruya-worker pod pod-a563a4bb-81e6-4a6f-ac22-20f36fe57279 container test-container: STEP: delete the pod Apr 7 13:33:29.776: INFO: Waiting for pod pod-a563a4bb-81e6-4a6f-ac22-20f36fe57279 to disappear Apr 7 13:33:29.780: INFO: Pod pod-a563a4bb-81e6-4a6f-ac22-20f36fe57279 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:33:29.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7369" for this suite. Apr 7 13:33:35.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:33:35.869: INFO: namespace emptydir-7369 deletion completed in 6.086334179s • [SLOW TEST:10.200 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:33:35.870: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 7 13:33:35.933: INFO: Waiting up to 5m0s for pod "downwardapi-volume-582b7162-356d-4d03-b639-acf4a213850a" in namespace "downward-api-209" to be "success or failure" Apr 7 13:33:35.936: INFO: Pod "downwardapi-volume-582b7162-356d-4d03-b639-acf4a213850a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.372761ms Apr 7 13:33:37.940: INFO: Pod "downwardapi-volume-582b7162-356d-4d03-b639-acf4a213850a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00760828s Apr 7 13:33:39.944: INFO: Pod "downwardapi-volume-582b7162-356d-4d03-b639-acf4a213850a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011582957s STEP: Saw pod success Apr 7 13:33:39.944: INFO: Pod "downwardapi-volume-582b7162-356d-4d03-b639-acf4a213850a" satisfied condition "success or failure" Apr 7 13:33:39.947: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-582b7162-356d-4d03-b639-acf4a213850a container client-container: STEP: delete the pod Apr 7 13:33:39.983: INFO: Waiting for pod downwardapi-volume-582b7162-356d-4d03-b639-acf4a213850a to disappear Apr 7 13:33:39.997: INFO: Pod downwardapi-volume-582b7162-356d-4d03-b639-acf4a213850a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:33:39.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-209" for this suite. Apr 7 13:33:46.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:33:46.092: INFO: namespace downward-api-209 deletion completed in 6.091031565s • [SLOW TEST:10.222 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:33:46.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Apr 7 13:33:46.145: INFO: Waiting up to 5m0s for pod "var-expansion-4c79658b-436f-461b-8f0a-733d1f13b57e" in namespace "var-expansion-4518" to be "success or failure" Apr 7 13:33:46.165: INFO: Pod "var-expansion-4c79658b-436f-461b-8f0a-733d1f13b57e": Phase="Pending", Reason="", readiness=false. Elapsed: 19.85131ms Apr 7 13:33:48.179: INFO: Pod "var-expansion-4c79658b-436f-461b-8f0a-733d1f13b57e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03340376s Apr 7 13:33:50.183: INFO: Pod "var-expansion-4c79658b-436f-461b-8f0a-733d1f13b57e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037770149s STEP: Saw pod success Apr 7 13:33:50.183: INFO: Pod "var-expansion-4c79658b-436f-461b-8f0a-733d1f13b57e" satisfied condition "success or failure" Apr 7 13:33:50.186: INFO: Trying to get logs from node iruya-worker pod var-expansion-4c79658b-436f-461b-8f0a-733d1f13b57e container dapi-container: STEP: delete the pod Apr 7 13:33:50.202: INFO: Waiting for pod var-expansion-4c79658b-436f-461b-8f0a-733d1f13b57e to disappear Apr 7 13:33:50.220: INFO: Pod var-expansion-4c79658b-436f-461b-8f0a-733d1f13b57e no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:33:50.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4518" for this suite. 
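The Variable Expansion test above substitutes `$(VAR)` references in a container's command from its environment. A rough Python sketch of that substitution (a hedged reading of the documented Kubernetes behavior: `$(NAME)` is replaced, `$$` escapes a literal `$`, and unresolved references are left verbatim; the real implementation is Go):

```python
import re

def expand_command(args, env):
    """Sketch of Kubernetes $(VAR) command expansion: $(NAME) is
    replaced from `env`, $$(NAME) escapes to a literal $(NAME), and
    unknown references are left as-is."""
    def sub(match):
        if match.group(0).startswith("$$"):
            return match.group(0)[1:]          # $$(X) -> $(X)
        name = match.group(1)
        return env.get(name, match.group(0))   # unknown var stays verbatim
    return [re.sub(r"\$\$?\(([^)]+)\)", sub, a) for a in args]

# Hypothetical usage resembling the test's substituted command:
expanded = expand_command(["echo", "$(MESSAGE)"], {"MESSAGE": "test-msg"})
```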
Apr 7 13:33:56.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:33:56.328: INFO: namespace var-expansion-4518 deletion completed in 6.104985667s • [SLOW TEST:10.235 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:33:56.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-42594e8c-5ea2-44eb-a54f-ff3ecbd79687 STEP: Creating a pod to test consume configMaps Apr 7 13:33:56.418: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ad3b1a89-2cf6-43ff-8c06-c153f0d847b0" in namespace "projected-1114" to be "success or failure" Apr 7 13:33:56.432: INFO: Pod "pod-projected-configmaps-ad3b1a89-2cf6-43ff-8c06-c153f0d847b0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.263805ms Apr 7 13:33:58.436: INFO: Pod "pod-projected-configmaps-ad3b1a89-2cf6-43ff-8c06-c153f0d847b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017911497s Apr 7 13:34:00.441: INFO: Pod "pod-projected-configmaps-ad3b1a89-2cf6-43ff-8c06-c153f0d847b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022734506s STEP: Saw pod success Apr 7 13:34:00.441: INFO: Pod "pod-projected-configmaps-ad3b1a89-2cf6-43ff-8c06-c153f0d847b0" satisfied condition "success or failure" Apr 7 13:34:00.445: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-ad3b1a89-2cf6-43ff-8c06-c153f0d847b0 container projected-configmap-volume-test: STEP: delete the pod Apr 7 13:34:00.492: INFO: Waiting for pod pod-projected-configmaps-ad3b1a89-2cf6-43ff-8c06-c153f0d847b0 to disappear Apr 7 13:34:00.511: INFO: Pod pod-projected-configmaps-ad3b1a89-2cf6-43ff-8c06-c153f0d847b0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:34:00.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1114" for this suite. 
Apr 7 13:34:06.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:34:06.607: INFO: namespace projected-1114 deletion completed in 6.091740724s • [SLOW TEST:10.279 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:34:06.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 7 13:34:11.220: INFO: Successfully updated pod "annotationupdate50c7376a-56f6-4e3c-90e1-c54596b63deb" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:34:13.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5243" for this suite. 
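The projected downwardAPI test above updates pod annotations and waits for the volume file to reflect them. As a sketch of the file format a downward API volume produces for `metadata.annotations` (hedged: one `key="value"` pair per line; the real kubelet also escapes values, which this toy renderer skips):

```python
def render_annotations(annotations):
    """Toy renderer for a downward API annotations file:
    one key="value" pair per line, keys sorted for determinism."""
    return "\n".join(f'{k}="{v}"' for k, v in sorted(annotations.items()))

content = render_annotations({"build": "two", "builder": "john-doe"})
```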
Apr 7 13:34:35.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:34:35.376: INFO: namespace projected-5243 deletion completed in 22.096627846s • [SLOW TEST:28.769 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:34:35.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0407 13:34:36.522123 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 7 13:34:36.522: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:34:36.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4390" for this suite. 
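The garbage collector test above deletes a Deployment without orphaning and expects its ReplicaSet (and that RS's pods) to be collected via ownerReferences. A toy in-memory Python sketch of that cascade (the real GC is the Go controller; the dict shape here is illustrative):

```python
def cascade_delete(objects, deleted_uid):
    """Toy non-orphaning garbage collection: repeatedly remove any
    object whose ownerReference points at something no longer present,
    as the GC test expects for the Deployment's ReplicaSet and pods."""
    gone = {deleted_uid}
    changed = True
    while changed:
        changed = False
        for obj in list(objects):
            if obj.get("owner") in gone:
                gone.add(obj["uid"])
                objects.remove(obj)
                changed = True
    return objects

cluster = [
    {"uid": "rs-1", "kind": "ReplicaSet", "owner": "deploy-1"},
    {"uid": "pod-1", "kind": "Pod", "owner": "rs-1"},
]
remaining = cascade_delete(cluster, "deploy-1")
```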
Apr 7 13:34:42.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:34:42.618: INFO: namespace gc-4390 deletion completed in 6.092763188s • [SLOW TEST:7.241 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:34:42.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-6930 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 7 13:34:42.711: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 7 13:35:04.847: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.123:8080/dial?request=hostName&protocol=http&host=10.244.1.122&port=8080&tries=1'] Namespace:pod-network-test-6930 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false}
Apr 7 13:35:04.847: INFO: >>> kubeConfig: /root/.kube/config
I0407 13:35:04.876624 6 log.go:172] (0xc000dc0630) (0xc00303cf00) Create stream
I0407 13:35:04.876662 6 log.go:172] (0xc000dc0630) (0xc00303cf00) Stream added, broadcasting: 1
I0407 13:35:04.879444 6 log.go:172] (0xc000dc0630) Reply frame received for 1
I0407 13:35:04.879486 6 log.go:172] (0xc000dc0630) (0xc002213c20) Create stream
I0407 13:35:04.879507 6 log.go:172] (0xc000dc0630) (0xc002213c20) Stream added, broadcasting: 3
I0407 13:35:04.880939 6 log.go:172] (0xc000dc0630) Reply frame received for 3
I0407 13:35:04.880981 6 log.go:172] (0xc000dc0630) (0xc00303cfa0) Create stream
I0407 13:35:04.880996 6 log.go:172] (0xc000dc0630) (0xc00303cfa0) Stream added, broadcasting: 5
I0407 13:35:04.882253 6 log.go:172] (0xc000dc0630) Reply frame received for 5
I0407 13:35:04.950573 6 log.go:172] (0xc000dc0630) Data frame received for 3
I0407 13:35:04.950663 6 log.go:172] (0xc002213c20) (3) Data frame handling
I0407 13:35:04.950736 6 log.go:172] (0xc002213c20) (3) Data frame sent
I0407 13:35:04.951081 6 log.go:172] (0xc000dc0630) Data frame received for 5
I0407 13:35:04.951106 6 log.go:172] (0xc00303cfa0) (5) Data frame handling
I0407 13:35:04.951126 6 log.go:172] (0xc000dc0630) Data frame received for 3
I0407 13:35:04.951138 6 log.go:172] (0xc002213c20) (3) Data frame handling
I0407 13:35:04.952591 6 log.go:172] (0xc000dc0630) Data frame received for 1
I0407 13:35:04.952617 6 log.go:172] (0xc00303cf00) (1) Data frame handling
I0407 13:35:04.952644 6 log.go:172] (0xc00303cf00) (1) Data frame sent
I0407 13:35:04.952667 6 log.go:172] (0xc000dc0630) (0xc00303cf00) Stream removed, broadcasting: 1
I0407 13:35:04.952777 6 log.go:172] (0xc000dc0630) (0xc00303cf00) Stream removed, broadcasting: 1
I0407 13:35:04.952800 6 log.go:172] (0xc000dc0630) (0xc002213c20) Stream removed, broadcasting: 3
I0407 13:35:04.952964 6 log.go:172] (0xc000dc0630) (0xc00303cfa0) Stream removed, broadcasting: 5
Apr 7 13:35:04.953: INFO: Waiting for endpoints: map[]
I0407 13:35:04.953720 6 log.go:172] (0xc000dc0630) Go away received
Apr 7 13:35:04.956: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.123:8080/dial?request=hostName&protocol=http&host=10.244.2.42&port=8080&tries=1'] Namespace:pod-network-test-6930 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 7 13:35:04.956: INFO: >>> kubeConfig: /root/.kube/config
I0407 13:35:04.991060 6 log.go:172] (0xc001420fd0) (0xc0001128c0) Create stream
I0407 13:35:04.991092 6 log.go:172] (0xc001420fd0) (0xc0001128c0) Stream added, broadcasting: 1
I0407 13:35:04.994412 6 log.go:172] (0xc001420fd0) Reply frame received for 1
I0407 13:35:04.994462 6 log.go:172] (0xc001420fd0) (0xc000a6adc0) Create stream
I0407 13:35:04.994475 6 log.go:172] (0xc001420fd0) (0xc000a6adc0) Stream added, broadcasting: 3
I0407 13:35:04.995438 6 log.go:172] (0xc001420fd0) Reply frame received for 3
I0407 13:35:04.995474 6 log.go:172] (0xc001420fd0) (0xc000a6ae60) Create stream
I0407 13:35:04.995485 6 log.go:172] (0xc001420fd0) (0xc000a6ae60) Stream added, broadcasting: 5
I0407 13:35:04.996414 6 log.go:172] (0xc001420fd0) Reply frame received for 5
I0407 13:35:05.069524 6 log.go:172] (0xc001420fd0) Data frame received for 3
I0407 13:35:05.069569 6 log.go:172] (0xc000a6adc0) (3) Data frame handling
I0407 13:35:05.069590 6 log.go:172] (0xc000a6adc0) (3) Data frame sent
I0407 13:35:05.070012 6 log.go:172] (0xc001420fd0) Data frame received for 3
I0407 13:35:05.070033 6 log.go:172] (0xc000a6adc0) (3) Data frame handling
I0407 13:35:05.070088 6 log.go:172] (0xc001420fd0) Data frame received for 5
I0407 13:35:05.070119 6 log.go:172] (0xc000a6ae60) (5) Data frame handling
I0407 13:35:05.071648 6 log.go:172] (0xc001420fd0) Data frame received for 1
I0407 13:35:05.071662 6 log.go:172] (0xc0001128c0) (1) Data frame handling
I0407 13:35:05.071669 6 log.go:172] (0xc0001128c0) (1) Data frame sent
I0407 13:35:05.071805 6 log.go:172] (0xc001420fd0) (0xc0001128c0) Stream removed, broadcasting: 1
I0407 13:35:05.071854 6 log.go:172] (0xc001420fd0) Go away received
I0407 13:35:05.071971 6 log.go:172] (0xc001420fd0) (0xc0001128c0) Stream removed, broadcasting: 1
I0407 13:35:05.072005 6 log.go:172] (0xc001420fd0) (0xc000a6adc0) Stream removed, broadcasting: 3
I0407 13:35:05.072028 6 log.go:172] (0xc001420fd0) (0xc000a6ae60) Stream removed, broadcasting: 5
Apr 7 13:35:05.072: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:35:05.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6930" for this suite.
Apr 7 13:35:29.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:35:29.159: INFO: namespace pod-network-test-6930 deletion completed in 24.083415609s
• [SLOW TEST:46.542 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:35:29.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Apr 7 13:35:29.214: INFO: Pod name pod-release: Found 0 pods out of 1
Apr 7 13:35:34.220: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:35:35.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3049" for this suite.
Apr 7 13:35:41.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:35:41.408: INFO: namespace replication-controller-3049 deletion completed in 6.148396804s
• [SLOW TEST:12.247 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:35:41.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-1890, will wait for the garbage collector to delete the pods
Apr 7 13:35:45.518: INFO: Deleting Job.batch foo took: 5.790166ms
Apr 7 13:35:45.618: INFO: Terminating Job.batch foo pods took: 100.23228ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:36:22.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1890" for this suite.
Apr 7 13:36:28.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:36:28.332: INFO: namespace job-1890 deletion completed in 6.106105512s
• [SLOW TEST:46.923 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:36:28.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-5905/configmap-test-07621949-2996-4dfb-b576-855ae293efa8
STEP: Creating a pod to test consume configMaps
Apr 7 13:36:28.421: INFO: Waiting up to 5m0s for pod "pod-configmaps-95b44786-5d5a-4578-b566-89be238b493e" in namespace "configmap-5905" to be "success or failure"
Apr 7 13:36:28.425: INFO: Pod "pod-configmaps-95b44786-5d5a-4578-b566-89be238b493e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.70701ms
Apr 7 13:36:30.429: INFO: Pod "pod-configmaps-95b44786-5d5a-4578-b566-89be238b493e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007739254s
Apr 7 13:36:32.434: INFO: Pod "pod-configmaps-95b44786-5d5a-4578-b566-89be238b493e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012157291s
STEP: Saw pod success
Apr 7 13:36:32.434: INFO: Pod "pod-configmaps-95b44786-5d5a-4578-b566-89be238b493e" satisfied condition "success or failure"
Apr 7 13:36:32.437: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-95b44786-5d5a-4578-b566-89be238b493e container env-test:
STEP: delete the pod
Apr 7 13:36:32.542: INFO: Waiting for pod pod-configmaps-95b44786-5d5a-4578-b566-89be238b493e to disappear
Apr 7 13:36:32.556: INFO: Pod pod-configmaps-95b44786-5d5a-4578-b566-89be238b493e no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:36:32.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5905" for this suite.
Apr 7 13:36:38.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:36:38.672: INFO: namespace configmap-5905 deletion completed in 6.112867384s
• [SLOW TEST:10.341 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:36:38.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Apr 7 13:36:38.732: INFO: Waiting up to 5m0s for pod "client-containers-f6fdba5d-dae3-45d8-94b6-17393d1440a9" in namespace "containers-3074" to be "success or failure"
Apr 7 13:36:38.736: INFO: Pod "client-containers-f6fdba5d-dae3-45d8-94b6-17393d1440a9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.878613ms
Apr 7 13:36:40.740: INFO: Pod "client-containers-f6fdba5d-dae3-45d8-94b6-17393d1440a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007445335s
Apr 7 13:36:42.744: INFO: Pod "client-containers-f6fdba5d-dae3-45d8-94b6-17393d1440a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011406301s
STEP: Saw pod success
Apr 7 13:36:42.744: INFO: Pod "client-containers-f6fdba5d-dae3-45d8-94b6-17393d1440a9" satisfied condition "success or failure"
Apr 7 13:36:42.746: INFO: Trying to get logs from node iruya-worker pod client-containers-f6fdba5d-dae3-45d8-94b6-17393d1440a9 container test-container:
STEP: delete the pod
Apr 7 13:36:42.779: INFO: Waiting for pod client-containers-f6fdba5d-dae3-45d8-94b6-17393d1440a9 to disappear
Apr 7 13:36:42.784: INFO: Pod client-containers-f6fdba5d-dae3-45d8-94b6-17393d1440a9 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:36:42.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3074" for this suite.
Apr 7 13:36:48.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:36:48.877: INFO: namespace containers-3074 deletion completed in 6.089451014s
• [SLOW TEST:10.204 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:36:48.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Apr 7 13:36:48.931: INFO: Waiting up to 5m0s for pod "client-containers-286686a2-e171-41b8-8132-0ea4f1783ab4" in namespace "containers-7041" to be "success or failure"
Apr 7 13:36:48.946: INFO: Pod "client-containers-286686a2-e171-41b8-8132-0ea4f1783ab4": Phase="Pending", Reason="", readiness=false. Elapsed: 15.368809ms
Apr 7 13:36:50.950: INFO: Pod "client-containers-286686a2-e171-41b8-8132-0ea4f1783ab4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019683844s
Apr 7 13:36:52.953: INFO: Pod "client-containers-286686a2-e171-41b8-8132-0ea4f1783ab4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022694888s
STEP: Saw pod success
Apr 7 13:36:52.954: INFO: Pod "client-containers-286686a2-e171-41b8-8132-0ea4f1783ab4" satisfied condition "success or failure"
Apr 7 13:36:52.956: INFO: Trying to get logs from node iruya-worker2 pod client-containers-286686a2-e171-41b8-8132-0ea4f1783ab4 container test-container:
STEP: delete the pod
Apr 7 13:36:52.971: INFO: Waiting for pod client-containers-286686a2-e171-41b8-8132-0ea4f1783ab4 to disappear
Apr 7 13:36:53.008: INFO: Pod client-containers-286686a2-e171-41b8-8132-0ea4f1783ab4 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:36:53.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7041" for this suite.
Apr 7 13:36:59.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:36:59.122: INFO: namespace containers-7041 deletion completed in 6.110519571s
• [SLOW TEST:10.244 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:36:59.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-0d0f27dd-96a5-4bae-bc5d-85b3c28e1299
STEP: Creating a pod to test consume configMaps
Apr 7 13:36:59.202: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3d4036e8-dead-481e-89d9-c69631513a02" in namespace "projected-2512" to be "success or failure"
Apr 7 13:36:59.216: INFO: Pod "pod-projected-configmaps-3d4036e8-dead-481e-89d9-c69631513a02": Phase="Pending", Reason="", readiness=false. Elapsed: 14.187498ms
Apr 7 13:37:01.220: INFO: Pod "pod-projected-configmaps-3d4036e8-dead-481e-89d9-c69631513a02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018087215s
Apr 7 13:37:03.224: INFO: Pod "pod-projected-configmaps-3d4036e8-dead-481e-89d9-c69631513a02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022063882s
STEP: Saw pod success
Apr 7 13:37:03.224: INFO: Pod "pod-projected-configmaps-3d4036e8-dead-481e-89d9-c69631513a02" satisfied condition "success or failure"
Apr 7 13:37:03.227: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-3d4036e8-dead-481e-89d9-c69631513a02 container projected-configmap-volume-test:
STEP: delete the pod
Apr 7 13:37:03.285: INFO: Waiting for pod pod-projected-configmaps-3d4036e8-dead-481e-89d9-c69631513a02 to disappear
Apr 7 13:37:03.294: INFO: Pod pod-projected-configmaps-3d4036e8-dead-481e-89d9-c69631513a02 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:37:03.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2512" for this suite.
Apr 7 13:37:09.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:37:09.388: INFO: namespace projected-2512 deletion completed in 6.089769467s
• [SLOW TEST:10.266 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:37:09.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-kpfh
STEP: Creating a pod to test atomic-volume-subpath
Apr 7 13:37:09.452: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-kpfh" in namespace "subpath-2941" to be "success or failure"
Apr 7 13:37:09.456: INFO: Pod "pod-subpath-test-secret-kpfh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042311ms
Apr 7 13:37:11.460: INFO: Pod "pod-subpath-test-secret-kpfh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008491105s
Apr 7 13:37:13.465: INFO: Pod "pod-subpath-test-secret-kpfh": Phase="Running", Reason="", readiness=true. Elapsed: 4.0130454s
Apr 7 13:37:15.469: INFO: Pod "pod-subpath-test-secret-kpfh": Phase="Running", Reason="", readiness=true. Elapsed: 6.017167861s
Apr 7 13:37:17.472: INFO: Pod "pod-subpath-test-secret-kpfh": Phase="Running", Reason="", readiness=true. Elapsed: 8.020742754s
Apr 7 13:37:19.477: INFO: Pod "pod-subpath-test-secret-kpfh": Phase="Running", Reason="", readiness=true. Elapsed: 10.025090203s
Apr 7 13:37:21.480: INFO: Pod "pod-subpath-test-secret-kpfh": Phase="Running", Reason="", readiness=true. Elapsed: 12.02872606s
Apr 7 13:37:23.484: INFO: Pod "pod-subpath-test-secret-kpfh": Phase="Running", Reason="", readiness=true. Elapsed: 14.032621474s
Apr 7 13:37:25.488: INFO: Pod "pod-subpath-test-secret-kpfh": Phase="Running", Reason="", readiness=true. Elapsed: 16.036392479s
Apr 7 13:37:27.493: INFO: Pod "pod-subpath-test-secret-kpfh": Phase="Running", Reason="", readiness=true. Elapsed: 18.040806575s
Apr 7 13:37:29.497: INFO: Pod "pod-subpath-test-secret-kpfh": Phase="Running", Reason="", readiness=true. Elapsed: 20.045057675s
Apr 7 13:37:31.500: INFO: Pod "pod-subpath-test-secret-kpfh": Phase="Running", Reason="", readiness=true. Elapsed: 22.048695275s
Apr 7 13:37:33.505: INFO: Pod "pod-subpath-test-secret-kpfh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.053010042s
STEP: Saw pod success
Apr 7 13:37:33.505: INFO: Pod "pod-subpath-test-secret-kpfh" satisfied condition "success or failure"
Apr 7 13:37:33.508: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-kpfh container test-container-subpath-secret-kpfh:
STEP: delete the pod
Apr 7 13:37:33.539: INFO: Waiting for pod pod-subpath-test-secret-kpfh to disappear
Apr 7 13:37:33.625: INFO: Pod pod-subpath-test-secret-kpfh no longer exists
STEP: Deleting pod pod-subpath-test-secret-kpfh
Apr 7 13:37:33.625: INFO: Deleting pod "pod-subpath-test-secret-kpfh" in namespace "subpath-2941"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:37:33.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2941" for this suite.
Apr 7 13:37:39.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:37:39.727: INFO: namespace subpath-2941 deletion completed in 6.096493069s
• [SLOW TEST:30.339 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:37:39.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-6353475b-005c-46ea-a3cd-552d4708adec
STEP: Creating a pod to test consume configMaps
Apr 7 13:37:39.799: INFO: Waiting up to 5m0s for pod "pod-configmaps-84874c2a-e800-449e-91c2-f15f33708d88" in namespace "configmap-6667" to be "success or failure"
Apr 7 13:37:39.803: INFO: Pod "pod-configmaps-84874c2a-e800-449e-91c2-f15f33708d88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126687ms
Apr 7 13:37:41.829: INFO: Pod "pod-configmaps-84874c2a-e800-449e-91c2-f15f33708d88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029808945s
Apr 7 13:37:43.833: INFO: Pod "pod-configmaps-84874c2a-e800-449e-91c2-f15f33708d88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03356504s
STEP: Saw pod success
Apr 7 13:37:43.833: INFO: Pod "pod-configmaps-84874c2a-e800-449e-91c2-f15f33708d88" satisfied condition "success or failure"
Apr 7 13:37:43.836: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-84874c2a-e800-449e-91c2-f15f33708d88 container configmap-volume-test:
STEP: delete the pod
Apr 7 13:37:43.858: INFO: Waiting for pod pod-configmaps-84874c2a-e800-449e-91c2-f15f33708d88 to disappear
Apr 7 13:37:43.862: INFO: Pod pod-configmaps-84874c2a-e800-449e-91c2-f15f33708d88 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:37:43.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6667" for this suite.
Apr 7 13:37:49.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:37:49.949: INFO: namespace configmap-6667 deletion completed in 6.084077295s
• [SLOW TEST:10.221 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:37:49.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:37:54.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5750" for this suite.
Apr 7 13:38:44.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:38:44.132: INFO: namespace kubelet-test-5750 deletion completed in 50.106592839s
• [SLOW TEST:54.183 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox Pod with hostAliases
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:38:44.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 7 13:38:48.234: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:38:48.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8328" for this suite.
Apr 7 13:38:54.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:38:54.386: INFO: namespace container-runtime-8328 deletion completed in 6.111083404s
• [SLOW TEST:10.253 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:38:54.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 7 13:38:54.488: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 5.491065ms)
Apr 7 13:38:54.491: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.934481ms)
Apr 7 13:38:54.495: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.155363ms)
Apr 7 13:38:54.497: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.777552ms)
Apr 7 13:38:54.501: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.847462ms)
Apr 7 13:38:54.505: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.51216ms)
Apr 7 13:38:54.508: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.463074ms)
Apr 7 13:38:54.512: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.467372ms)
Apr 7 13:38:54.515: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.183666ms)
Apr 7 13:38:54.518: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.061879ms)
Apr 7 13:38:54.521: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.077816ms)
Apr 7 13:38:54.524: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.915009ms)
Apr 7 13:38:54.527: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.040525ms)
Apr 7 13:38:54.531: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.253733ms)
Apr 7 13:38:54.534: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.898305ms)
Apr 7 13:38:54.537: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.358248ms)
Apr 7 13:38:54.540: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.657879ms)
Apr 7 13:38:54.543: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.21128ms)
Apr 7 13:38:54.546: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.436765ms)
Apr 7 13:38:54.550: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/ (200; 3.484823ms)
[AfterEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:38:54.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7525" for this suite.
Apr 7 13:39:00.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:39:00.640: INFO: namespace proxy-7525 deletion completed in 6.087658005s
• [SLOW TEST:6.254 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:39:00.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 7 13:39:00.763: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Apr 7 13:39:00.770: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 7 13:39:00.775: INFO: Number of nodes with available pods: 0
Apr 7 13:39:00.775: INFO: Node iruya-worker is running more than one daemon pod
Apr 7 13:39:01.780: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 7 13:39:01.784: INFO: Number of nodes with available pods: 0
Apr 7 13:39:01.784: INFO: Node iruya-worker is running more than one daemon pod
Apr 7 13:39:02.780: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 7 13:39:02.784: INFO: Number of nodes with available pods: 0
Apr 7 13:39:02.784: INFO: Node iruya-worker is running more than one daemon pod
Apr 7 13:39:03.855: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 7 13:39:03.858: INFO: Number of nodes with available pods: 0
Apr 7 13:39:03.858: INFO: Node iruya-worker is running more than one daemon pod
Apr 7 13:39:04.781: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 7 13:39:04.785: INFO: Number of nodes with available pods: 1
Apr 7 13:39:04.785: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 7 13:39:05.783: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 7 13:39:05.786: INFO: Number of nodes with available pods: 2
Apr 7 13:39:05.786: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Apr 7 13:39:05.848: INFO: Wrong image for pod: daemon-set-2ghjc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 7 13:39:05.848: INFO: Wrong image for pod: daemon-set-swwm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 7 13:39:05.855: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 7 13:39:06.859: INFO: Wrong image for pod: daemon-set-2ghjc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 7 13:39:06.859: INFO: Wrong image for pod: daemon-set-swwm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 7 13:39:06.862: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 7 13:39:07.859: INFO: Wrong image for pod: daemon-set-2ghjc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 7 13:39:07.859: INFO: Wrong image for pod: daemon-set-swwm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 7 13:39:07.863: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 7 13:39:08.860: INFO: Wrong image for pod: daemon-set-2ghjc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 7 13:39:08.860: INFO: Pod daemon-set-2ghjc is not available
Apr 7 13:39:08.860: INFO: Wrong image for pod: daemon-set-swwm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 7 13:39:08.863: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 7 13:39:09.859: INFO: Wrong image for pod: daemon-set-2ghjc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 7 13:39:09.859: INFO: Pod daemon-set-2ghjc is not available
Apr 7 13:39:09.859: INFO: Wrong image for pod: daemon-set-swwm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 7 13:39:09.862: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 7 13:39:10.860: INFO: Wrong image for pod: daemon-set-2ghjc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 7 13:39:10.860: INFO: Pod daemon-set-2ghjc is not available
Apr 7 13:39:10.860: INFO: Wrong image for pod: daemon-set-swwm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 7 13:39:10.864: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 7 13:39:11.869: INFO: Wrong image for pod: daemon-set-2ghjc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 7 13:39:11.870: INFO: Pod daemon-set-2ghjc is not available
Apr 7 13:39:11.870: INFO: Wrong image for pod: daemon-set-swwm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 7 13:39:11.882: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 7 13:39:12.859: INFO: Pod daemon-set-s77dc is not available
Apr 7 13:39:12.859: INFO: Wrong image for pod: daemon-set-swwm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 7 13:39:12.863: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 7 13:39:13.859: INFO: Pod daemon-set-s77dc is not available
Apr 7 13:39:13.859: INFO: Wrong image for pod: daemon-set-swwm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 7 13:39:13.862: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 7 13:39:14.858: INFO: Wrong image for pod: daemon-set-swwm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 7 13:39:14.861: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 7 13:39:15.859: INFO: Wrong image for pod: daemon-set-swwm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 7 13:39:15.859: INFO: Pod daemon-set-swwm7 is not available
Apr 7 13:39:15.862: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 7 13:39:16.859: INFO: Pod daemon-set-hmxdf is not available
Apr 7 13:39:16.863: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Apr 7 13:39:16.867: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 7 13:39:16.871: INFO: Number of nodes with available pods: 1
Apr 7 13:39:16.871: INFO: Node iruya-worker is running more than one daemon pod
Apr 7 13:39:17.981: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 7 13:39:17.984: INFO: Number of nodes with available pods: 1
Apr 7 13:39:17.984: INFO: Node iruya-worker is running more than one daemon pod
Apr 7 13:39:18.876: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 7 13:39:18.880: INFO: Number of nodes with available pods: 1
Apr 7 13:39:18.880: INFO: Node iruya-worker is running more than one daemon pod
Apr 7 13:39:19.875: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 7 13:39:19.878: INFO: Number of nodes with available pods: 2
Apr 7 13:39:19.878: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9202, will wait for the garbage collector to delete the pods
Apr 7 13:39:19.951: INFO: Deleting DaemonSet.extensions daemon-set took: 6.595879ms
Apr 7 13:39:20.252: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.280932ms
Apr 7 13:39:32.273: INFO: Number of nodes with available pods: 0
Apr 7 13:39:32.273: INFO: Number of running nodes: 0, number of available pods: 0
Apr 7 13:39:32.276: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9202/daemonsets","resourceVersion":"4127781"},"items":null}
Apr 7 13:39:32.279: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9202/pods","resourceVersion":"4127781"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:39:32.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9202" for this suite.
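The DaemonSet exercised above can be approximated by a manifest like the following. This is a minimal sketch, not the exact object the e2e framework builds: the `daemonset-name` label and container name are illustrative, while the resource name, images, and the `RollingUpdate` strategy mirror the log.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set   # illustrative label; the real test uses its own selector
  updateStrategy:
    type: RollingUpdate            # replaces pods in place when the template changes
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```

Updating `spec.template.spec.containers[0].image` to `gcr.io/kubernetes-e2e-test-images/redis:1.0` reproduces the rollout the log polls for: old pods are deleted and recreated node by node, which is why individual pods transiently report "is not available" while the update progresses.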
Apr 7 13:39:38.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:39:38.385: INFO: namespace daemonsets-9202 deletion completed in 6.093607016s
• [SLOW TEST:37.744 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:39:38.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-90cb0201-1559-4fc6-a3ef-32655e8f1260 in namespace container-probe-9880
Apr 7 13:39:42.463: INFO: Started pod liveness-90cb0201-1559-4fc6-a3ef-32655e8f1260 in namespace container-probe-9880
STEP: checking the pod's current state and verifying that restartCount is present
Apr 7 13:39:42.465: INFO: Initial restart count of pod liveness-90cb0201-1559-4fc6-a3ef-32655e8f1260 is 0
Apr 7 13:40:02.530: INFO: Restart count of pod container-probe-9880/liveness-90cb0201-1559-4fc6-a3ef-32655e8f1260 is now 1 (20.064641136s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:40:02.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9880" for this suite.
Apr 7 13:40:08.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:40:08.644: INFO: namespace container-probe-9880 deletion completed in 6.089869683s
• [SLOW TEST:30.259 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:40:08.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Apr 7 13:40:08.700: INFO: Waiting up to 5m0s for pod "pod-218cbfd8-b329-407c-a7cf-dbcc2258f5a4" in namespace "emptydir-1531" to be "success or failure"
Apr 7 13:40:08.718: INFO: Pod "pod-218cbfd8-b329-407c-a7cf-dbcc2258f5a4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.584809ms
Apr 7 13:40:10.723: INFO: Pod "pod-218cbfd8-b329-407c-a7cf-dbcc2258f5a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022595408s
Apr 7 13:40:12.726: INFO: Pod "pod-218cbfd8-b329-407c-a7cf-dbcc2258f5a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026310575s
STEP: Saw pod success
Apr 7 13:40:12.726: INFO: Pod "pod-218cbfd8-b329-407c-a7cf-dbcc2258f5a4" satisfied condition "success or failure"
Apr 7 13:40:12.729: INFO: Trying to get logs from node iruya-worker pod pod-218cbfd8-b329-407c-a7cf-dbcc2258f5a4 container test-container:
STEP: delete the pod
Apr 7 13:40:12.753: INFO: Waiting for pod pod-218cbfd8-b329-407c-a7cf-dbcc2258f5a4 to disappear
Apr 7 13:40:12.757: INFO: Pod pod-218cbfd8-b329-407c-a7cf-dbcc2258f5a4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:40:12.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1531" for this suite.
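A pod equivalent to the emptydir-volume-mode scenario above can be sketched as follows. The names are hypothetical; the shape of the spec (an `emptyDir` volume on the default medium, with a container that inspects the mount's mode and exits) follows the test's description.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-test     # hypothetical name
spec:
  restartPolicy: Never          # pod runs once, then the framework checks for Succeeded
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]   # prints the mount point's mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                # default medium: backed by node storage, not tmpfs
```

The test then reads the container's logs (the `Trying to get logs` line above) and asserts the printed mode is the expected default for an emptyDir mount.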
Apr 7 13:40:18.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:40:18.849: INFO: namespace emptydir-1531 deletion completed in 6.088249081s
• [SLOW TEST:10.204 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:40:18.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-73914aaf-d71f-41ae-8336-b21d7bed6e34
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:40:18.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1096" for this suite.
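The empty-secret-key scenario above corresponds to submitting a Secret whose `data` map contains an empty key, which the API server rejects at validation time. A hedged sketch (the name and value are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-test   # illustrative name; the test appends a UUID
data:
  "": dmFsdWUtMQ==             # empty key: rejected by API server validation
```

Creating this object fails immediately with a validation error, so the test needs no pod; it only asserts that the create call is refused.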
Apr 7 13:40:24.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:40:24.998: INFO: namespace secrets-1096 deletion completed in 6.082883411s
• [SLOW TEST:6.149 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:40:24.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 7 13:40:25.084: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8102628-3e9d-4985-9003-6418b1cad659" in namespace "downward-api-9671" to be "success or failure"
Apr 7 13:40:25.089: INFO: Pod "downwardapi-volume-a8102628-3e9d-4985-9003-6418b1cad659": Phase="Pending", Reason="", readiness=false. Elapsed: 4.450231ms
Apr 7 13:40:27.092: INFO: Pod "downwardapi-volume-a8102628-3e9d-4985-9003-6418b1cad659": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008225998s
Apr 7 13:40:29.097: INFO: Pod "downwardapi-volume-a8102628-3e9d-4985-9003-6418b1cad659": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013003486s
STEP: Saw pod success
Apr 7 13:40:29.097: INFO: Pod "downwardapi-volume-a8102628-3e9d-4985-9003-6418b1cad659" satisfied condition "success or failure"
Apr 7 13:40:29.100: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a8102628-3e9d-4985-9003-6418b1cad659 container client-container:
STEP: delete the pod
Apr 7 13:40:29.130: INFO: Waiting for pod downwardapi-volume-a8102628-3e9d-4985-9003-6418b1cad659 to disappear
Apr 7 13:40:29.160: INFO: Pod downwardapi-volume-a8102628-3e9d-4985-9003-6418b1cad659 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:40:29.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9671" for this suite.
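The "should set mode on item file" scenario above centers on the per-item `mode` field of a `downwardAPI` volume. A minimal sketch, assuming illustrative names and a `0400` mode (the test's exact mode value is not shown in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]   # prints the file's mode
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        mode: 0400                # per-item mode: the property this test asserts
        fieldRef:
          fieldPath: metadata.name
```

Setting `mode` on an individual item overrides the volume's `defaultMode` for that file only, and the test verifies the projected file carries exactly that mode.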
Apr 7 13:40:35.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:40:35.270: INFO: namespace downward-api-9671 deletion completed in 6.106072232s
• [SLOW TEST:10.270 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:40:35.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:40:39.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3780" for this suite.
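The Kubelet log test above runs a busybox command in a pod and checks that its stdout shows up in the container's logs. A sketch under assumed names (the command string the real test uses is not visible in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-test      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo 'Hello World'"]   # output captured by the kubelet
```

After the container runs, the same output should be retrievable via the logs subresource (e.g. `kubectl logs busybox-logs-test`), which is what the test asserts.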
Apr 7 13:41:17.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:41:17.543: INFO: namespace kubelet-test-3780 deletion completed in 38.116366359s
• [SLOW TEST:42.273 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:41:17.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-281ae5e8-2b6e-4757-8bc2-6bb0d3b1f3a4
STEP: Creating a pod to test consume configMaps
Apr 7 13:41:17.624: INFO: Waiting up to 5m0s for pod "pod-configmaps-03a48b00-50d9-4200-8f1e-8757ae07c9a7" in namespace "configmap-7297" to be "success or failure"
Apr 7 13:41:17.635: INFO: Pod "pod-configmaps-03a48b00-50d9-4200-8f1e-8757ae07c9a7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.205162ms
Apr 7 13:41:19.639: INFO: Pod "pod-configmaps-03a48b00-50d9-4200-8f1e-8757ae07c9a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014295321s
Apr 7 13:41:21.643: INFO: Pod "pod-configmaps-03a48b00-50d9-4200-8f1e-8757ae07c9a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018888889s
STEP: Saw pod success
Apr 7 13:41:21.643: INFO: Pod "pod-configmaps-03a48b00-50d9-4200-8f1e-8757ae07c9a7" satisfied condition "success or failure"
Apr 7 13:41:21.647: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-03a48b00-50d9-4200-8f1e-8757ae07c9a7 container configmap-volume-test:
STEP: delete the pod
Apr 7 13:41:21.702: INFO: Waiting for pod pod-configmaps-03a48b00-50d9-4200-8f1e-8757ae07c9a7 to disappear
Apr 7 13:41:21.712: INFO: Pod pod-configmaps-03a48b00-50d9-4200-8f1e-8757ae07c9a7 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:41:21.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7297" for this suite.
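The ConfigMap `defaultMode` scenario above can be approximated with a ConfigMap plus a consuming pod. A hedged sketch: the names, key, and the `0400` mode are illustrative (the log does not show the exact values), but the structure, a `configMap` volume with `defaultMode` set and a container that inspects the projected file, matches the test's description.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume   # illustrative; the test appends a UUID
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/configmap-volume/data-1"]   # prints the file's mode
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
      defaultMode: 0400   # applies to every projected key unless an item overrides it
```

Unlike the Downward API test earlier, which sets `mode` per item, `defaultMode` here governs all files projected from the ConfigMap.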
Apr 7 13:41:27.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:41:27.810: INFO: namespace configmap-7297 deletion completed in 6.09383754s
• [SLOW TEST:10.267 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:41:27.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-fbjg
STEP: Creating a pod to test atomic-volume-subpath
Apr 7 13:41:27.906: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-fbjg" in namespace "subpath-1305" to be "success or failure"
Apr 7 13:41:27.938: INFO: Pod "pod-subpath-test-configmap-fbjg": Phase="Pending", Reason="", readiness=false. Elapsed: 31.527308ms
Apr 7 13:41:29.941: INFO: Pod "pod-subpath-test-configmap-fbjg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034967225s
Apr 7 13:41:31.945: INFO: Pod "pod-subpath-test-configmap-fbjg": Phase="Running", Reason="", readiness=true. Elapsed: 4.039165082s
Apr 7 13:41:33.950: INFO: Pod "pod-subpath-test-configmap-fbjg": Phase="Running", Reason="", readiness=true. Elapsed: 6.043706081s
Apr 7 13:41:35.954: INFO: Pod "pod-subpath-test-configmap-fbjg": Phase="Running", Reason="", readiness=true. Elapsed: 8.048117248s
Apr 7 13:41:37.958: INFO: Pod "pod-subpath-test-configmap-fbjg": Phase="Running", Reason="", readiness=true. Elapsed: 10.052089833s
Apr 7 13:41:39.962: INFO: Pod "pod-subpath-test-configmap-fbjg": Phase="Running", Reason="", readiness=true. Elapsed: 12.056220227s
Apr 7 13:41:41.966: INFO: Pod "pod-subpath-test-configmap-fbjg": Phase="Running", Reason="", readiness=true. Elapsed: 14.060462502s
Apr 7 13:41:43.971: INFO: Pod "pod-subpath-test-configmap-fbjg": Phase="Running", Reason="", readiness=true. Elapsed: 16.064855467s
Apr 7 13:41:45.976: INFO: Pod "pod-subpath-test-configmap-fbjg": Phase="Running", Reason="", readiness=true. Elapsed: 18.069626934s
Apr 7 13:41:47.980: INFO: Pod "pod-subpath-test-configmap-fbjg": Phase="Running", Reason="", readiness=true. Elapsed: 20.074088873s
Apr 7 13:41:49.985: INFO: Pod "pod-subpath-test-configmap-fbjg": Phase="Running", Reason="", readiness=true. Elapsed: 22.078800084s
Apr 7 13:41:51.989: INFO: Pod "pod-subpath-test-configmap-fbjg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.083378581s
STEP: Saw pod success
Apr 7 13:41:51.989: INFO: Pod "pod-subpath-test-configmap-fbjg" satisfied condition "success or failure"
Apr 7 13:41:51.993: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-fbjg container test-container-subpath-configmap-fbjg:
STEP: delete the pod
Apr 7 13:41:52.014: INFO: Waiting for pod pod-subpath-test-configmap-fbjg to disappear
Apr 7 13:41:52.016: INFO: Pod pod-subpath-test-configmap-fbjg no longer exists
STEP: Deleting pod pod-subpath-test-configmap-fbjg
Apr 7 13:41:52.016: INFO: Deleting pod "pod-subpath-test-configmap-fbjg" in namespace "subpath-1305"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:41:52.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1305" for this suite.
Apr 7 13:41:58.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:41:58.114: INFO: namespace subpath-1305 deletion completed in 6.092841115s
• [SLOW TEST:30.304 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:41:58.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-b0b9bef8-4416-4159-8bde-3ebe0c9994e8
STEP: Creating a pod to test consume secrets
Apr 7 13:41:58.209: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-36a30c94-2403-4b18-8421-3655ed5c6b27" in namespace "projected-3429" to be "success or failure"
Apr 7 13:41:58.214: INFO: Pod "pod-projected-secrets-36a30c94-2403-4b18-8421-3655ed5c6b27": Phase="Pending", Reason="", readiness=false. Elapsed: 5.120246ms
Apr 7 13:42:00.311: INFO: Pod "pod-projected-secrets-36a30c94-2403-4b18-8421-3655ed5c6b27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102580341s
Apr 7 13:42:02.315: INFO: Pod "pod-projected-secrets-36a30c94-2403-4b18-8421-3655ed5c6b27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.106407642s
STEP: Saw pod success
Apr 7 13:42:02.315: INFO: Pod "pod-projected-secrets-36a30c94-2403-4b18-8421-3655ed5c6b27" satisfied condition "success or failure"
Apr 7 13:42:02.318: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-36a30c94-2403-4b18-8421-3655ed5c6b27 container projected-secret-volume-test:
STEP: delete the pod
Apr 7 13:42:02.379: INFO: Waiting for pod pod-projected-secrets-36a30c94-2403-4b18-8421-3655ed5c6b27 to disappear
Apr 7 13:42:02.384: INFO: Pod pod-projected-secrets-36a30c94-2403-4b18-8421-3655ed5c6b27 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:42:02.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3429" for this suite.
Apr 7 13:42:08.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:42:08.503: INFO: namespace projected-3429 deletion completed in 6.116384321s
• [SLOW TEST:10.388 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:42:08.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 7 13:42:08.590: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8be26358-5732-4d75-b4f5-453d553306e6" in namespace "downward-api-9714" to be "success or failure"
Apr 7 13:42:08.611: INFO: Pod "downwardapi-volume-8be26358-5732-4d75-b4f5-453d553306e6": Phase="Pending", Reason="", readiness=false. Elapsed: 21.324093ms
Apr 7 13:42:10.615: INFO: Pod "downwardapi-volume-8be26358-5732-4d75-b4f5-453d553306e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024876492s
Apr 7 13:42:12.619: INFO: Pod "downwardapi-volume-8be26358-5732-4d75-b4f5-453d553306e6": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.029161368s STEP: Saw pod success Apr 7 13:42:12.619: INFO: Pod "downwardapi-volume-8be26358-5732-4d75-b4f5-453d553306e6" satisfied condition "success or failure" Apr 7 13:42:12.622: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8be26358-5732-4d75-b4f5-453d553306e6 container client-container: STEP: delete the pod Apr 7 13:42:12.660: INFO: Waiting for pod downwardapi-volume-8be26358-5732-4d75-b4f5-453d553306e6 to disappear Apr 7 13:42:12.688: INFO: Pod downwardapi-volume-8be26358-5732-4d75-b4f5-453d553306e6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:42:12.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9714" for this suite. Apr 7 13:42:18.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:42:18.782: INFO: namespace downward-api-9714 deletion completed in 6.089970805s • [SLOW TEST:10.278 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:42:18.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:42:24.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8847" for this suite. Apr 7 13:42:30.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:42:30.465: INFO: namespace watch-8847 deletion completed in 6.176172291s • [SLOW TEST:11.683 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:42:30.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable 
from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-6dae20ce-f94d-4bf4-a194-5a8ac9e921fd STEP: Creating a pod to test consume secrets Apr 7 13:42:30.540: INFO: Waiting up to 5m0s for pod "pod-secrets-eafad3a9-e985-4510-b01a-ec0bda43f212" in namespace "secrets-1798" to be "success or failure" Apr 7 13:42:30.544: INFO: Pod "pod-secrets-eafad3a9-e985-4510-b01a-ec0bda43f212": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322586ms Apr 7 13:42:32.549: INFO: Pod "pod-secrets-eafad3a9-e985-4510-b01a-ec0bda43f212": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0092511s Apr 7 13:42:34.552: INFO: Pod "pod-secrets-eafad3a9-e985-4510-b01a-ec0bda43f212": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012672626s STEP: Saw pod success Apr 7 13:42:34.553: INFO: Pod "pod-secrets-eafad3a9-e985-4510-b01a-ec0bda43f212" satisfied condition "success or failure" Apr 7 13:42:34.555: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-eafad3a9-e985-4510-b01a-ec0bda43f212 container secret-volume-test: STEP: delete the pod Apr 7 13:42:34.583: INFO: Waiting for pod pod-secrets-eafad3a9-e985-4510-b01a-ec0bda43f212 to disappear Apr 7 13:42:34.588: INFO: Pod pod-secrets-eafad3a9-e985-4510-b01a-ec0bda43f212 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:42:34.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1798" for this suite. 
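The Secrets test above mounts a secret into a pod with an item mapping and an explicit per-item file mode. The exact manifest is generated inside the e2e framework and is not shown in the log; a minimal sketch of that shape, with hypothetical names and data, might look like:

```yaml
# Hypothetical sketch (not the framework-generated manifest): a Secret
# mounted with an item-level path mapping and mode, as the test exercises.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map-example     # hypothetical name
data:
  data-1: dmFsdWUtMQ==              # base64 for "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example
      items:
      - key: data-1                 # secret key remapped to a new path
        path: new-path-data-1
        mode: 0400                  # the "Item Mode" the test name refers to
```

The pod reads the mapped file once and exits, which is why the log polls the pod phase until it reaches `Succeeded` ("success or failure").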
Apr 7 13:42:40.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:42:40.723: INFO: namespace secrets-1798 deletion completed in 6.131792776s • [SLOW TEST:10.258 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:42:40.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3370 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-3370 STEP: Creating statefulset with conflicting port in namespace statefulset-3370 STEP: Waiting until pod test-pod will start running 
in namespace statefulset-3370 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3370 Apr 7 13:42:44.872: INFO: Observed stateful pod in namespace: statefulset-3370, name: ss-0, uid: b102c33d-f0d9-4de2-8ea1-459a7b53266b, status phase: Pending. Waiting for statefulset controller to delete. Apr 7 13:42:45.418: INFO: Observed stateful pod in namespace: statefulset-3370, name: ss-0, uid: b102c33d-f0d9-4de2-8ea1-459a7b53266b, status phase: Failed. Waiting for statefulset controller to delete. Apr 7 13:42:45.427: INFO: Observed stateful pod in namespace: statefulset-3370, name: ss-0, uid: b102c33d-f0d9-4de2-8ea1-459a7b53266b, status phase: Failed. Waiting for statefulset controller to delete. Apr 7 13:42:45.431: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3370 STEP: Removing pod with conflicting port in namespace statefulset-3370 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3370 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 7 13:42:49.493: INFO: Deleting all statefulset in ns statefulset-3370 Apr 7 13:42:49.496: INFO: Scaling statefulset ss to 0 Apr 7 13:43:09.512: INFO: Waiting for statefulset status.replicas updated to 0 Apr 7 13:43:09.514: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:43:09.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3370" for this suite. 
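The StatefulSet test above deliberately provokes a scheduling conflict: a bare pod and the single stateful replica are pinned to the same node and request the same host port, so `ss-0` fails until the bare pod is removed. A sketch of the conflicting StatefulSet, with a hypothetical port number (the real value is chosen by the test), could be:

```yaml
# Hypothetical sketch of the conflicting StatefulSet: pinned to the node
# that already runs a pod holding the same hostPort, so ss-0 is recreated
# until the conflicting pod is deleted.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test                 # matches "Creating service test" in the log
  replicas: 1
  selector:
    matchLabels: {app: ss}
  template:
    metadata:
      labels: {app: ss}
    spec:
      nodeName: iruya-worker        # pinned to the node with the conflicting pod
      containers:
      - name: webserver
        image: nginx:1.14-alpine
        ports:
        - containerPort: 80
          hostPort: 21017           # hypothetical; collides with test-pod's hostPort
```

Because the controller keeps recreating the failed replica, the log observes `ss-0` cycling through `Pending` and `Failed` phases with the same UID sequence until the conflicting pod is gone.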
Apr 7 13:43:15.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:43:15.663: INFO: namespace statefulset-3370 deletion completed in 6.128863999s • [SLOW TEST:34.940 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:43:15.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 7 13:43:15.766: INFO: Create a RollingUpdate DaemonSet Apr 7 13:43:15.769: INFO: Check that daemon pods launch on every node of the cluster Apr 7 13:43:15.773: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 13:43:15.775: INFO: Number of nodes with available pods: 0 
Apr 7 13:43:15.775: INFO: Node iruya-worker is running more than one daemon pod Apr 7 13:43:16.798: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 13:43:16.801: INFO: Number of nodes with available pods: 0 Apr 7 13:43:16.801: INFO: Node iruya-worker is running more than one daemon pod Apr 7 13:43:17.781: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 13:43:17.785: INFO: Number of nodes with available pods: 0 Apr 7 13:43:17.785: INFO: Node iruya-worker is running more than one daemon pod Apr 7 13:43:18.804: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 13:43:18.808: INFO: Number of nodes with available pods: 0 Apr 7 13:43:18.808: INFO: Node iruya-worker is running more than one daemon pod Apr 7 13:43:19.780: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 13:43:19.784: INFO: Number of nodes with available pods: 1 Apr 7 13:43:19.784: INFO: Node iruya-worker is running more than one daemon pod Apr 7 13:43:20.781: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 13:43:20.785: INFO: Number of nodes with available pods: 2 Apr 7 13:43:20.785: INFO: Number of running nodes: 2, number of available pods: 2 Apr 7 13:43:20.785: INFO: Update the DaemonSet to trigger a rollout Apr 7 13:43:20.791: INFO: Updating DaemonSet daemon-set Apr 7 13:43:32.810: INFO: Roll back the DaemonSet before rollout is complete Apr 7 13:43:32.815: INFO: 
Updating DaemonSet daemon-set Apr 7 13:43:32.815: INFO: Make sure DaemonSet rollback is complete Apr 7 13:43:32.821: INFO: Wrong image for pod: daemon-set-zr4z5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Apr 7 13:43:32.821: INFO: Pod daemon-set-zr4z5 is not available Apr 7 13:43:32.827: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 13:43:33.831: INFO: Wrong image for pod: daemon-set-zr4z5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Apr 7 13:43:33.831: INFO: Pod daemon-set-zr4z5 is not available Apr 7 13:43:33.836: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 13:43:34.831: INFO: Wrong image for pod: daemon-set-zr4z5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Apr 7 13:43:34.831: INFO: Pod daemon-set-zr4z5 is not available Apr 7 13:43:34.834: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 13:43:35.839: INFO: Wrong image for pod: daemon-set-zr4z5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
Apr 7 13:43:35.839: INFO: Pod daemon-set-zr4z5 is not available Apr 7 13:43:35.846: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 13:43:36.832: INFO: Pod daemon-set-d24qx is not available Apr 7 13:43:36.837: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3118, will wait for the garbage collector to delete the pods Apr 7 13:43:36.902: INFO: Deleting DaemonSet.extensions daemon-set took: 6.826764ms Apr 7 13:43:37.202: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.268429ms Apr 7 13:43:42.205: INFO: Number of nodes with available pods: 0 Apr 7 13:43:42.205: INFO: Number of running nodes: 0, number of available pods: 0 Apr 7 13:43:42.207: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3118/daemonsets","resourceVersion":"4128867"},"items":null} Apr 7 13:43:42.210: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3118/pods","resourceVersion":"4128867"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:43:42.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3118" for this suite. 
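The DaemonSet test above rolls the set forward to an unpullable image (`foo:non-existent`) and then rolls it back before the rollout completes, asserting that pods which were never touched by the bad revision are not restarted. A sketch of such a RollingUpdate DaemonSet, with hypothetical label names, might be:

```yaml
# Hypothetical sketch of a RollingUpdate DaemonSet: updating .spec.template
# to a bad image and then reverting it should only replace the pods that
# already picked up the bad revision (daemon-set-zr4z5 in the log).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels: {name: daemon-set}  # hypothetical label key
  updateStrategy:
    type: RollingUpdate              # enables in-place rollout and rollback
  template:
    metadata:
      labels: {name: daemon-set}
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```

Outside the test harness, `kubectl rollout undo daemonset/daemon-set` performs the same rollback step the framework drives through the API.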
Apr 7 13:43:48.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:43:48.363: INFO: namespace daemonsets-3118 deletion completed in 6.141665086s • [SLOW TEST:32.700 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:43:48.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 7 13:43:48.455: INFO: Waiting up to 5m0s for pod "downward-api-4446fbc8-1e0e-416b-b554-626f6287020e" in namespace "downward-api-455" to be "success or failure" Apr 7 13:43:48.465: INFO: Pod "downward-api-4446fbc8-1e0e-416b-b554-626f6287020e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.708875ms Apr 7 13:43:50.469: INFO: Pod "downward-api-4446fbc8-1e0e-416b-b554-626f6287020e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01380341s Apr 7 13:43:52.473: INFO: Pod "downward-api-4446fbc8-1e0e-416b-b554-626f6287020e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018383136s STEP: Saw pod success Apr 7 13:43:52.474: INFO: Pod "downward-api-4446fbc8-1e0e-416b-b554-626f6287020e" satisfied condition "success or failure" Apr 7 13:43:52.477: INFO: Trying to get logs from node iruya-worker pod downward-api-4446fbc8-1e0e-416b-b554-626f6287020e container dapi-container: STEP: delete the pod Apr 7 13:43:52.536: INFO: Waiting for pod downward-api-4446fbc8-1e0e-416b-b554-626f6287020e to disappear Apr 7 13:43:52.545: INFO: Pod downward-api-4446fbc8-1e0e-416b-b554-626f6287020e no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:43:52.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-455" for this suite. Apr 7 13:43:58.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:43:58.664: INFO: namespace downward-api-455 deletion completed in 6.115160845s • [SLOW TEST:10.300 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:43:58.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7210.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7210.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 7 13:44:04.764: INFO: DNS probes using dns-7210/dns-test-cd7e4d93-44d1-4a90-b1e1-cd38df7e5766 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:44:04.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7210" for this suite. Apr 7 13:44:10.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:44:10.900: INFO: namespace dns-7210 deletion completed in 6.100913255s • [SLOW TEST:12.237 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:44:10.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default 
service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:44:37.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8773" for this suite. Apr 7 13:44:43.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:44:43.223: INFO: namespace namespaces-8773 deletion completed in 6.08297847s STEP: Destroying namespace "nsdeletetest-4750" for this suite. Apr 7 13:44:43.225: INFO: Namespace nsdeletetest-4750 was already deleted STEP: Destroying namespace "nsdeletetest-6308" for this suite. 
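The Namespaces test above relies on cascading deletion: removing a namespace garbage-collects every pod inside it, and a freshly recreated namespace of the same name starts empty. A minimal sketch of the setup, with hypothetical names, is:

```yaml
# Hypothetical sketch: a pod created inside a disposable namespace.
# Deleting the namespace cascades and removes the pod; recreating the
# namespace and listing pods should return nothing.
apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest-example        # hypothetical name
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: nsdeletetest-example
spec:
  containers:
  - name: webserver
    image: nginx
```

This is why the log waits first for the pod to be running, then for the namespace to disappear, and finally verifies the recreated namespace contains no pods.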
Apr 7 13:44:49.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:44:49.391: INFO: namespace nsdeletetest-6308 deletion completed in 6.165957146s • [SLOW TEST:38.490 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:44:49.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 7 13:44:49.479: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 7 13:44:54.484: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 7 13:44:54.484: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 7 13:44:56.488: INFO: Creating deployment "test-rollover-deployment" Apr 7 13:44:56.499: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 7 13:44:58.506: INFO: Check revision of new replica set for deployment 
"test-rollover-deployment" Apr 7 13:44:58.513: INFO: Ensure that both replica sets have 1 created replica Apr 7 13:44:58.519: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 7 13:44:58.525: INFO: Updating deployment test-rollover-deployment Apr 7 13:44:58.525: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 7 13:45:00.558: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 7 13:45:00.564: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 7 13:45:00.571: INFO: all replica sets need to contain the pod-template-hash label Apr 7 13:45:00.571: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863896, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863896, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863898, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863896, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 7 13:45:02.579: INFO: all replica sets need to contain the pod-template-hash label Apr 7 13:45:02.579: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863896, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863896, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863902, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863896, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 7 13:45:04.579: INFO: all replica sets need to contain the pod-template-hash label Apr 7 13:45:04.580: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863896, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863896, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863902, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863896, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 7 13:45:06.579: INFO: all replica sets need to contain the pod-template-hash label Apr 7 13:45:06.580: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863896, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863896, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863902, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863896, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 7 13:45:08.580: INFO: all replica sets need to contain the pod-template-hash label Apr 7 13:45:08.580: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863896, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863896, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863902, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863896, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 7 13:45:10.579: INFO: all replica sets need to contain the pod-template-hash label Apr 7 13:45:10.580: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, 
Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863896, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863896, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863902, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721863896, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 7 13:45:12.579: INFO: Apr 7 13:45:12.579: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 7 13:45:12.586: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-4635,SelfLink:/apis/apps/v1/namespaces/deployment-4635/deployments/test-rollover-deployment,UID:d2d4bd97-e087-46dd-9e63-782dac8bf9b3,ResourceVersion:4129252,Generation:2,CreationTimestamp:2020-04-07 13:44:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-07 13:44:56 +0000 UTC 2020-04-07 
13:44:56 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-07 13:45:12 +0000 UTC 2020-04-07 13:44:56 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 7 13:45:12.589: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-4635,SelfLink:/apis/apps/v1/namespaces/deployment-4635/replicasets/test-rollover-deployment-854595fc44,UID:5b84b22d-175d-48fd-96e5-309919464e38,ResourceVersion:4129241,Generation:2,CreationTimestamp:2020-04-07 13:44:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d2d4bd97-e087-46dd-9e63-782dac8bf9b3 0xc0025daff7 0xc0025daff8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 7 13:45:12.589: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 7 13:45:12.589: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-4635,SelfLink:/apis/apps/v1/namespaces/deployment-4635/replicasets/test-rollover-controller,UID:0f983714-5808-4b9c-9e2f-b04375367e95,ResourceVersion:4129251,Generation:2,CreationTimestamp:2020-04-07 13:44:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 
d2d4bd97-e087-46dd-9e63-782dac8bf9b3 0xc0025daf27 0xc0025daf28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 7 13:45:12.589: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-4635,SelfLink:/apis/apps/v1/namespaces/deployment-4635/replicasets/test-rollover-deployment-9b8b997cf,UID:a9f980f4-d441-4620-b5ab-aeece64d0864,ResourceVersion:4129207,Generation:2,CreationTimestamp:2020-04-07 13:44:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d2d4bd97-e087-46dd-9e63-782dac8bf9b3 0xc0025db0c0 0xc0025db0c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 7 13:45:12.592: INFO: Pod "test-rollover-deployment-854595fc44-gwkc6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-gwkc6,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-4635,SelfLink:/api/v1/namespaces/deployment-4635/pods/test-rollover-deployment-854595fc44-gwkc6,UID:d3e930bc-d345-41c1-b096-c51845758edb,ResourceVersion:4129219,Generation:0,CreationTimestamp:2020-04-07 13:44:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 5b84b22d-175d-48fd-96e5-309919464e38 0xc002f973b7 0xc002f973b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r9zdx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9zdx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-r9zdx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f97430} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f97450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 13:44:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 13:45:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 13:45:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 13:44:58 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.64,StartTime:2020-04-07 13:44:58 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-07 13:45:00 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://9de46a4bbd3df6b0ed2efa8e3d7b351bf1691e49135fc2aac8087d71737d74b2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:45:12.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4635" for this suite. Apr 7 13:45:18.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:45:18.687: INFO: namespace deployment-4635 deletion completed in 6.09210316s • [SLOW TEST:29.296 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:45:18.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:46:18.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4373" for this suite. Apr 7 13:46:40.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:46:40.855: INFO: namespace container-probe-4373 deletion completed in 22.110718897s • [SLOW TEST:82.168 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:46:40.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 7 13:46:40.939: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:46:47.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6711" for this suite. Apr 7 13:46:53.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:46:53.312: INFO: namespace init-container-6711 deletion completed in 6.161753634s • [SLOW TEST:12.458 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:46:53.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on 
node default medium Apr 7 13:46:53.393: INFO: Waiting up to 5m0s for pod "pod-fd6f88ff-0152-4c2d-9501-ae41f8bb62d4" in namespace "emptydir-2897" to be "success or failure" Apr 7 13:46:53.397: INFO: Pod "pod-fd6f88ff-0152-4c2d-9501-ae41f8bb62d4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.751998ms Apr 7 13:46:55.401: INFO: Pod "pod-fd6f88ff-0152-4c2d-9501-ae41f8bb62d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008285141s Apr 7 13:46:57.405: INFO: Pod "pod-fd6f88ff-0152-4c2d-9501-ae41f8bb62d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012289328s STEP: Saw pod success Apr 7 13:46:57.405: INFO: Pod "pod-fd6f88ff-0152-4c2d-9501-ae41f8bb62d4" satisfied condition "success or failure" Apr 7 13:46:57.408: INFO: Trying to get logs from node iruya-worker pod pod-fd6f88ff-0152-4c2d-9501-ae41f8bb62d4 container test-container: STEP: delete the pod Apr 7 13:46:57.428: INFO: Waiting for pod pod-fd6f88ff-0152-4c2d-9501-ae41f8bb62d4 to disappear Apr 7 13:46:57.438: INFO: Pod pod-fd6f88ff-0152-4c2d-9501-ae41f8bb62d4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:46:57.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2897" for this suite. 
Apr 7 13:47:03.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:47:03.541: INFO: namespace emptydir-2897 deletion completed in 6.100236667s • [SLOW TEST:10.228 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:47:03.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 7 13:47:03.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1560' Apr 7 13:47:05.964: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 7 13:47:05.964: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Apr 7 13:47:07.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1560' Apr 7 13:47:08.326: INFO: stderr: "" Apr 7 13:47:08.326: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:47:08.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1560" for this suite. Apr 7 13:49:10.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:49:10.418: INFO: namespace kubectl-1560 deletion completed in 2m2.08830579s • [SLOW TEST:126.877 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:49:10.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-c13dcb5c-80b6-41dd-972f-41d13f48940f STEP: Creating a pod to test consume configMaps Apr 7 13:49:10.480: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e5b4aff5-9e92-4d26-a016-26f26853f2e5" in namespace "projected-4736" to be "success or failure" Apr 7 13:49:10.483: INFO: Pod "pod-projected-configmaps-e5b4aff5-9e92-4d26-a016-26f26853f2e5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.256285ms Apr 7 13:49:12.487: INFO: Pod "pod-projected-configmaps-e5b4aff5-9e92-4d26-a016-26f26853f2e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007433742s Apr 7 13:49:14.491: INFO: Pod "pod-projected-configmaps-e5b4aff5-9e92-4d26-a016-26f26853f2e5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011564018s STEP: Saw pod success Apr 7 13:49:14.491: INFO: Pod "pod-projected-configmaps-e5b4aff5-9e92-4d26-a016-26f26853f2e5" satisfied condition "success or failure" Apr 7 13:49:14.494: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-e5b4aff5-9e92-4d26-a016-26f26853f2e5 container projected-configmap-volume-test: STEP: delete the pod Apr 7 13:49:14.514: INFO: Waiting for pod pod-projected-configmaps-e5b4aff5-9e92-4d26-a016-26f26853f2e5 to disappear Apr 7 13:49:14.519: INFO: Pod pod-projected-configmaps-e5b4aff5-9e92-4d26-a016-26f26853f2e5 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:49:14.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4736" for this suite. Apr 7 13:49:20.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:49:20.657: INFO: namespace projected-4736 deletion completed in 6.135873178s • [SLOW TEST:10.239 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:49:20.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 7 13:49:23.743: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:49:23.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5263" for this suite. 
Apr 7 13:49:29.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:49:29.879: INFO: namespace container-runtime-5263 deletion completed in 6.097232924s • [SLOW TEST:9.221 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:49:29.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-1117 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1117 to expose endpoints map[] Apr 7 13:49:29.975: INFO: Get endpoints failed 
(4.639394ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Apr 7 13:49:30.979: INFO: successfully validated that service endpoint-test2 in namespace services-1117 exposes endpoints map[] (1.008739391s elapsed) STEP: Creating pod pod1 in namespace services-1117 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1117 to expose endpoints map[pod1:[80]] Apr 7 13:49:34.011: INFO: successfully validated that service endpoint-test2 in namespace services-1117 exposes endpoints map[pod1:[80]] (3.024886355s elapsed) STEP: Creating pod pod2 in namespace services-1117 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1117 to expose endpoints map[pod1:[80] pod2:[80]] Apr 7 13:49:37.123: INFO: successfully validated that service endpoint-test2 in namespace services-1117 exposes endpoints map[pod1:[80] pod2:[80]] (3.087523392s elapsed) STEP: Deleting pod pod1 in namespace services-1117 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1117 to expose endpoints map[pod2:[80]] Apr 7 13:49:38.168: INFO: successfully validated that service endpoint-test2 in namespace services-1117 exposes endpoints map[pod2:[80]] (1.039671216s elapsed) STEP: Deleting pod pod2 in namespace services-1117 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1117 to expose endpoints map[] Apr 7 13:49:39.304: INFO: successfully validated that service endpoint-test2 in namespace services-1117 exposes endpoints map[] (1.132092039s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:49:39.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1117" for this suite. 
Apr 7 13:49:45.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:49:45.436: INFO: namespace services-1117 deletion completed in 6.08920413s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:15.557 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:49:45.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Apr 7 13:49:49.531: INFO: Pod pod-hostip-8cc272c0-3551-4b2f-a590-785dbb4d74bb has hostIP: 172.17.0.6 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:49:49.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7217" for this suite. 
Apr 7 13:50:11.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:50:11.634: INFO: namespace pods-7217 deletion completed in 22.099237004s • [SLOW TEST:26.198 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:50:11.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-fd7b304e-b0df-47f1-a4d2-86bb9e670b19 STEP: Creating a pod to test consume secrets Apr 7 13:50:11.697: INFO: Waiting up to 5m0s for pod "pod-secrets-c0d3d665-c556-42ce-94bc-83b8a83fecb1" in namespace "secrets-1978" to be "success or failure" Apr 7 13:50:11.719: INFO: Pod "pod-secrets-c0d3d665-c556-42ce-94bc-83b8a83fecb1": Phase="Pending", Reason="", readiness=false. Elapsed: 22.000278ms Apr 7 13:50:13.749: INFO: Pod "pod-secrets-c0d3d665-c556-42ce-94bc-83b8a83fecb1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.052001173s Apr 7 13:50:15.753: INFO: Pod "pod-secrets-c0d3d665-c556-42ce-94bc-83b8a83fecb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056124476s STEP: Saw pod success Apr 7 13:50:15.753: INFO: Pod "pod-secrets-c0d3d665-c556-42ce-94bc-83b8a83fecb1" satisfied condition "success or failure" Apr 7 13:50:15.756: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-c0d3d665-c556-42ce-94bc-83b8a83fecb1 container secret-volume-test: STEP: delete the pod Apr 7 13:50:15.773: INFO: Waiting for pod pod-secrets-c0d3d665-c556-42ce-94bc-83b8a83fecb1 to disappear Apr 7 13:50:15.778: INFO: Pod pod-secrets-c0d3d665-c556-42ce-94bc-83b8a83fecb1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:50:15.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1978" for this suite. Apr 7 13:50:21.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:50:21.888: INFO: namespace secrets-1978 deletion completed in 6.106699593s • [SLOW TEST:10.253 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:50:21.888: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:50:21.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-671" for this suite. Apr 7 13:50:27.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:50:28.062: INFO: namespace services-671 deletion completed in 6.088925133s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.174 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:50:28.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 7 13:50:32.180: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:50:32.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8953" for this suite. Apr 7 13:50:38.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:50:38.346: INFO: namespace container-runtime-8953 deletion completed in 6.115337748s • [SLOW TEST:10.283 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:50:38.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-fc751b87-4160-4cc2-a3e6-550f71cd94d0 STEP: Creating configMap with name cm-test-opt-upd-cfc3f666-e06e-457a-93a5-ee1cb5f69171 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-fc751b87-4160-4cc2-a3e6-550f71cd94d0 STEP: Updating configmap cm-test-opt-upd-cfc3f666-e06e-457a-93a5-ee1cb5f69171 STEP: Creating configMap with name cm-test-opt-create-bdc04ecc-a910-4899-84ff-a671defaa200 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:50:46.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8652" for this suite. 
Apr 7 13:51:08.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:51:08.639: INFO: namespace configmap-8652 deletion completed in 22.091494731s • [SLOW TEST:30.293 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:51:08.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-d3974229-f7e2-4167-853e-13a269383ad7 Apr 7 13:51:08.725: INFO: Pod name my-hostname-basic-d3974229-f7e2-4167-853e-13a269383ad7: Found 0 pods out of 1 Apr 7 13:51:13.729: INFO: Pod name my-hostname-basic-d3974229-f7e2-4167-853e-13a269383ad7: Found 1 pods out of 1 Apr 7 13:51:13.729: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-d3974229-f7e2-4167-853e-13a269383ad7" are running Apr 7 13:51:13.732: INFO: Pod "my-hostname-basic-d3974229-f7e2-4167-853e-13a269383ad7-48526" is running (conditions: [{Type:Initialized Status:True 
LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-07 13:51:08 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-07 13:51:11 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-07 13:51:11 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-07 13:51:08 +0000 UTC Reason: Message:}]) Apr 7 13:51:13.732: INFO: Trying to dial the pod Apr 7 13:51:18.744: INFO: Controller my-hostname-basic-d3974229-f7e2-4167-853e-13a269383ad7: Got expected result from replica 1 [my-hostname-basic-d3974229-f7e2-4167-853e-13a269383ad7-48526]: "my-hostname-basic-d3974229-f7e2-4167-853e-13a269383ad7-48526", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:51:18.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-615" for this suite. 
Apr 7 13:51:24.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:51:24.838: INFO: namespace replication-controller-615 deletion completed in 6.090786214s • [SLOW TEST:16.198 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:51:24.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-ae09155d-bc37-40cb-ba31-38dfcaddbdfe STEP: Creating a pod to test consume secrets Apr 7 13:51:24.919: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-55c32368-5f34-41d2-bcf2-beea1f23466a" in namespace "projected-6635" to be "success or failure" Apr 7 13:51:24.936: INFO: Pod "pod-projected-secrets-55c32368-5f34-41d2-bcf2-beea1f23466a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.203331ms Apr 7 13:51:26.939: INFO: Pod "pod-projected-secrets-55c32368-5f34-41d2-bcf2-beea1f23466a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019829415s Apr 7 13:51:28.943: INFO: Pod "pod-projected-secrets-55c32368-5f34-41d2-bcf2-beea1f23466a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023636539s STEP: Saw pod success Apr 7 13:51:28.943: INFO: Pod "pod-projected-secrets-55c32368-5f34-41d2-bcf2-beea1f23466a" satisfied condition "success or failure" Apr 7 13:51:28.946: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-55c32368-5f34-41d2-bcf2-beea1f23466a container secret-volume-test: STEP: delete the pod Apr 7 13:51:29.057: INFO: Waiting for pod pod-projected-secrets-55c32368-5f34-41d2-bcf2-beea1f23466a to disappear Apr 7 13:51:29.061: INFO: Pod pod-projected-secrets-55c32368-5f34-41d2-bcf2-beea1f23466a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:51:29.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6635" for this suite. 
Apr 7 13:51:35.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:51:35.152: INFO: namespace projected-6635 deletion completed in 6.087351489s • [SLOW TEST:10.314 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:51:35.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 7 13:51:45.254: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8474 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 7 13:51:45.254: INFO: >>> kubeConfig: /root/.kube/config I0407 13:51:45.289029 6 log.go:172] (0xc00122c8f0) 
(0xc0033fa280) Create stream I0407 13:51:45.289062 6 log.go:172] (0xc00122c8f0) (0xc0033fa280) Stream added, broadcasting: 1 I0407 13:51:45.291690 6 log.go:172] (0xc00122c8f0) Reply frame received for 1 I0407 13:51:45.291735 6 log.go:172] (0xc00122c8f0) (0xc001fe0000) Create stream I0407 13:51:45.291752 6 log.go:172] (0xc00122c8f0) (0xc001fe0000) Stream added, broadcasting: 3 I0407 13:51:45.292735 6 log.go:172] (0xc00122c8f0) Reply frame received for 3 I0407 13:51:45.292768 6 log.go:172] (0xc00122c8f0) (0xc002212aa0) Create stream I0407 13:51:45.292798 6 log.go:172] (0xc00122c8f0) (0xc002212aa0) Stream added, broadcasting: 5 I0407 13:51:45.293941 6 log.go:172] (0xc00122c8f0) Reply frame received for 5 I0407 13:51:45.358562 6 log.go:172] (0xc00122c8f0) Data frame received for 5 I0407 13:51:45.358605 6 log.go:172] (0xc002212aa0) (5) Data frame handling I0407 13:51:45.358650 6 log.go:172] (0xc00122c8f0) Data frame received for 3 I0407 13:51:45.358702 6 log.go:172] (0xc001fe0000) (3) Data frame handling I0407 13:51:45.358730 6 log.go:172] (0xc001fe0000) (3) Data frame sent I0407 13:51:45.358750 6 log.go:172] (0xc00122c8f0) Data frame received for 3 I0407 13:51:45.358768 6 log.go:172] (0xc001fe0000) (3) Data frame handling I0407 13:51:45.360334 6 log.go:172] (0xc00122c8f0) Data frame received for 1 I0407 13:51:45.360367 6 log.go:172] (0xc0033fa280) (1) Data frame handling I0407 13:51:45.360402 6 log.go:172] (0xc0033fa280) (1) Data frame sent I0407 13:51:45.360429 6 log.go:172] (0xc00122c8f0) (0xc0033fa280) Stream removed, broadcasting: 1 I0407 13:51:45.360464 6 log.go:172] (0xc00122c8f0) Go away received I0407 13:51:45.360552 6 log.go:172] (0xc00122c8f0) (0xc0033fa280) Stream removed, broadcasting: 1 I0407 13:51:45.360568 6 log.go:172] (0xc00122c8f0) (0xc001fe0000) Stream removed, broadcasting: 3 I0407 13:51:45.360575 6 log.go:172] (0xc00122c8f0) (0xc002212aa0) Stream removed, broadcasting: 5 Apr 7 13:51:45.360: INFO: Exec stderr: "" Apr 7 13:51:45.360: INFO: 
ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8474 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 7 13:51:45.360: INFO: >>> kubeConfig: /root/.kube/config I0407 13:51:45.394988 6 log.go:172] (0xc0011dc790) (0xc002212e60) Create stream I0407 13:51:45.395020 6 log.go:172] (0xc0011dc790) (0xc002212e60) Stream added, broadcasting: 1 I0407 13:51:45.397816 6 log.go:172] (0xc0011dc790) Reply frame received for 1 I0407 13:51:45.397880 6 log.go:172] (0xc0011dc790) (0xc001e1c0a0) Create stream I0407 13:51:45.397916 6 log.go:172] (0xc0011dc790) (0xc001e1c0a0) Stream added, broadcasting: 3 I0407 13:51:45.399023 6 log.go:172] (0xc0011dc790) Reply frame received for 3 I0407 13:51:45.399067 6 log.go:172] (0xc0011dc790) (0xc002212f00) Create stream I0407 13:51:45.399083 6 log.go:172] (0xc0011dc790) (0xc002212f00) Stream added, broadcasting: 5 I0407 13:51:45.400175 6 log.go:172] (0xc0011dc790) Reply frame received for 5 I0407 13:51:45.451485 6 log.go:172] (0xc0011dc790) Data frame received for 5 I0407 13:51:45.451562 6 log.go:172] (0xc002212f00) (5) Data frame handling I0407 13:51:45.451587 6 log.go:172] (0xc0011dc790) Data frame received for 3 I0407 13:51:45.451594 6 log.go:172] (0xc001e1c0a0) (3) Data frame handling I0407 13:51:45.451609 6 log.go:172] (0xc001e1c0a0) (3) Data frame sent I0407 13:51:45.451618 6 log.go:172] (0xc0011dc790) Data frame received for 3 I0407 13:51:45.451625 6 log.go:172] (0xc001e1c0a0) (3) Data frame handling I0407 13:51:45.452708 6 log.go:172] (0xc0011dc790) Data frame received for 1 I0407 13:51:45.452724 6 log.go:172] (0xc002212e60) (1) Data frame handling I0407 13:51:45.452739 6 log.go:172] (0xc002212e60) (1) Data frame sent I0407 13:51:45.452798 6 log.go:172] (0xc0011dc790) (0xc002212e60) Stream removed, broadcasting: 1 I0407 13:51:45.452913 6 log.go:172] (0xc0011dc790) (0xc002212e60) Stream removed, broadcasting: 1 I0407 13:51:45.452931 6 
log.go:172] (0xc0011dc790) (0xc001e1c0a0) Stream removed, broadcasting: 3 I0407 13:51:45.452999 6 log.go:172] (0xc0011dc790) Go away received I0407 13:51:45.453104 6 log.go:172] (0xc0011dc790) (0xc002212f00) Stream removed, broadcasting: 5 Apr 7 13:51:45.453: INFO: Exec stderr: "" Apr 7 13:51:45.453: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8474 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 7 13:51:45.453: INFO: >>> kubeConfig: /root/.kube/config I0407 13:51:45.479182 6 log.go:172] (0xc0011dd290) (0xc0022132c0) Create stream I0407 13:51:45.479208 6 log.go:172] (0xc0011dd290) (0xc0022132c0) Stream added, broadcasting: 1 I0407 13:51:45.481318 6 log.go:172] (0xc0011dd290) Reply frame received for 1 I0407 13:51:45.481368 6 log.go:172] (0xc0011dd290) (0xc001e1c140) Create stream I0407 13:51:45.481388 6 log.go:172] (0xc0011dd290) (0xc001e1c140) Stream added, broadcasting: 3 I0407 13:51:45.482166 6 log.go:172] (0xc0011dd290) Reply frame received for 3 I0407 13:51:45.482192 6 log.go:172] (0xc0011dd290) (0xc002213400) Create stream I0407 13:51:45.482200 6 log.go:172] (0xc0011dd290) (0xc002213400) Stream added, broadcasting: 5 I0407 13:51:45.483033 6 log.go:172] (0xc0011dd290) Reply frame received for 5 I0407 13:51:45.534387 6 log.go:172] (0xc0011dd290) Data frame received for 5 I0407 13:51:45.534434 6 log.go:172] (0xc002213400) (5) Data frame handling I0407 13:51:45.534466 6 log.go:172] (0xc0011dd290) Data frame received for 3 I0407 13:51:45.534484 6 log.go:172] (0xc001e1c140) (3) Data frame handling I0407 13:51:45.534508 6 log.go:172] (0xc001e1c140) (3) Data frame sent I0407 13:51:45.534525 6 log.go:172] (0xc0011dd290) Data frame received for 3 I0407 13:51:45.534534 6 log.go:172] (0xc001e1c140) (3) Data frame handling I0407 13:51:45.535655 6 log.go:172] (0xc0011dd290) Data frame received for 1 I0407 13:51:45.535681 6 log.go:172] (0xc0022132c0) (1) Data frame 
handling I0407 13:51:45.535694 6 log.go:172] (0xc0022132c0) (1) Data frame sent I0407 13:51:45.535702 6 log.go:172] (0xc0011dd290) (0xc0022132c0) Stream removed, broadcasting: 1 I0407 13:51:45.535774 6 log.go:172] (0xc0011dd290) (0xc0022132c0) Stream removed, broadcasting: 1 I0407 13:51:45.535791 6 log.go:172] (0xc0011dd290) Go away received I0407 13:51:45.535827 6 log.go:172] (0xc0011dd290) (0xc001e1c140) Stream removed, broadcasting: 3 I0407 13:51:45.535867 6 log.go:172] (0xc0011dd290) (0xc002213400) Stream removed, broadcasting: 5 Apr 7 13:51:45.535: INFO: Exec stderr: "" Apr 7 13:51:45.535: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8474 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 7 13:51:45.535: INFO: >>> kubeConfig: /root/.kube/config I0407 13:51:45.568221 6 log.go:172] (0xc0011ddd90) (0xc002213680) Create stream I0407 13:51:45.568250 6 log.go:172] (0xc0011ddd90) (0xc002213680) Stream added, broadcasting: 1 I0407 13:51:45.571244 6 log.go:172] (0xc0011ddd90) Reply frame received for 1 I0407 13:51:45.571304 6 log.go:172] (0xc0011ddd90) (0xc001e1c320) Create stream I0407 13:51:45.571332 6 log.go:172] (0xc0011ddd90) (0xc001e1c320) Stream added, broadcasting: 3 I0407 13:51:45.572461 6 log.go:172] (0xc0011ddd90) Reply frame received for 3 I0407 13:51:45.572503 6 log.go:172] (0xc0011ddd90) (0xc001e1c500) Create stream I0407 13:51:45.572520 6 log.go:172] (0xc0011ddd90) (0xc001e1c500) Stream added, broadcasting: 5 I0407 13:51:45.573778 6 log.go:172] (0xc0011ddd90) Reply frame received for 5 I0407 13:51:45.649420 6 log.go:172] (0xc0011ddd90) Data frame received for 5 I0407 13:51:45.649469 6 log.go:172] (0xc001e1c500) (5) Data frame handling I0407 13:51:45.649519 6 log.go:172] (0xc0011ddd90) Data frame received for 3 I0407 13:51:45.649532 6 log.go:172] (0xc001e1c320) (3) Data frame handling I0407 13:51:45.649544 6 log.go:172] (0xc001e1c320) (3) Data 
frame sent I0407 13:51:45.649689 6 log.go:172] (0xc0011ddd90) Data frame received for 3 I0407 13:51:45.649728 6 log.go:172] (0xc001e1c320) (3) Data frame handling I0407 13:51:45.651646 6 log.go:172] (0xc0011ddd90) Data frame received for 1 I0407 13:51:45.651665 6 log.go:172] (0xc002213680) (1) Data frame handling I0407 13:51:45.651678 6 log.go:172] (0xc002213680) (1) Data frame sent I0407 13:51:45.651692 6 log.go:172] (0xc0011ddd90) (0xc002213680) Stream removed, broadcasting: 1 I0407 13:51:45.651784 6 log.go:172] (0xc0011ddd90) Go away received I0407 13:51:45.651889 6 log.go:172] (0xc0011ddd90) (0xc002213680) Stream removed, broadcasting: 1 I0407 13:51:45.651910 6 log.go:172] (0xc0011ddd90) (0xc001e1c320) Stream removed, broadcasting: 3 I0407 13:51:45.651930 6 log.go:172] (0xc0011ddd90) (0xc001e1c500) Stream removed, broadcasting: 5 Apr 7 13:51:45.651: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 7 13:51:45.652: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8474 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 7 13:51:45.652: INFO: >>> kubeConfig: /root/.kube/config I0407 13:51:45.682177 6 log.go:172] (0xc00122db80) (0xc0033fa5a0) Create stream I0407 13:51:45.682199 6 log.go:172] (0xc00122db80) (0xc0033fa5a0) Stream added, broadcasting: 1 I0407 13:51:45.685408 6 log.go:172] (0xc00122db80) Reply frame received for 1 I0407 13:51:45.685493 6 log.go:172] (0xc00122db80) (0xc00164e960) Create stream I0407 13:51:45.685521 6 log.go:172] (0xc00122db80) (0xc00164e960) Stream added, broadcasting: 3 I0407 13:51:45.686590 6 log.go:172] (0xc00122db80) Reply frame received for 3 I0407 13:51:45.686631 6 log.go:172] (0xc00122db80) (0xc002213720) Create stream I0407 13:51:45.686655 6 log.go:172] (0xc00122db80) (0xc002213720) Stream added, broadcasting: 5 I0407 13:51:45.687611 6 log.go:172] 
(0xc00122db80) Reply frame received for 5 I0407 13:51:45.768367 6 log.go:172] (0xc00122db80) Data frame received for 3 I0407 13:51:45.768413 6 log.go:172] (0xc00164e960) (3) Data frame handling I0407 13:51:45.768427 6 log.go:172] (0xc00164e960) (3) Data frame sent I0407 13:51:45.768454 6 log.go:172] (0xc00122db80) Data frame received for 5 I0407 13:51:45.768528 6 log.go:172] (0xc002213720) (5) Data frame handling I0407 13:51:45.768579 6 log.go:172] (0xc00122db80) Data frame received for 3 I0407 13:51:45.768603 6 log.go:172] (0xc00164e960) (3) Data frame handling I0407 13:51:45.770052 6 log.go:172] (0xc00122db80) Data frame received for 1 I0407 13:51:45.770089 6 log.go:172] (0xc0033fa5a0) (1) Data frame handling I0407 13:51:45.770104 6 log.go:172] (0xc0033fa5a0) (1) Data frame sent I0407 13:51:45.770121 6 log.go:172] (0xc00122db80) (0xc0033fa5a0) Stream removed, broadcasting: 1 I0407 13:51:45.770138 6 log.go:172] (0xc00122db80) Go away received I0407 13:51:45.770406 6 log.go:172] (0xc00122db80) (0xc0033fa5a0) Stream removed, broadcasting: 1 I0407 13:51:45.770439 6 log.go:172] (0xc00122db80) (0xc00164e960) Stream removed, broadcasting: 3 I0407 13:51:45.770460 6 log.go:172] (0xc00122db80) (0xc002213720) Stream removed, broadcasting: 5 Apr 7 13:51:45.770: INFO: Exec stderr: "" Apr 7 13:51:45.770: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8474 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 7 13:51:45.770: INFO: >>> kubeConfig: /root/.kube/config I0407 13:51:45.805398 6 log.go:172] (0xc001deebb0) (0xc002213a40) Create stream I0407 13:51:45.805424 6 log.go:172] (0xc001deebb0) (0xc002213a40) Stream added, broadcasting: 1 I0407 13:51:45.812944 6 log.go:172] (0xc001deebb0) Reply frame received for 1 I0407 13:51:45.813004 6 log.go:172] (0xc001deebb0) (0xc002213ae0) Create stream I0407 13:51:45.813021 6 log.go:172] (0xc001deebb0) (0xc002213ae0) Stream added, 
broadcasting: 3 I0407 13:51:45.817324 6 log.go:172] (0xc001deebb0) Reply frame received for 3 I0407 13:51:45.817363 6 log.go:172] (0xc001deebb0) (0xc001fe00a0) Create stream I0407 13:51:45.817378 6 log.go:172] (0xc001deebb0) (0xc001fe00a0) Stream added, broadcasting: 5 I0407 13:51:45.818455 6 log.go:172] (0xc001deebb0) Reply frame received for 5 I0407 13:51:45.890276 6 log.go:172] (0xc001deebb0) Data frame received for 5 I0407 13:51:45.890307 6 log.go:172] (0xc001fe00a0) (5) Data frame handling I0407 13:51:45.890356 6 log.go:172] (0xc001deebb0) Data frame received for 3 I0407 13:51:45.890398 6 log.go:172] (0xc002213ae0) (3) Data frame handling I0407 13:51:45.890429 6 log.go:172] (0xc002213ae0) (3) Data frame sent I0407 13:51:45.890456 6 log.go:172] (0xc001deebb0) Data frame received for 3 I0407 13:51:45.890480 6 log.go:172] (0xc002213ae0) (3) Data frame handling I0407 13:51:45.892218 6 log.go:172] (0xc001deebb0) Data frame received for 1 I0407 13:51:45.892296 6 log.go:172] (0xc002213a40) (1) Data frame handling I0407 13:51:45.892371 6 log.go:172] (0xc002213a40) (1) Data frame sent I0407 13:51:45.892416 6 log.go:172] (0xc001deebb0) (0xc002213a40) Stream removed, broadcasting: 1 I0407 13:51:45.892458 6 log.go:172] (0xc001deebb0) Go away received I0407 13:51:45.892614 6 log.go:172] (0xc001deebb0) (0xc002213a40) Stream removed, broadcasting: 1 I0407 13:51:45.892636 6 log.go:172] (0xc001deebb0) (0xc002213ae0) Stream removed, broadcasting: 3 I0407 13:51:45.892652 6 log.go:172] (0xc001deebb0) (0xc001fe00a0) Stream removed, broadcasting: 5 Apr 7 13:51:45.892: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 7 13:51:45.892: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8474 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 7 13:51:45.892: INFO: >>> kubeConfig: /root/.kube/config I0407 
13:51:45.927382 6 log.go:172] (0xc0016a9810) (0xc001e1c960) Create stream I0407 13:51:45.927425 6 log.go:172] (0xc0016a9810) (0xc001e1c960) Stream added, broadcasting: 1 I0407 13:51:45.930305 6 log.go:172] (0xc0016a9810) Reply frame received for 1 I0407 13:51:45.930346 6 log.go:172] (0xc0016a9810) (0xc001e1ca00) Create stream I0407 13:51:45.930362 6 log.go:172] (0xc0016a9810) (0xc001e1ca00) Stream added, broadcasting: 3 I0407 13:51:45.931452 6 log.go:172] (0xc0016a9810) Reply frame received for 3 I0407 13:51:45.931517 6 log.go:172] (0xc0016a9810) (0xc00164ea00) Create stream I0407 13:51:45.931550 6 log.go:172] (0xc0016a9810) (0xc00164ea00) Stream added, broadcasting: 5 I0407 13:51:45.932633 6 log.go:172] (0xc0016a9810) Reply frame received for 5 I0407 13:51:46.001617 6 log.go:172] (0xc0016a9810) Data frame received for 5 I0407 13:51:46.001658 6 log.go:172] (0xc00164ea00) (5) Data frame handling I0407 13:51:46.001689 6 log.go:172] (0xc0016a9810) Data frame received for 3 I0407 13:51:46.001708 6 log.go:172] (0xc001e1ca00) (3) Data frame handling I0407 13:51:46.001723 6 log.go:172] (0xc001e1ca00) (3) Data frame sent I0407 13:51:46.001735 6 log.go:172] (0xc0016a9810) Data frame received for 3 I0407 13:51:46.001747 6 log.go:172] (0xc001e1ca00) (3) Data frame handling I0407 13:51:46.003701 6 log.go:172] (0xc0016a9810) Data frame received for 1 I0407 13:51:46.003727 6 log.go:172] (0xc001e1c960) (1) Data frame handling I0407 13:51:46.003740 6 log.go:172] (0xc001e1c960) (1) Data frame sent I0407 13:51:46.003754 6 log.go:172] (0xc0016a9810) (0xc001e1c960) Stream removed, broadcasting: 1 I0407 13:51:46.003790 6 log.go:172] (0xc0016a9810) Go away received I0407 13:51:46.003943 6 log.go:172] (0xc0016a9810) (0xc001e1c960) Stream removed, broadcasting: 1 I0407 13:51:46.003979 6 log.go:172] (0xc0016a9810) (0xc001e1ca00) Stream removed, broadcasting: 3 I0407 13:51:46.004000 6 log.go:172] (0xc0016a9810) (0xc00164ea00) Stream removed, broadcasting: 5 Apr 7 13:51:46.004: INFO: Exec 
stderr: "" Apr 7 13:51:46.004: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8474 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 7 13:51:46.004: INFO: >>> kubeConfig: /root/.kube/config I0407 13:51:46.040535 6 log.go:172] (0xc000d0f550) (0xc001fe03c0) Create stream I0407 13:51:46.040557 6 log.go:172] (0xc000d0f550) (0xc001fe03c0) Stream added, broadcasting: 1 I0407 13:51:46.043485 6 log.go:172] (0xc000d0f550) Reply frame received for 1 I0407 13:51:46.043534 6 log.go:172] (0xc000d0f550) (0xc0033fa640) Create stream I0407 13:51:46.043549 6 log.go:172] (0xc000d0f550) (0xc0033fa640) Stream added, broadcasting: 3 I0407 13:51:46.044686 6 log.go:172] (0xc000d0f550) Reply frame received for 3 I0407 13:51:46.044739 6 log.go:172] (0xc000d0f550) (0xc00164eaa0) Create stream I0407 13:51:46.044756 6 log.go:172] (0xc000d0f550) (0xc00164eaa0) Stream added, broadcasting: 5 I0407 13:51:46.046164 6 log.go:172] (0xc000d0f550) Reply frame received for 5 I0407 13:51:46.101623 6 log.go:172] (0xc000d0f550) Data frame received for 5 I0407 13:51:46.101650 6 log.go:172] (0xc00164eaa0) (5) Data frame handling I0407 13:51:46.101679 6 log.go:172] (0xc000d0f550) Data frame received for 3 I0407 13:51:46.101688 6 log.go:172] (0xc0033fa640) (3) Data frame handling I0407 13:51:46.101696 6 log.go:172] (0xc0033fa640) (3) Data frame sent I0407 13:51:46.101722 6 log.go:172] (0xc000d0f550) Data frame received for 3 I0407 13:51:46.101735 6 log.go:172] (0xc0033fa640) (3) Data frame handling I0407 13:51:46.103461 6 log.go:172] (0xc000d0f550) Data frame received for 1 I0407 13:51:46.103499 6 log.go:172] (0xc001fe03c0) (1) Data frame handling I0407 13:51:46.103549 6 log.go:172] (0xc001fe03c0) (1) Data frame sent I0407 13:51:46.103574 6 log.go:172] (0xc000d0f550) (0xc001fe03c0) Stream removed, broadcasting: 1 I0407 13:51:46.103601 6 log.go:172] (0xc000d0f550) Go away received I0407 
13:51:46.103746 6 log.go:172] (0xc000d0f550) (0xc001fe03c0) Stream removed, broadcasting: 1 I0407 13:51:46.103876 6 log.go:172] (0xc000d0f550) (0xc0033fa640) Stream removed, broadcasting: 3 I0407 13:51:46.103897 6 log.go:172] (0xc000d0f550) (0xc00164eaa0) Stream removed, broadcasting: 5 Apr 7 13:51:46.103: INFO: Exec stderr: "" Apr 7 13:51:46.103: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8474 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 7 13:51:46.103: INFO: >>> kubeConfig: /root/.kube/config I0407 13:51:46.137292 6 log.go:172] (0xc001defe40) (0xc002213e00) Create stream I0407 13:51:46.137341 6 log.go:172] (0xc001defe40) (0xc002213e00) Stream added, broadcasting: 1 I0407 13:51:46.139569 6 log.go:172] (0xc001defe40) Reply frame received for 1 I0407 13:51:46.139604 6 log.go:172] (0xc001defe40) (0xc001e1cb40) Create stream I0407 13:51:46.139621 6 log.go:172] (0xc001defe40) (0xc001e1cb40) Stream added, broadcasting: 3 I0407 13:51:46.140597 6 log.go:172] (0xc001defe40) Reply frame received for 3 I0407 13:51:46.140639 6 log.go:172] (0xc001defe40) (0xc001fe0460) Create stream I0407 13:51:46.140653 6 log.go:172] (0xc001defe40) (0xc001fe0460) Stream added, broadcasting: 5 I0407 13:51:46.141945 6 log.go:172] (0xc001defe40) Reply frame received for 5 I0407 13:51:46.187164 6 log.go:172] (0xc001defe40) Data frame received for 5 I0407 13:51:46.187208 6 log.go:172] (0xc001fe0460) (5) Data frame handling I0407 13:51:46.187238 6 log.go:172] (0xc001defe40) Data frame received for 3 I0407 13:51:46.187251 6 log.go:172] (0xc001e1cb40) (3) Data frame handling I0407 13:51:46.187276 6 log.go:172] (0xc001e1cb40) (3) Data frame sent I0407 13:51:46.187291 6 log.go:172] (0xc001defe40) Data frame received for 3 I0407 13:51:46.187340 6 log.go:172] (0xc001e1cb40) (3) Data frame handling I0407 13:51:46.188783 6 log.go:172] (0xc001defe40) Data frame received for 1 I0407 
13:51:46.188803 6 log.go:172] (0xc002213e00) (1) Data frame handling I0407 13:51:46.188814 6 log.go:172] (0xc002213e00) (1) Data frame sent I0407 13:51:46.188830 6 log.go:172] (0xc001defe40) (0xc002213e00) Stream removed, broadcasting: 1 I0407 13:51:46.188889 6 log.go:172] (0xc001defe40) Go away received I0407 13:51:46.188955 6 log.go:172] (0xc001defe40) (0xc002213e00) Stream removed, broadcasting: 1 I0407 13:51:46.188973 6 log.go:172] (0xc001defe40) (0xc001e1cb40) Stream removed, broadcasting: 3 I0407 13:51:46.188983 6 log.go:172] (0xc001defe40) (0xc001fe0460) Stream removed, broadcasting: 5 Apr 7 13:51:46.188: INFO: Exec stderr: "" Apr 7 13:51:46.189: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8474 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 7 13:51:46.189: INFO: >>> kubeConfig: /root/.kube/config I0407 13:51:46.221288 6 log.go:172] (0xc0029eaa50) (0xc000a6a3c0) Create stream I0407 13:51:46.221310 6 log.go:172] (0xc0029eaa50) (0xc000a6a3c0) Stream added, broadcasting: 1 I0407 13:51:46.223569 6 log.go:172] (0xc0029eaa50) Reply frame received for 1 I0407 13:51:46.223610 6 log.go:172] (0xc0029eaa50) (0xc001fe0500) Create stream I0407 13:51:46.223625 6 log.go:172] (0xc0029eaa50) (0xc001fe0500) Stream added, broadcasting: 3 I0407 13:51:46.224771 6 log.go:172] (0xc0029eaa50) Reply frame received for 3 I0407 13:51:46.224820 6 log.go:172] (0xc0029eaa50) (0xc001fe05a0) Create stream I0407 13:51:46.224841 6 log.go:172] (0xc0029eaa50) (0xc001fe05a0) Stream added, broadcasting: 5 I0407 13:51:46.226113 6 log.go:172] (0xc0029eaa50) Reply frame received for 5 I0407 13:51:46.297669 6 log.go:172] (0xc0029eaa50) Data frame received for 3 I0407 13:51:46.297707 6 log.go:172] (0xc001fe0500) (3) Data frame handling I0407 13:51:46.297730 6 log.go:172] (0xc001fe0500) (3) Data frame sent I0407 13:51:46.297892 6 log.go:172] (0xc0029eaa50) Data frame received 
for 5 I0407 13:51:46.297943 6 log.go:172] (0xc001fe05a0) (5) Data frame handling I0407 13:51:46.297986 6 log.go:172] (0xc0029eaa50) Data frame received for 3 I0407 13:51:46.298022 6 log.go:172] (0xc001fe0500) (3) Data frame handling I0407 13:51:46.298773 6 log.go:172] (0xc0029eaa50) Data frame received for 1 I0407 13:51:46.298795 6 log.go:172] (0xc000a6a3c0) (1) Data frame handling I0407 13:51:46.298807 6 log.go:172] (0xc000a6a3c0) (1) Data frame sent I0407 13:51:46.299201 6 log.go:172] (0xc0029eaa50) (0xc000a6a3c0) Stream removed, broadcasting: 1 I0407 13:51:46.299252 6 log.go:172] (0xc0029eaa50) Go away received I0407 13:51:46.299409 6 log.go:172] (0xc0029eaa50) (0xc000a6a3c0) Stream removed, broadcasting: 1 I0407 13:51:46.299440 6 log.go:172] (0xc0029eaa50) (0xc001fe0500) Stream removed, broadcasting: 3 I0407 13:51:46.299458 6 log.go:172] (0xc0029eaa50) (0xc001fe05a0) Stream removed, broadcasting: 5 Apr 7 13:51:46.299: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:51:46.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-8474" for this suite. 
Apr 7 13:52:36.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:52:36.397: INFO: namespace e2e-kubelet-etc-hosts-8474 deletion completed in 50.092596031s • [SLOW TEST:61.244 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:52:36.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 7 13:52:36.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5092' Apr 7 13:52:36.597: INFO: stderr: "" Apr 7 13:52:36.597: 
INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Apr 7 13:52:36.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-5092' Apr 7 13:52:42.164: INFO: stderr: "" Apr 7 13:52:42.164: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:52:42.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5092" for this suite. Apr 7 13:52:48.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:52:48.280: INFO: namespace kubectl-5092 deletion completed in 6.099146978s • [SLOW TEST:11.882 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:52:48.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a 
default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Apr 7 13:52:52.399: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 7 13:53:02.497: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:53:02.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7387" for this suite. 
Apr 7 13:53:08.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:53:08.609: INFO: namespace pods-7387 deletion completed in 6.103186941s • [SLOW TEST:20.329 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:53:08.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 7 13:53:08.671: INFO: Waiting up to 5m0s for pod "downwardapi-volume-273320d5-a9fb-4f13-afa0-05050b004b32" in namespace "downward-api-7967" to be "success or failure" Apr 7 13:53:08.674: INFO: Pod "downwardapi-volume-273320d5-a9fb-4f13-afa0-05050b004b32": Phase="Pending", Reason="", 
readiness=false. Elapsed: 3.542864ms Apr 7 13:53:10.679: INFO: Pod "downwardapi-volume-273320d5-a9fb-4f13-afa0-05050b004b32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00801163s Apr 7 13:53:12.683: INFO: Pod "downwardapi-volume-273320d5-a9fb-4f13-afa0-05050b004b32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01210705s STEP: Saw pod success Apr 7 13:53:12.683: INFO: Pod "downwardapi-volume-273320d5-a9fb-4f13-afa0-05050b004b32" satisfied condition "success or failure" Apr 7 13:53:12.686: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-273320d5-a9fb-4f13-afa0-05050b004b32 container client-container: STEP: delete the pod Apr 7 13:53:12.721: INFO: Waiting for pod downwardapi-volume-273320d5-a9fb-4f13-afa0-05050b004b32 to disappear Apr 7 13:53:12.723: INFO: Pod downwardapi-volume-273320d5-a9fb-4f13-afa0-05050b004b32 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:53:12.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7967" for this suite. 
Apr 7 13:53:18.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:53:18.814: INFO: namespace downward-api-7967 deletion completed in 6.087621777s • [SLOW TEST:10.205 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:53:18.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Apr 7 13:53:18.854: INFO: namespace kubectl-6257 Apr 7 13:53:18.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6257' Apr 7 13:53:19.188: INFO: stderr: "" Apr 7 13:53:19.188: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Apr 7 13:53:20.191: INFO: Selector matched 1 pods for map[app:redis] Apr 7 13:53:20.191: INFO: Found 0 / 1 Apr 7 13:53:21.192: INFO: Selector matched 1 pods for map[app:redis] Apr 7 13:53:21.192: INFO: Found 0 / 1 Apr 7 13:53:22.194: INFO: Selector matched 1 pods for map[app:redis] Apr 7 13:53:22.194: INFO: Found 1 / 1 Apr 7 13:53:22.194: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 7 13:53:22.198: INFO: Selector matched 1 pods for map[app:redis] Apr 7 13:53:22.198: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 7 13:53:22.198: INFO: wait on redis-master startup in kubectl-6257 Apr 7 13:53:22.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5lrbn redis-master --namespace=kubectl-6257' Apr 7 13:53:22.297: INFO: stderr: "" Apr 7 13:53:22.297: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 07 Apr 13:53:21.578 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 Apr 13:53:21.578 # Server started, Redis version 3.2.12\n1:M 07 Apr 13:53:21.578 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 07 Apr 13:53:21.578 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Apr 7 13:53:22.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6257' Apr 7 13:53:22.437: INFO: stderr: "" Apr 7 13:53:22.437: INFO: stdout: "service/rm2 exposed\n" Apr 7 13:53:22.446: INFO: Service rm2 in namespace kubectl-6257 found. STEP: exposing service Apr 7 13:53:24.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6257' Apr 7 13:53:24.589: INFO: stderr: "" Apr 7 13:53:24.589: INFO: stdout: "service/rm3 exposed\n" Apr 7 13:53:24.597: INFO: Service rm3 in namespace kubectl-6257 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:53:26.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6257" for this suite. 
Apr 7 13:53:48.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:53:48.707: INFO: namespace kubectl-6257 deletion completed in 22.09927339s • [SLOW TEST:29.891 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:53:48.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-dc21169f-7a32-465d-b48e-4b1a9c6603d6 STEP: Creating a pod to test consume secrets Apr 7 13:53:48.814: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-622be4a5-c5e4-40e8-a663-cb1e4108b8fa" in namespace "projected-6830" to be "success or failure" Apr 7 13:53:48.819: INFO: Pod "pod-projected-secrets-622be4a5-c5e4-40e8-a663-cb1e4108b8fa": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.393061ms Apr 7 13:53:50.823: INFO: Pod "pod-projected-secrets-622be4a5-c5e4-40e8-a663-cb1e4108b8fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009176263s Apr 7 13:53:52.827: INFO: Pod "pod-projected-secrets-622be4a5-c5e4-40e8-a663-cb1e4108b8fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013583994s STEP: Saw pod success Apr 7 13:53:52.828: INFO: Pod "pod-projected-secrets-622be4a5-c5e4-40e8-a663-cb1e4108b8fa" satisfied condition "success or failure" Apr 7 13:53:52.831: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-622be4a5-c5e4-40e8-a663-cb1e4108b8fa container projected-secret-volume-test: STEP: delete the pod Apr 7 13:53:52.863: INFO: Waiting for pod pod-projected-secrets-622be4a5-c5e4-40e8-a663-cb1e4108b8fa to disappear Apr 7 13:53:52.873: INFO: Pod pod-projected-secrets-622be4a5-c5e4-40e8-a663-cb1e4108b8fa no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:53:52.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6830" for this suite. 
Apr 7 13:53:58.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:53:58.965: INFO: namespace projected-6830 deletion completed in 6.088136861s • [SLOW TEST:10.257 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:53:58.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-4867/configmap-test-b27e2e25-e376-4736-9971-45158ea0d305 STEP: Creating a pod to test consume configMaps Apr 7 13:53:59.145: INFO: Waiting up to 5m0s for pod "pod-configmaps-1cf377e8-6992-4302-a826-752881c3e0df" in namespace "configmap-4867" to be "success or failure" Apr 7 13:53:59.156: INFO: Pod "pod-configmaps-1cf377e8-6992-4302-a826-752881c3e0df": Phase="Pending", Reason="", readiness=false. Elapsed: 10.406758ms Apr 7 13:54:01.160: INFO: Pod "pod-configmaps-1cf377e8-6992-4302-a826-752881c3e0df": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.014714132s
Apr 7 13:54:03.167: INFO: Pod "pod-configmaps-1cf377e8-6992-4302-a826-752881c3e0df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022187344s
STEP: Saw pod success
Apr 7 13:54:03.167: INFO: Pod "pod-configmaps-1cf377e8-6992-4302-a826-752881c3e0df" satisfied condition "success or failure"
Apr 7 13:54:03.170: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-1cf377e8-6992-4302-a826-752881c3e0df container env-test:
STEP: delete the pod
Apr 7 13:54:03.207: INFO: Waiting for pod pod-configmaps-1cf377e8-6992-4302-a826-752881c3e0df to disappear
Apr 7 13:54:03.220: INFO: Pod pod-configmaps-1cf377e8-6992-4302-a826-752881c3e0df no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:54:03.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4867" for this suite.
Apr 7 13:54:09.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:54:09.325: INFO: namespace configmap-4867 deletion completed in 6.1013556s
• [SLOW TEST:10.360 seconds]
[sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:54:09.325: INFO: >>> kubeConfig: /root/.kube/config
STEP:
Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 7 13:54:09.407: INFO: Waiting up to 5m0s for pod "downwardapi-volume-46b1709c-785f-4b54-aab3-2daf256afbe6" in namespace "downward-api-5831" to be "success or failure"
Apr 7 13:54:09.411: INFO: Pod "downwardapi-volume-46b1709c-785f-4b54-aab3-2daf256afbe6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.643514ms
Apr 7 13:54:11.435: INFO: Pod "downwardapi-volume-46b1709c-785f-4b54-aab3-2daf256afbe6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027240316s
Apr 7 13:54:13.439: INFO: Pod "downwardapi-volume-46b1709c-785f-4b54-aab3-2daf256afbe6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031541629s
STEP: Saw pod success
Apr 7 13:54:13.439: INFO: Pod "downwardapi-volume-46b1709c-785f-4b54-aab3-2daf256afbe6" satisfied condition "success or failure"
Apr 7 13:54:13.442: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-46b1709c-785f-4b54-aab3-2daf256afbe6 container client-container:
STEP: delete the pod
Apr 7 13:54:13.476: INFO: Waiting for pod downwardapi-volume-46b1709c-785f-4b54-aab3-2daf256afbe6 to disappear
Apr 7 13:54:13.490: INFO: Pod downwardapi-volume-46b1709c-785f-4b54-aab3-2daf256afbe6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:54:13.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5831" for this suite.
Apr 7 13:54:19.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:54:19.584: INFO: namespace downward-api-5831 deletion completed in 6.090753136s
• [SLOW TEST:10.259 seconds]
[sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:54:19.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-891a1461-ede6-4e60-9a76-ab7e24b9e82b
STEP: Creating a pod to test consume configMaps
Apr 7 13:54:19.665: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fd8afd9b-f08c-4ec2-b2d5-b4cfbe43afcd" in namespace "projected-327" to be "success or failure"
Apr 7 13:54:19.681: INFO: Pod "pod-projected-configmaps-fd8afd9b-f08c-4ec2-b2d5-b4cfbe43afcd": Phase="Pending", Reason="", readiness=false.
Elapsed: 15.584914ms
Apr 7 13:54:21.702: INFO: Pod "pod-projected-configmaps-fd8afd9b-f08c-4ec2-b2d5-b4cfbe43afcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036922751s
Apr 7 13:54:23.706: INFO: Pod "pod-projected-configmaps-fd8afd9b-f08c-4ec2-b2d5-b4cfbe43afcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041238378s
STEP: Saw pod success
Apr 7 13:54:23.706: INFO: Pod "pod-projected-configmaps-fd8afd9b-f08c-4ec2-b2d5-b4cfbe43afcd" satisfied condition "success or failure"
Apr 7 13:54:23.710: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-fd8afd9b-f08c-4ec2-b2d5-b4cfbe43afcd container projected-configmap-volume-test:
STEP: delete the pod
Apr 7 13:54:23.725: INFO: Waiting for pod pod-projected-configmaps-fd8afd9b-f08c-4ec2-b2d5-b4cfbe43afcd to disappear
Apr 7 13:54:23.776: INFO: Pod pod-projected-configmaps-fd8afd9b-f08c-4ec2-b2d5-b4cfbe43afcd no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:54:23.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-327" for this suite.
Apr 7 13:54:29.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:54:29.889: INFO: namespace projected-327 deletion completed in 6.109476084s
• [SLOW TEST:10.305 seconds]
[sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:54:29.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-e9450165-36e9-4b8f-829d-272d7e8fdfcc
STEP: Creating a pod to test consume configMaps
Apr 7 13:54:29.976: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4e6c7ea0-9f24-40a0-b057-8d1aa2c4acd2" in namespace "projected-1332" to be "success or failure"
Apr 7 13:54:30.016: INFO: Pod "pod-projected-configmaps-4e6c7ea0-9f24-40a0-b057-8d1aa2c4acd2": Phase="Pending", Reason="", readiness=false.
Elapsed: 40.196276ms
Apr 7 13:54:32.020: INFO: Pod "pod-projected-configmaps-4e6c7ea0-9f24-40a0-b057-8d1aa2c4acd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044115243s
Apr 7 13:54:34.024: INFO: Pod "pod-projected-configmaps-4e6c7ea0-9f24-40a0-b057-8d1aa2c4acd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048062681s
STEP: Saw pod success
Apr 7 13:54:34.024: INFO: Pod "pod-projected-configmaps-4e6c7ea0-9f24-40a0-b057-8d1aa2c4acd2" satisfied condition "success or failure"
Apr 7 13:54:34.027: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-4e6c7ea0-9f24-40a0-b057-8d1aa2c4acd2 container projected-configmap-volume-test:
STEP: delete the pod
Apr 7 13:54:34.075: INFO: Waiting for pod pod-projected-configmaps-4e6c7ea0-9f24-40a0-b057-8d1aa2c4acd2 to disappear
Apr 7 13:54:34.090: INFO: Pod pod-projected-configmaps-4e6c7ea0-9f24-40a0-b057-8d1aa2c4acd2 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:54:34.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1332" for this suite.
Apr 7 13:54:40.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:54:40.212: INFO: namespace projected-1332 deletion completed in 6.118172598s
• [SLOW TEST:10.322 seconds]
[sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:54:40.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-9a8d5830-c02d-4716-b193-316a0413e195
STEP: Creating a pod to test consume configMaps
Apr 7 13:54:40.355: INFO: Waiting up to 5m0s for pod "pod-configmaps-28af7dd6-f145-4c55-ae61-05e7dddf09b3" in namespace "configmap-9586" to be "success or failure"
Apr 7 13:54:40.359: INFO: Pod "pod-configmaps-28af7dd6-f145-4c55-ae61-05e7dddf09b3": Phase="Pending", Reason="", readiness=false.
Elapsed: 4.091436ms
Apr 7 13:54:42.362: INFO: Pod "pod-configmaps-28af7dd6-f145-4c55-ae61-05e7dddf09b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007521138s
Apr 7 13:54:44.367: INFO: Pod "pod-configmaps-28af7dd6-f145-4c55-ae61-05e7dddf09b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012004838s
STEP: Saw pod success
Apr 7 13:54:44.367: INFO: Pod "pod-configmaps-28af7dd6-f145-4c55-ae61-05e7dddf09b3" satisfied condition "success or failure"
Apr 7 13:54:44.369: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-28af7dd6-f145-4c55-ae61-05e7dddf09b3 container configmap-volume-test:
STEP: delete the pod
Apr 7 13:54:44.384: INFO: Waiting for pod pod-configmaps-28af7dd6-f145-4c55-ae61-05e7dddf09b3 to disappear
Apr 7 13:54:44.398: INFO: Pod pod-configmaps-28af7dd6-f145-4c55-ae61-05e7dddf09b3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:54:44.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9586" for this suite.
Apr 7 13:54:50.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:54:50.495: INFO: namespace configmap-9586 deletion completed in 6.093815759s
• [SLOW TEST:10.283 seconds]
[sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:54:50.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-5393
I0407 13:54:50.556045 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5393, replica count: 1
I0407 13:54:51.606551 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0407 13:54:52.606772 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0407 13:54:53.607033 6 runners.go:180] svc-latency-rc Pods: 1 out of 1
created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0407 13:54:54.607244 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 7 13:54:54.734: INFO: Created: latency-svc-pwgm6 Apr 7 13:54:54.771: INFO: Got endpoints: latency-svc-pwgm6 [63.766421ms] Apr 7 13:54:54.801: INFO: Created: latency-svc-rts5j Apr 7 13:54:54.824: INFO: Got endpoints: latency-svc-rts5j [53.004771ms] Apr 7 13:54:54.855: INFO: Created: latency-svc-jh5pk Apr 7 13:54:54.868: INFO: Got endpoints: latency-svc-jh5pk [97.033528ms] Apr 7 13:54:54.927: INFO: Created: latency-svc-lzsh8 Apr 7 13:54:54.930: INFO: Got endpoints: latency-svc-lzsh8 [159.054476ms] Apr 7 13:54:54.980: INFO: Created: latency-svc-rh6ws Apr 7 13:54:54.994: INFO: Got endpoints: latency-svc-rh6ws [223.46866ms] Apr 7 13:54:55.016: INFO: Created: latency-svc-2vwrg Apr 7 13:54:55.052: INFO: Got endpoints: latency-svc-2vwrg [280.815426ms] Apr 7 13:54:55.063: INFO: Created: latency-svc-n4c9c Apr 7 13:54:55.079: INFO: Got endpoints: latency-svc-n4c9c [308.049814ms] Apr 7 13:54:55.100: INFO: Created: latency-svc-fr7b8 Apr 7 13:54:55.109: INFO: Got endpoints: latency-svc-fr7b8 [338.091989ms] Apr 7 13:54:55.131: INFO: Created: latency-svc-zzfmz Apr 7 13:54:55.140: INFO: Got endpoints: latency-svc-zzfmz [368.378694ms] Apr 7 13:54:55.204: INFO: Created: latency-svc-fvwrr Apr 7 13:54:55.230: INFO: Got endpoints: latency-svc-fvwrr [458.492012ms] Apr 7 13:54:55.260: INFO: Created: latency-svc-6z5hb Apr 7 13:54:55.272: INFO: Got endpoints: latency-svc-6z5hb [500.931063ms] Apr 7 13:54:55.330: INFO: Created: latency-svc-gblsb Apr 7 13:54:55.338: INFO: Got endpoints: latency-svc-gblsb [567.279887ms] Apr 7 13:54:55.370: INFO: Created: latency-svc-t94qm Apr 7 13:54:55.387: INFO: Got endpoints: latency-svc-t94qm [615.389818ms] Apr 7 13:54:55.407: INFO: Created: latency-svc-wxxqh Apr 7 13:54:55.423: INFO: 
Got endpoints: latency-svc-wxxqh [651.790464ms] Apr 7 13:54:55.472: INFO: Created: latency-svc-6wj5v Apr 7 13:54:55.487: INFO: Got endpoints: latency-svc-6wj5v [715.840977ms] Apr 7 13:54:55.502: INFO: Created: latency-svc-rldbc Apr 7 13:54:55.514: INFO: Got endpoints: latency-svc-rldbc [742.72076ms] Apr 7 13:54:55.538: INFO: Created: latency-svc-d2454 Apr 7 13:54:55.621: INFO: Got endpoints: latency-svc-d2454 [133.967312ms] Apr 7 13:54:55.623: INFO: Created: latency-svc-zbxlc Apr 7 13:54:55.628: INFO: Got endpoints: latency-svc-zbxlc [803.685621ms] Apr 7 13:54:55.688: INFO: Created: latency-svc-645cm Apr 7 13:54:55.704: INFO: Got endpoints: latency-svc-645cm [835.461193ms] Apr 7 13:54:55.718: INFO: Created: latency-svc-n2dcz Apr 7 13:54:55.777: INFO: Got endpoints: latency-svc-n2dcz [846.620845ms] Apr 7 13:54:55.779: INFO: Created: latency-svc-wt9c7 Apr 7 13:54:55.792: INFO: Got endpoints: latency-svc-wt9c7 [797.180172ms] Apr 7 13:54:55.814: INFO: Created: latency-svc-6g25j Apr 7 13:54:55.821: INFO: Got endpoints: latency-svc-6g25j [769.302507ms] Apr 7 13:54:55.845: INFO: Created: latency-svc-n62qp Apr 7 13:54:55.852: INFO: Got endpoints: latency-svc-n62qp [772.305569ms] Apr 7 13:54:55.870: INFO: Created: latency-svc-6xf5d Apr 7 13:54:55.920: INFO: Got endpoints: latency-svc-6xf5d [811.044215ms] Apr 7 13:54:55.940: INFO: Created: latency-svc-6xdcw Apr 7 13:54:55.954: INFO: Got endpoints: latency-svc-6xdcw [814.365728ms] Apr 7 13:54:55.977: INFO: Created: latency-svc-c4znp Apr 7 13:54:55.984: INFO: Got endpoints: latency-svc-c4znp [754.518262ms] Apr 7 13:54:56.012: INFO: Created: latency-svc-6ktfs Apr 7 13:54:56.064: INFO: Got endpoints: latency-svc-6ktfs [791.807085ms] Apr 7 13:54:56.066: INFO: Created: latency-svc-7lcfp Apr 7 13:54:56.075: INFO: Got endpoints: latency-svc-7lcfp [736.480736ms] Apr 7 13:54:56.096: INFO: Created: latency-svc-59hc9 Apr 7 13:54:56.111: INFO: Got endpoints: latency-svc-59hc9 [724.576932ms] Apr 7 13:54:56.132: INFO: Created: 
latency-svc-nbnqc Apr 7 13:54:56.142: INFO: Got endpoints: latency-svc-nbnqc [719.026878ms] Apr 7 13:54:56.164: INFO: Created: latency-svc-5nv4d Apr 7 13:54:56.226: INFO: Got endpoints: latency-svc-5nv4d [712.219278ms] Apr 7 13:54:56.252: INFO: Created: latency-svc-gs2mf Apr 7 13:54:56.288: INFO: Got endpoints: latency-svc-gs2mf [667.0756ms] Apr 7 13:54:56.324: INFO: Created: latency-svc-7jgc4 Apr 7 13:54:56.387: INFO: Got endpoints: latency-svc-7jgc4 [759.589651ms] Apr 7 13:54:56.420: INFO: Created: latency-svc-cfl2n Apr 7 13:54:56.456: INFO: Got endpoints: latency-svc-cfl2n [752.12883ms] Apr 7 13:54:56.487: INFO: Created: latency-svc-szr8x Apr 7 13:54:56.531: INFO: Got endpoints: latency-svc-szr8x [753.995413ms] Apr 7 13:54:56.565: INFO: Created: latency-svc-dq7hc Apr 7 13:54:56.575: INFO: Got endpoints: latency-svc-dq7hc [783.015111ms] Apr 7 13:54:56.595: INFO: Created: latency-svc-5rbqv Apr 7 13:54:56.605: INFO: Got endpoints: latency-svc-5rbqv [784.140603ms] Apr 7 13:54:56.624: INFO: Created: latency-svc-chqmv Apr 7 13:54:56.669: INFO: Got endpoints: latency-svc-chqmv [817.037666ms] Apr 7 13:54:56.672: INFO: Created: latency-svc-6zgzc Apr 7 13:54:56.684: INFO: Got endpoints: latency-svc-6zgzc [763.317955ms] Apr 7 13:54:56.702: INFO: Created: latency-svc-lbxrb Apr 7 13:54:56.714: INFO: Got endpoints: latency-svc-lbxrb [759.935436ms] Apr 7 13:54:56.732: INFO: Created: latency-svc-7gfjh Apr 7 13:54:56.744: INFO: Got endpoints: latency-svc-7gfjh [760.081323ms] Apr 7 13:54:56.762: INFO: Created: latency-svc-dtssd Apr 7 13:54:56.795: INFO: Got endpoints: latency-svc-dtssd [730.921326ms] Apr 7 13:54:56.810: INFO: Created: latency-svc-4rvz6 Apr 7 13:54:56.823: INFO: Got endpoints: latency-svc-4rvz6 [747.950698ms] Apr 7 13:54:56.852: INFO: Created: latency-svc-w8h69 Apr 7 13:54:56.866: INFO: Got endpoints: latency-svc-w8h69 [754.456595ms] Apr 7 13:54:56.882: INFO: Created: latency-svc-8l95z Apr 7 13:54:56.926: INFO: Got endpoints: latency-svc-8l95z [783.925478ms] Apr 7 
13:54:56.942: INFO: Created: latency-svc-l8j5s Apr 7 13:54:56.956: INFO: Got endpoints: latency-svc-l8j5s [730.169361ms] Apr 7 13:54:56.985: INFO: Created: latency-svc-lw556 Apr 7 13:54:57.008: INFO: Got endpoints: latency-svc-lw556 [720.050056ms] Apr 7 13:54:57.064: INFO: Created: latency-svc-pvw6q Apr 7 13:54:57.067: INFO: Got endpoints: latency-svc-pvw6q [679.790793ms] Apr 7 13:54:57.117: INFO: Created: latency-svc-ks8s7 Apr 7 13:54:57.131: INFO: Got endpoints: latency-svc-ks8s7 [675.050776ms] Apr 7 13:54:57.152: INFO: Created: latency-svc-txq9j Apr 7 13:54:57.168: INFO: Got endpoints: latency-svc-txq9j [636.54021ms] Apr 7 13:54:57.208: INFO: Created: latency-svc-xg6wr Apr 7 13:54:57.230: INFO: Got endpoints: latency-svc-xg6wr [654.800082ms] Apr 7 13:54:57.230: INFO: Created: latency-svc-hmbcn Apr 7 13:54:57.246: INFO: Got endpoints: latency-svc-hmbcn [640.130336ms] Apr 7 13:54:57.266: INFO: Created: latency-svc-8vk6d Apr 7 13:54:57.282: INFO: Got endpoints: latency-svc-8vk6d [613.049752ms] Apr 7 13:54:57.376: INFO: Created: latency-svc-88sld Apr 7 13:54:57.378: INFO: Created: latency-svc-cd6s6 Apr 7 13:54:57.403: INFO: Got endpoints: latency-svc-cd6s6 [688.478306ms] Apr 7 13:54:57.403: INFO: Got endpoints: latency-svc-88sld [718.849534ms] Apr 7 13:54:57.422: INFO: Created: latency-svc-44wzq Apr 7 13:54:57.440: INFO: Got endpoints: latency-svc-44wzq [695.204172ms] Apr 7 13:54:57.515: INFO: Created: latency-svc-grtp4 Apr 7 13:54:57.516: INFO: Got endpoints: latency-svc-grtp4 [721.296736ms] Apr 7 13:54:57.542: INFO: Created: latency-svc-54bs7 Apr 7 13:54:57.553: INFO: Got endpoints: latency-svc-54bs7 [730.128897ms] Apr 7 13:54:57.572: INFO: Created: latency-svc-tj7wq Apr 7 13:54:57.583: INFO: Got endpoints: latency-svc-tj7wq [717.635967ms] Apr 7 13:54:57.602: INFO: Created: latency-svc-rhw6q Apr 7 13:54:57.663: INFO: Got endpoints: latency-svc-rhw6q [736.846218ms] Apr 7 13:54:57.666: INFO: Created: latency-svc-9mt6x Apr 7 13:54:57.692: INFO: Got endpoints: 
latency-svc-9mt6x [735.889806ms] Apr 7 13:54:57.722: INFO: Created: latency-svc-xvknv Apr 7 13:54:57.734: INFO: Got endpoints: latency-svc-xvknv [725.749253ms] Apr 7 13:54:57.758: INFO: Created: latency-svc-xhskl Apr 7 13:54:57.800: INFO: Got endpoints: latency-svc-xhskl [733.09057ms] Apr 7 13:54:57.813: INFO: Created: latency-svc-cw9p7 Apr 7 13:54:57.832: INFO: Got endpoints: latency-svc-cw9p7 [700.764336ms] Apr 7 13:54:57.854: INFO: Created: latency-svc-8txmn Apr 7 13:54:57.867: INFO: Got endpoints: latency-svc-8txmn [699.554362ms] Apr 7 13:54:57.896: INFO: Created: latency-svc-ltdf5 Apr 7 13:54:57.933: INFO: Got endpoints: latency-svc-ltdf5 [702.866958ms] Apr 7 13:54:57.938: INFO: Created: latency-svc-5dlwq Apr 7 13:54:57.952: INFO: Got endpoints: latency-svc-5dlwq [706.19638ms] Apr 7 13:54:57.974: INFO: Created: latency-svc-l5wtw Apr 7 13:54:57.982: INFO: Got endpoints: latency-svc-l5wtw [699.908873ms] Apr 7 13:54:58.006: INFO: Created: latency-svc-lnqsg Apr 7 13:54:58.012: INFO: Got endpoints: latency-svc-lnqsg [609.597351ms] Apr 7 13:54:58.065: INFO: Created: latency-svc-2drpv Apr 7 13:54:58.068: INFO: Got endpoints: latency-svc-2drpv [665.499409ms] Apr 7 13:54:58.094: INFO: Created: latency-svc-jb5hm Apr 7 13:54:58.120: INFO: Got endpoints: latency-svc-jb5hm [680.674377ms] Apr 7 13:54:58.143: INFO: Created: latency-svc-cdnhx Apr 7 13:54:58.151: INFO: Got endpoints: latency-svc-cdnhx [634.892621ms] Apr 7 13:54:58.214: INFO: Created: latency-svc-xhmnd Apr 7 13:54:58.217: INFO: Got endpoints: latency-svc-xhmnd [663.988568ms] Apr 7 13:54:58.245: INFO: Created: latency-svc-9vwrl Apr 7 13:54:58.254: INFO: Got endpoints: latency-svc-9vwrl [670.448907ms] Apr 7 13:54:58.274: INFO: Created: latency-svc-wvv7z Apr 7 13:54:58.304: INFO: Got endpoints: latency-svc-wvv7z [640.49097ms] Apr 7 13:54:58.359: INFO: Created: latency-svc-gxgpl Apr 7 13:54:58.376: INFO: Got endpoints: latency-svc-gxgpl [683.674351ms] Apr 7 13:54:58.406: INFO: Created: latency-svc-nwmw7 Apr 7 
13:54:58.507: INFO: Got endpoints: latency-svc-nwmw7 [772.840537ms] Apr 7 13:54:58.514: INFO: Created: latency-svc-zt4br Apr 7 13:54:58.527: INFO: Got endpoints: latency-svc-zt4br [726.057192ms] Apr 7 13:54:58.550: INFO: Created: latency-svc-fnw4j Apr 7 13:54:58.569: INFO: Got endpoints: latency-svc-fnw4j [736.948133ms] Apr 7 13:54:58.592: INFO: Created: latency-svc-bkgq6 Apr 7 13:54:58.605: INFO: Got endpoints: latency-svc-bkgq6 [737.662196ms] Apr 7 13:54:58.657: INFO: Created: latency-svc-62qbw Apr 7 13:54:58.665: INFO: Got endpoints: latency-svc-62qbw [732.419126ms] Apr 7 13:54:58.713: INFO: Created: latency-svc-dwvlv Apr 7 13:54:58.813: INFO: Got endpoints: latency-svc-dwvlv [860.61574ms] Apr 7 13:54:58.857: INFO: Created: latency-svc-f7969 Apr 7 13:54:58.887: INFO: Got endpoints: latency-svc-f7969 [905.299933ms] Apr 7 13:54:58.969: INFO: Created: latency-svc-9z2wk Apr 7 13:54:58.984: INFO: Got endpoints: latency-svc-9z2wk [971.257501ms] Apr 7 13:54:59.043: INFO: Created: latency-svc-d2vjl Apr 7 13:54:59.118: INFO: Got endpoints: latency-svc-d2vjl [1.050174062s] Apr 7 13:54:59.133: INFO: Created: latency-svc-w6j9n Apr 7 13:54:59.158: INFO: Got endpoints: latency-svc-w6j9n [1.037106551s] Apr 7 13:54:59.194: INFO: Created: latency-svc-4kxfd Apr 7 13:54:59.268: INFO: Got endpoints: latency-svc-4kxfd [1.116962513s] Apr 7 13:54:59.269: INFO: Created: latency-svc-qgkcd Apr 7 13:54:59.325: INFO: Got endpoints: latency-svc-qgkcd [1.108112634s] Apr 7 13:54:59.448: INFO: Created: latency-svc-wtcp7 Apr 7 13:54:59.457: INFO: Got endpoints: latency-svc-wtcp7 [1.202756762s] Apr 7 13:54:59.488: INFO: Created: latency-svc-ffd8p Apr 7 13:54:59.501: INFO: Got endpoints: latency-svc-ffd8p [1.196917819s] Apr 7 13:54:59.518: INFO: Created: latency-svc-czn96 Apr 7 13:54:59.531: INFO: Got endpoints: latency-svc-czn96 [1.154956519s] Apr 7 13:54:59.586: INFO: Created: latency-svc-4k6l4 Apr 7 13:54:59.591: INFO: Got endpoints: latency-svc-4k6l4 [1.083748223s] Apr 7 13:54:59.662: INFO: 
Created: latency-svc-rxwtr Apr 7 13:54:59.681: INFO: Got endpoints: latency-svc-rxwtr [1.154617444s] Apr 7 13:54:59.730: INFO: Created: latency-svc-cfkv8 Apr 7 13:54:59.735: INFO: Got endpoints: latency-svc-cfkv8 [1.166116766s] Apr 7 13:54:59.757: INFO: Created: latency-svc-mq8qn Apr 7 13:54:59.772: INFO: Got endpoints: latency-svc-mq8qn [1.166860383s] Apr 7 13:54:59.794: INFO: Created: latency-svc-6s5v4 Apr 7 13:54:59.802: INFO: Got endpoints: latency-svc-6s5v4 [1.136840504s] Apr 7 13:54:59.823: INFO: Created: latency-svc-ftk4r Apr 7 13:54:59.857: INFO: Got endpoints: latency-svc-ftk4r [1.043935179s] Apr 7 13:54:59.877: INFO: Created: latency-svc-lh88j Apr 7 13:54:59.899: INFO: Got endpoints: latency-svc-lh88j [1.011466127s] Apr 7 13:54:59.919: INFO: Created: latency-svc-ldvgz Apr 7 13:54:59.935: INFO: Got endpoints: latency-svc-ldvgz [951.316476ms] Apr 7 13:54:59.999: INFO: Created: latency-svc-68m45 Apr 7 13:55:00.007: INFO: Got endpoints: latency-svc-68m45 [888.851491ms] Apr 7 13:55:00.028: INFO: Created: latency-svc-d2vxx Apr 7 13:55:00.038: INFO: Got endpoints: latency-svc-d2vxx [880.330266ms] Apr 7 13:55:00.161: INFO: Created: latency-svc-stwmt Apr 7 13:55:00.163: INFO: Got endpoints: latency-svc-stwmt [894.637997ms] Apr 7 13:55:00.202: INFO: Created: latency-svc-vtwk6 Apr 7 13:55:00.218: INFO: Got endpoints: latency-svc-vtwk6 [892.460521ms] Apr 7 13:55:00.243: INFO: Created: latency-svc-984qs Apr 7 13:55:00.292: INFO: Got endpoints: latency-svc-984qs [834.683676ms] Apr 7 13:55:00.303: INFO: Created: latency-svc-5l8qn Apr 7 13:55:00.314: INFO: Got endpoints: latency-svc-5l8qn [813.289324ms] Apr 7 13:55:00.334: INFO: Created: latency-svc-7qwlg Apr 7 13:55:00.357: INFO: Got endpoints: latency-svc-7qwlg [825.834275ms] Apr 7 13:55:00.387: INFO: Created: latency-svc-94z58 Apr 7 13:55:00.448: INFO: Got endpoints: latency-svc-94z58 [856.801402ms] Apr 7 13:55:00.455: INFO: Created: latency-svc-fdfnp Apr 7 13:55:00.477: INFO: Got endpoints: latency-svc-fdfnp 
[796.073273ms] Apr 7 13:55:00.507: INFO: Created: latency-svc-9ctnf Apr 7 13:55:00.543: INFO: Got endpoints: latency-svc-9ctnf [807.667429ms] Apr 7 13:55:00.597: INFO: Created: latency-svc-nshch Apr 7 13:55:00.609: INFO: Got endpoints: latency-svc-nshch [837.447647ms] Apr 7 13:55:00.634: INFO: Created: latency-svc-wxctk Apr 7 13:55:00.652: INFO: Got endpoints: latency-svc-wxctk [849.800124ms] Apr 7 13:55:00.675: INFO: Created: latency-svc-z8w5q Apr 7 13:55:00.717: INFO: Got endpoints: latency-svc-z8w5q [860.551417ms] Apr 7 13:55:00.735: INFO: Created: latency-svc-c2pkq Apr 7 13:55:00.749: INFO: Got endpoints: latency-svc-c2pkq [849.704996ms] Apr 7 13:55:00.765: INFO: Created: latency-svc-b59f4 Apr 7 13:55:00.779: INFO: Got endpoints: latency-svc-b59f4 [843.722966ms] Apr 7 13:55:00.801: INFO: Created: latency-svc-7smfj Apr 7 13:55:00.809: INFO: Got endpoints: latency-svc-7smfj [801.894864ms] Apr 7 13:55:00.869: INFO: Created: latency-svc-9j84h Apr 7 13:55:00.875: INFO: Got endpoints: latency-svc-9j84h [837.361675ms] Apr 7 13:55:00.897: INFO: Created: latency-svc-g96lr Apr 7 13:55:00.912: INFO: Got endpoints: latency-svc-g96lr [748.48632ms] Apr 7 13:55:00.934: INFO: Created: latency-svc-pfv8l Apr 7 13:55:00.948: INFO: Got endpoints: latency-svc-pfv8l [729.873977ms] Apr 7 13:55:01.017: INFO: Created: latency-svc-wgx82 Apr 7 13:55:01.048: INFO: Created: latency-svc-7b6kc Apr 7 13:55:01.048: INFO: Got endpoints: latency-svc-wgx82 [756.472951ms] Apr 7 13:55:01.083: INFO: Got endpoints: latency-svc-7b6kc [768.780836ms] Apr 7 13:55:01.208: INFO: Created: latency-svc-9wh57 Apr 7 13:55:01.239: INFO: Got endpoints: latency-svc-9wh57 [882.032622ms] Apr 7 13:55:01.271: INFO: Created: latency-svc-bb77w Apr 7 13:55:01.285: INFO: Got endpoints: latency-svc-bb77w [836.785919ms] Apr 7 13:55:01.305: INFO: Created: latency-svc-gzb4x Apr 7 13:55:01.346: INFO: Got endpoints: latency-svc-gzb4x [868.270277ms] Apr 7 13:55:01.359: INFO: Created: latency-svc-g2rjh Apr 7 13:55:01.375: INFO: 
Got endpoints: latency-svc-g2rjh [832.406759ms] Apr 7 13:55:01.395: INFO: Created: latency-svc-zslck Apr 7 13:55:01.405: INFO: Got endpoints: latency-svc-zslck [796.009493ms] Apr 7 13:55:01.426: INFO: Created: latency-svc-4phxq Apr 7 13:55:01.435: INFO: Got endpoints: latency-svc-4phxq [783.376341ms] Apr 7 13:55:01.508: INFO: Created: latency-svc-k4kv9 Apr 7 13:55:01.514: INFO: Got endpoints: latency-svc-k4kv9 [796.453828ms] Apr 7 13:55:01.533: INFO: Created: latency-svc-grqxn Apr 7 13:55:01.544: INFO: Got endpoints: latency-svc-grqxn [795.471958ms] Apr 7 13:55:01.563: INFO: Created: latency-svc-z527b Apr 7 13:55:01.575: INFO: Got endpoints: latency-svc-z527b [796.041057ms] Apr 7 13:55:01.593: INFO: Created: latency-svc-hns72 Apr 7 13:55:01.605: INFO: Got endpoints: latency-svc-hns72 [795.90073ms] Apr 7 13:55:01.654: INFO: Created: latency-svc-762cn Apr 7 13:55:01.678: INFO: Got endpoints: latency-svc-762cn [802.196106ms] Apr 7 13:55:01.701: INFO: Created: latency-svc-hfkln Apr 7 13:55:01.714: INFO: Got endpoints: latency-svc-hfkln [801.947144ms] Apr 7 13:55:01.738: INFO: Created: latency-svc-qnmfz Apr 7 13:55:01.782: INFO: Got endpoints: latency-svc-qnmfz [834.626637ms] Apr 7 13:55:01.798: INFO: Created: latency-svc-rsqhx Apr 7 13:55:01.810: INFO: Got endpoints: latency-svc-rsqhx [762.063382ms] Apr 7 13:55:01.833: INFO: Created: latency-svc-7mp4t Apr 7 13:55:01.857: INFO: Got endpoints: latency-svc-7mp4t [773.961164ms] Apr 7 13:55:01.927: INFO: Created: latency-svc-r67nr Apr 7 13:55:01.930: INFO: Got endpoints: latency-svc-r67nr [691.233009ms] Apr 7 13:55:01.990: INFO: Created: latency-svc-49h5n Apr 7 13:55:02.003: INFO: Got endpoints: latency-svc-49h5n [718.257139ms] Apr 7 13:55:02.027: INFO: Created: latency-svc-dbbrh Apr 7 13:55:02.083: INFO: Got endpoints: latency-svc-dbbrh [736.775592ms] Apr 7 13:55:02.088: INFO: Created: latency-svc-rkqkj Apr 7 13:55:02.093: INFO: Got endpoints: latency-svc-rkqkj [717.728868ms] Apr 7 13:55:02.115: INFO: Created: 
latency-svc-m6s8m Apr 7 13:55:02.130: INFO: Got endpoints: latency-svc-m6s8m [724.540726ms] Apr 7 13:55:02.158: INFO: Created: latency-svc-vh2fx Apr 7 13:55:02.208: INFO: Got endpoints: latency-svc-vh2fx [772.29671ms] Apr 7 13:55:02.265: INFO: Created: latency-svc-k2hfv Apr 7 13:55:02.274: INFO: Got endpoints: latency-svc-k2hfv [760.203082ms] Apr 7 13:55:02.295: INFO: Created: latency-svc-v528z Apr 7 13:55:02.334: INFO: Got endpoints: latency-svc-v528z [789.796876ms] Apr 7 13:55:02.349: INFO: Created: latency-svc-6jfxd Apr 7 13:55:02.364: INFO: Got endpoints: latency-svc-6jfxd [789.505965ms] Apr 7 13:55:02.385: INFO: Created: latency-svc-dsbkr Apr 7 13:55:02.401: INFO: Got endpoints: latency-svc-dsbkr [796.290095ms] Apr 7 13:55:02.421: INFO: Created: latency-svc-thcg9 Apr 7 13:55:02.453: INFO: Got endpoints: latency-svc-thcg9 [775.730457ms] Apr 7 13:55:02.481: INFO: Created: latency-svc-5lbw4 Apr 7 13:55:02.498: INFO: Got endpoints: latency-svc-5lbw4 [784.729429ms] Apr 7 13:55:02.518: INFO: Created: latency-svc-rgdzj Apr 7 13:55:02.528: INFO: Got endpoints: latency-svc-rgdzj [745.467053ms] Apr 7 13:55:02.547: INFO: Created: latency-svc-vcnh2 Apr 7 13:55:02.603: INFO: Got endpoints: latency-svc-vcnh2 [793.026729ms] Apr 7 13:55:02.619: INFO: Created: latency-svc-h2v8q Apr 7 13:55:02.636: INFO: Got endpoints: latency-svc-h2v8q [779.434006ms] Apr 7 13:55:02.674: INFO: Created: latency-svc-xqmtt Apr 7 13:55:02.759: INFO: Got endpoints: latency-svc-xqmtt [828.370968ms] Apr 7 13:55:02.775: INFO: Created: latency-svc-kfrb4 Apr 7 13:55:02.811: INFO: Got endpoints: latency-svc-kfrb4 [807.705791ms] Apr 7 13:55:02.847: INFO: Created: latency-svc-754dj Apr 7 13:55:02.896: INFO: Got endpoints: latency-svc-754dj [813.761992ms] Apr 7 13:55:02.900: INFO: Created: latency-svc-hkhcs Apr 7 13:55:02.907: INFO: Got endpoints: latency-svc-hkhcs [813.820779ms] Apr 7 13:55:02.932: INFO: Created: latency-svc-fznxf Apr 7 13:55:02.944: INFO: Got endpoints: latency-svc-fznxf [813.604507ms] Apr 
7 13:55:02.967: INFO: Created: latency-svc-b4m5c Apr 7 13:55:02.980: INFO: Got endpoints: latency-svc-b4m5c [772.183438ms] Apr 7 13:55:03.041: INFO: Created: latency-svc-4hvmv Apr 7 13:55:03.045: INFO: Got endpoints: latency-svc-4hvmv [771.493706ms] Apr 7 13:55:03.075: INFO: Created: latency-svc-7wjml Apr 7 13:55:03.088: INFO: Got endpoints: latency-svc-7wjml [754.252941ms] Apr 7 13:55:03.105: INFO: Created: latency-svc-ngwd7 Apr 7 13:55:03.119: INFO: Got endpoints: latency-svc-ngwd7 [754.303034ms] Apr 7 13:55:03.138: INFO: Created: latency-svc-f9fmv Apr 7 13:55:03.190: INFO: Got endpoints: latency-svc-f9fmv [788.525519ms] Apr 7 13:55:03.192: INFO: Created: latency-svc-ljhwb Apr 7 13:55:03.197: INFO: Got endpoints: latency-svc-ljhwb [743.371754ms] Apr 7 13:55:03.225: INFO: Created: latency-svc-284q7 Apr 7 13:55:03.239: INFO: Got endpoints: latency-svc-284q7 [740.992236ms] Apr 7 13:55:03.261: INFO: Created: latency-svc-zqnjl Apr 7 13:55:03.276: INFO: Got endpoints: latency-svc-zqnjl [747.723604ms] Apr 7 13:55:03.340: INFO: Created: latency-svc-hdht7 Apr 7 13:55:03.350: INFO: Got endpoints: latency-svc-hdht7 [747.016254ms] Apr 7 13:55:03.381: INFO: Created: latency-svc-nf6kj Apr 7 13:55:03.390: INFO: Got endpoints: latency-svc-nf6kj [753.759522ms] Apr 7 13:55:03.410: INFO: Created: latency-svc-f9vjt Apr 7 13:55:03.420: INFO: Got endpoints: latency-svc-f9vjt [661.694574ms] Apr 7 13:55:03.497: INFO: Created: latency-svc-vxndf Apr 7 13:55:03.499: INFO: Got endpoints: latency-svc-vxndf [688.529025ms] Apr 7 13:55:03.543: INFO: Created: latency-svc-kljqt Apr 7 13:55:03.560: INFO: Got endpoints: latency-svc-kljqt [663.204503ms] Apr 7 13:55:03.579: INFO: Created: latency-svc-nlnkc Apr 7 13:55:03.589: INFO: Got endpoints: latency-svc-nlnkc [682.496838ms] Apr 7 13:55:03.639: INFO: Created: latency-svc-92gv7 Apr 7 13:55:03.643: INFO: Got endpoints: latency-svc-92gv7 [699.668604ms] Apr 7 13:55:03.699: INFO: Created: latency-svc-s8mqk Apr 7 13:55:03.777: INFO: Created: 
latency-svc-p2xrb Apr 7 13:55:03.778: INFO: Got endpoints: latency-svc-s8mqk [797.692514ms] Apr 7 13:55:03.779: INFO: Got endpoints: latency-svc-p2xrb [733.784966ms] Apr 7 13:55:03.812: INFO: Created: latency-svc-gfnlh Apr 7 13:55:03.825: INFO: Got endpoints: latency-svc-gfnlh [736.236365ms] Apr 7 13:55:03.849: INFO: Created: latency-svc-pd8mb Apr 7 13:55:03.872: INFO: Got endpoints: latency-svc-pd8mb [753.586799ms] Apr 7 13:55:03.932: INFO: Created: latency-svc-nf5dd Apr 7 13:55:03.945: INFO: Got endpoints: latency-svc-nf5dd [755.214139ms] Apr 7 13:55:03.969: INFO: Created: latency-svc-qswf4 Apr 7 13:55:03.981: INFO: Got endpoints: latency-svc-qswf4 [784.126131ms] Apr 7 13:55:04.002: INFO: Created: latency-svc-z2dvz Apr 7 13:55:04.012: INFO: Got endpoints: latency-svc-z2dvz [772.121522ms] Apr 7 13:55:04.030: INFO: Created: latency-svc-c2k7j Apr 7 13:55:04.077: INFO: Got endpoints: latency-svc-c2k7j [801.018579ms] Apr 7 13:55:04.082: INFO: Created: latency-svc-x9vbp Apr 7 13:55:04.090: INFO: Got endpoints: latency-svc-x9vbp [739.486923ms] Apr 7 13:55:04.112: INFO: Created: latency-svc-4z8rt Apr 7 13:55:04.143: INFO: Got endpoints: latency-svc-4z8rt [752.432771ms] Apr 7 13:55:04.244: INFO: Created: latency-svc-p6r4g Apr 7 13:55:04.247: INFO: Got endpoints: latency-svc-p6r4g [826.459101ms] Apr 7 13:55:04.300: INFO: Created: latency-svc-x5z5b Apr 7 13:55:04.307: INFO: Got endpoints: latency-svc-x5z5b [807.558741ms] Apr 7 13:55:04.329: INFO: Created: latency-svc-4msvx Apr 7 13:55:04.343: INFO: Got endpoints: latency-svc-4msvx [783.518735ms] Apr 7 13:55:04.394: INFO: Created: latency-svc-46l2c Apr 7 13:55:04.398: INFO: Got endpoints: latency-svc-46l2c [808.080878ms] Apr 7 13:55:04.418: INFO: Created: latency-svc-hdhs4 Apr 7 13:55:04.448: INFO: Got endpoints: latency-svc-hdhs4 [804.887854ms] Apr 7 13:55:04.472: INFO: Created: latency-svc-f9gzg Apr 7 13:55:04.488: INFO: Got endpoints: latency-svc-f9gzg [710.429071ms] Apr 7 13:55:04.561: INFO: Created: latency-svc-fmf89 
Apr 7 13:55:04.566: INFO: Got endpoints: latency-svc-fmf89 [786.859545ms] Apr 7 13:55:04.588: INFO: Created: latency-svc-xts57 Apr 7 13:55:04.603: INFO: Got endpoints: latency-svc-xts57 [777.979191ms] Apr 7 13:55:04.623: INFO: Created: latency-svc-nxkft Apr 7 13:55:04.639: INFO: Got endpoints: latency-svc-nxkft [766.460966ms] Apr 7 13:55:04.658: INFO: Created: latency-svc-qpjm4 Apr 7 13:55:04.717: INFO: Got endpoints: latency-svc-qpjm4 [771.636229ms] Apr 7 13:55:04.724: INFO: Created: latency-svc-nbjlh Apr 7 13:55:04.761: INFO: Got endpoints: latency-svc-nbjlh [779.525202ms] Apr 7 13:55:04.803: INFO: Created: latency-svc-vt5mg Apr 7 13:55:04.849: INFO: Got endpoints: latency-svc-vt5mg [837.00251ms] Apr 7 13:55:04.863: INFO: Created: latency-svc-qjms8 Apr 7 13:55:04.874: INFO: Got endpoints: latency-svc-qjms8 [796.826894ms] Apr 7 13:55:04.893: INFO: Created: latency-svc-cgd6w Apr 7 13:55:04.904: INFO: Got endpoints: latency-svc-cgd6w [813.859795ms] Apr 7 13:55:04.929: INFO: Created: latency-svc-6rwbn Apr 7 13:55:04.940: INFO: Got endpoints: latency-svc-6rwbn [797.510747ms] Apr 7 13:55:04.993: INFO: Created: latency-svc-zcp8x Apr 7 13:55:05.001: INFO: Got endpoints: latency-svc-zcp8x [753.991372ms] Apr 7 13:55:05.025: INFO: Created: latency-svc-zlzn4 Apr 7 13:55:05.037: INFO: Got endpoints: latency-svc-zlzn4 [730.241398ms] Apr 7 13:55:05.055: INFO: Created: latency-svc-p7rpx Apr 7 13:55:05.067: INFO: Got endpoints: latency-svc-p7rpx [724.136939ms] Apr 7 13:55:05.085: INFO: Created: latency-svc-2z9bl Apr 7 13:55:05.136: INFO: Got endpoints: latency-svc-2z9bl [738.440525ms] Apr 7 13:55:05.139: INFO: Created: latency-svc-gmfvw Apr 7 13:55:05.158: INFO: Got endpoints: latency-svc-gmfvw [710.005522ms] Apr 7 13:55:05.187: INFO: Created: latency-svc-mztv7 Apr 7 13:55:05.200: INFO: Got endpoints: latency-svc-mztv7 [711.772682ms] Apr 7 13:55:05.200: INFO: Latencies: [53.004771ms 97.033528ms 133.967312ms 159.054476ms 223.46866ms 280.815426ms 308.049814ms 338.091989ms 
368.378694ms 458.492012ms 500.931063ms 567.279887ms 609.597351ms 613.049752ms 615.389818ms 634.892621ms 636.54021ms 640.130336ms 640.49097ms 651.790464ms 654.800082ms 661.694574ms 663.204503ms 663.988568ms 665.499409ms 667.0756ms 670.448907ms 675.050776ms 679.790793ms 680.674377ms 682.496838ms 683.674351ms 688.478306ms 688.529025ms 691.233009ms 695.204172ms 699.554362ms 699.668604ms 699.908873ms 700.764336ms 702.866958ms 706.19638ms 710.005522ms 710.429071ms 711.772682ms 712.219278ms 715.840977ms 717.635967ms 717.728868ms 718.257139ms 718.849534ms 719.026878ms 720.050056ms 721.296736ms 724.136939ms 724.540726ms 724.576932ms 725.749253ms 726.057192ms 729.873977ms 730.128897ms 730.169361ms 730.241398ms 730.921326ms 732.419126ms 733.09057ms 733.784966ms 735.889806ms 736.236365ms 736.480736ms 736.775592ms 736.846218ms 736.948133ms 737.662196ms 738.440525ms 739.486923ms 740.992236ms 742.72076ms 743.371754ms 745.467053ms 747.016254ms 747.723604ms 747.950698ms 748.48632ms 752.12883ms 752.432771ms 753.586799ms 753.759522ms 753.991372ms 753.995413ms 754.252941ms 754.303034ms 754.456595ms 754.518262ms 755.214139ms 756.472951ms 759.589651ms 759.935436ms 760.081323ms 760.203082ms 762.063382ms 763.317955ms 766.460966ms 768.780836ms 769.302507ms 771.493706ms 771.636229ms 772.121522ms 772.183438ms 772.29671ms 772.305569ms 772.840537ms 773.961164ms 775.730457ms 777.979191ms 779.434006ms 779.525202ms 783.015111ms 783.376341ms 783.518735ms 783.925478ms 784.126131ms 784.140603ms 784.729429ms 786.859545ms 788.525519ms 789.505965ms 789.796876ms 791.807085ms 793.026729ms 795.471958ms 795.90073ms 796.009493ms 796.041057ms 796.073273ms 796.290095ms 796.453828ms 796.826894ms 797.180172ms 797.510747ms 797.692514ms 801.018579ms 801.894864ms 801.947144ms 802.196106ms 803.685621ms 804.887854ms 807.558741ms 807.667429ms 807.705791ms 808.080878ms 811.044215ms 813.289324ms 813.604507ms 813.761992ms 813.820779ms 813.859795ms 814.365728ms 817.037666ms 825.834275ms 826.459101ms 828.370968ms 
832.406759ms 834.626637ms 834.683676ms 835.461193ms 836.785919ms 837.00251ms 837.361675ms 837.447647ms 843.722966ms 846.620845ms 849.704996ms 849.800124ms 856.801402ms 860.551417ms 860.61574ms 868.270277ms 880.330266ms 882.032622ms 888.851491ms 892.460521ms 894.637997ms 905.299933ms 951.316476ms 971.257501ms 1.011466127s 1.037106551s 1.043935179s 1.050174062s 1.083748223s 1.108112634s 1.116962513s 1.136840504s 1.154617444s 1.154956519s 1.166116766s 1.166860383s 1.196917819s 1.202756762s]
Apr 7 13:55:05.200: INFO: 50 %ile: 762.063382ms
Apr 7 13:55:05.200: INFO: 90 %ile: 888.851491ms
Apr 7 13:55:05.200: INFO: 99 %ile: 1.196917819s
Apr 7 13:55:05.200: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:55:05.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-5393" for this suite.
Apr 7 13:55:37.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:55:37.289: INFO: namespace svc-latency-5393 deletion completed in 32.081831972s
• [SLOW TEST:46.793 seconds]
[sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:55:37.289: INFO: >>> kubeConfig:
/root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Apr 7 13:55:37.372: INFO: Waiting up to 5m0s for pod "client-containers-62de4dd2-2cb4-43e0-9b5d-936147c75e87" in namespace "containers-723" to be "success or failure"
Apr 7 13:55:37.388: INFO: Pod "client-containers-62de4dd2-2cb4-43e0-9b5d-936147c75e87": Phase="Pending", Reason="", readiness=false. Elapsed: 15.632894ms
Apr 7 13:55:39.392: INFO: Pod "client-containers-62de4dd2-2cb4-43e0-9b5d-936147c75e87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019901411s
Apr 7 13:55:41.396: INFO: Pod "client-containers-62de4dd2-2cb4-43e0-9b5d-936147c75e87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024071818s
STEP: Saw pod success
Apr 7 13:55:41.396: INFO: Pod "client-containers-62de4dd2-2cb4-43e0-9b5d-936147c75e87" satisfied condition "success or failure"
Apr 7 13:55:41.399: INFO: Trying to get logs from node iruya-worker2 pod client-containers-62de4dd2-2cb4-43e0-9b5d-936147c75e87 container test-container:
STEP: delete the pod
Apr 7 13:55:41.434: INFO: Waiting for pod client-containers-62de4dd2-2cb4-43e0-9b5d-936147c75e87 to disappear
Apr 7 13:55:41.439: INFO: Pod client-containers-62de4dd2-2cb4-43e0-9b5d-936147c75e87 no longer exists
[AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:55:41.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-723" for this suite.
Apr 7 13:55:47.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:55:47.546: INFO: namespace containers-723 deletion completed in 6.100453897s
• [SLOW TEST:10.257 seconds]
[k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:55:47.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 7 13:55:47.603: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f36882d4-ad5c-42ef-81d6-2965edfe2dd7" in namespace "projected-878" to be "success or failure"
Apr 7 13:55:47.617: INFO: Pod "downwardapi-volume-f36882d4-ad5c-42ef-81d6-2965edfe2dd7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.890009ms
Apr 7 13:55:49.621: INFO: Pod "downwardapi-volume-f36882d4-ad5c-42ef-81d6-2965edfe2dd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018545553s
Apr 7 13:55:51.625: INFO: Pod "downwardapi-volume-f36882d4-ad5c-42ef-81d6-2965edfe2dd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022450488s
STEP: Saw pod success
Apr 7 13:55:51.625: INFO: Pod "downwardapi-volume-f36882d4-ad5c-42ef-81d6-2965edfe2dd7" satisfied condition "success or failure"
Apr 7 13:55:51.628: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f36882d4-ad5c-42ef-81d6-2965edfe2dd7 container client-container:
STEP: delete the pod
Apr 7 13:55:51.661: INFO: Waiting for pod downwardapi-volume-f36882d4-ad5c-42ef-81d6-2965edfe2dd7 to disappear
Apr 7 13:55:51.673: INFO: Pod downwardapi-volume-f36882d4-ad5c-42ef-81d6-2965edfe2dd7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:55:51.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-878" for this suite.
Apr 7 13:55:57.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:55:57.766: INFO: namespace projected-878 deletion completed in 6.087415515s
• [SLOW TEST:10.220 seconds]
[sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:55:57.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 7 13:55:57.864: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Apr 7 13:55:57.889: INFO: Number of nodes with available pods: 0
Apr 7 13:55:57.889: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Apr 7 13:55:57.924: INFO: Number of nodes with available pods: 0
Apr 7 13:55:57.924: INFO: Node iruya-worker is running more than one daemon pod
Apr 7 13:55:59.000: INFO: Number of nodes with available pods: 0
Apr 7 13:55:59.000: INFO: Node iruya-worker is running more than one daemon pod
Apr 7 13:55:59.929: INFO: Number of nodes with available pods: 0
Apr 7 13:55:59.929: INFO: Node iruya-worker is running more than one daemon pod
Apr 7 13:56:00.929: INFO: Number of nodes with available pods: 0
Apr 7 13:56:00.929: INFO: Node iruya-worker is running more than one daemon pod
Apr 7 13:56:01.929: INFO: Number of nodes with available pods: 1
Apr 7 13:56:01.929: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Apr 7 13:56:01.960: INFO: Number of nodes with available pods: 1
Apr 7 13:56:01.960: INFO: Number of running nodes: 0, number of available pods: 1
Apr 7 13:56:02.963: INFO: Number of nodes with available pods: 0
Apr 7 13:56:02.963: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Apr 7 13:56:03.018: INFO: Number of nodes with available pods: 0
Apr 7 13:56:03.018: INFO: Node iruya-worker is running more than one daemon pod
Apr 7 13:56:04.022: INFO: Number of nodes with available pods: 0
Apr 7 13:56:04.022: INFO: Node iruya-worker is running more than one daemon pod
Apr 7 13:56:05.022: INFO: Number of nodes with available pods: 0
Apr 7 13:56:05.022: INFO: Node iruya-worker is running more than one daemon pod
Apr 7 13:56:06.022: INFO: Number of nodes with available pods: 0
Apr 7 13:56:06.022: INFO: Node iruya-worker is running more than one daemon pod
Apr 7 13:56:07.022: INFO: Number of nodes with available pods: 0
Apr 7 13:56:07.022: INFO: Node iruya-worker is running more than one daemon pod
Apr 7 13:56:08.022: INFO: Number of nodes with available pods: 0
Apr 7 13:56:08.022: INFO: Node iruya-worker is running more than one daemon pod
Apr 7 13:56:09.022: INFO: Number of nodes with available pods: 0
Apr 7 13:56:09.022: INFO: Node iruya-worker is running more than one daemon pod
Apr 7 13:56:10.022: INFO: Number of nodes with available pods: 0
Apr 7 13:56:10.022: INFO: Node iruya-worker is running more than one daemon pod
Apr 7 13:56:11.035: INFO: Number of nodes with available pods: 0
Apr 7 13:56:11.035: INFO: Node iruya-worker is running more than one daemon pod
Apr 7 13:56:12.022: INFO: Number of nodes with available pods: 0
Apr 7 13:56:12.022: INFO: Node iruya-worker is running more than one daemon pod
Apr 7 13:56:13.030: INFO: Number of nodes with available pods: 0
Apr 7 13:56:13.030: INFO: Node iruya-worker is running more than one daemon pod
Apr 7 13:56:14.022: INFO: Number of nodes with available pods: 0
Apr 7 13:56:14.022: INFO: Node iruya-worker is running more than one daemon pod
Apr 7 13:56:15.022: INFO: Number of nodes with available pods: 0
Apr 7 13:56:15.022: INFO: Node iruya-worker is running more than one daemon pod
Apr 7 13:56:16.022: INFO: Number of nodes with available pods: 1
Apr 7 13:56:16.023: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9599, will wait for the garbage collector to delete the pods
Apr 7 13:56:16.088: INFO: Deleting DaemonSet.extensions daemon-set took: 6.013284ms
Apr 7 13:56:16.388: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.216563ms
Apr 7 13:56:22.191: INFO: Number of nodes with available pods: 0
Apr 7 13:56:22.191: INFO: Number of running nodes: 0, number of available pods: 0
Apr 7 13:56:22.194: INFO: daemonset:
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9599/daemonsets","resourceVersion":"4132657"},"items":null}
Apr 7 13:56:22.197: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9599/pods","resourceVersion":"4132657"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:56:22.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9599" for this suite.
Apr 7 13:56:28.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:56:28.335: INFO: namespace daemonsets-9599 deletion completed in 6.106892031s
• [SLOW TEST:30.568 seconds]
[sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:56:28.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Apr 7 13:56:28.378: INFO: Waiting up to 1m0s for
all (but 0) nodes to be ready
Apr 7 13:56:28.400: INFO: Waiting for terminating namespaces to be deleted...
Apr 7 13:56:28.402: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Apr 7 13:56:28.406: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 7 13:56:28.406: INFO: Container kube-proxy ready: true, restart count 0
Apr 7 13:56:28.406: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 7 13:56:28.406: INFO: Container kindnet-cni ready: true, restart count 0
Apr 7 13:56:28.406: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Apr 7 13:56:28.412: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
Apr 7 13:56:28.412: INFO: Container kube-proxy ready: true, restart count 0
Apr 7 13:56:28.412: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
Apr 7 13:56:28.412: INFO: Container kindnet-cni ready: true, restart count 0
Apr 7 13:56:28.412: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
Apr 7 13:56:28.412: INFO: Container coredns ready: true, restart count 0
Apr 7 13:56:28.412: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
Apr 7 13:56:28.412: INFO: Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Apr 7 13:56:28.458: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2
Apr 7 13:56:28.458: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2
Apr 7 13:56:28.458: INFO: Pod kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker
Apr 7 13:56:28.458: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2
Apr 7 13:56:28.458: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker
Apr 7 13:56:28.458: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-395b296a-258b-4569-b467-cc5c71b73f9e.16038e26328be564], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6749/filler-pod-395b296a-258b-4569-b467-cc5c71b73f9e to iruya-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-395b296a-258b-4569-b467-cc5c71b73f9e.16038e267cf44858], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-395b296a-258b-4569-b467-cc5c71b73f9e.16038e26c4090bd4], Reason = [Created], Message = [Created container filler-pod-395b296a-258b-4569-b467-cc5c71b73f9e]
STEP: Considering event: Type = [Normal], Name = [filler-pod-395b296a-258b-4569-b467-cc5c71b73f9e.16038e26d9693ad9], Reason = [Started], Message = [Started container filler-pod-395b296a-258b-4569-b467-cc5c71b73f9e]
STEP: Considering event: Type = [Normal], Name = [filler-pod-dcf373cb-13ba-4e8d-a620-64dd16842361.16038e2635cdcf1d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6749/filler-pod-dcf373cb-13ba-4e8d-a620-64dd16842361 to iruya-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-dcf373cb-13ba-4e8d-a620-64dd16842361.16038e26bb52be39], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-dcf373cb-13ba-4e8d-a620-64dd16842361.16038e26e6683dec], Reason = [Created], Message = [Created container filler-pod-dcf373cb-13ba-4e8d-a620-64dd16842361]
STEP: Considering event: Type = [Normal], Name = [filler-pod-dcf373cb-13ba-4e8d-a620-64dd16842361.16038e26f57bd367], Reason = [Started], Message = [Started container filler-pod-dcf373cb-13ba-4e8d-a620-64dd16842361]
STEP: Considering event: Type = [Warning], Name = [additional-pod.16038e27252fc3a8], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:56:33.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6749" for this suite.
Apr 7 13:56:39.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 13:56:39.758: INFO: namespace sched-pred-6749 deletion completed in 6.088991423s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:11.423 seconds]
[sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 13:56:39.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-99dcb27d-bb16-42c3-98ab-1e9e277242f5
STEP: Creating a pod to test consume secrets
Apr 7 13:56:39.888: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-48fb6efb-68c0-4fad-a2ca-5c83cfbedd1d" in namespace "projected-7028" to be "success or failure"
Apr 7 13:56:39.892: INFO: Pod
"pod-projected-secrets-48fb6efb-68c0-4fad-a2ca-5c83cfbedd1d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.95812ms
Apr 7 13:56:41.897: INFO: Pod "pod-projected-secrets-48fb6efb-68c0-4fad-a2ca-5c83cfbedd1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008886463s
Apr 7 13:56:43.901: INFO: Pod "pod-projected-secrets-48fb6efb-68c0-4fad-a2ca-5c83cfbedd1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013017333s
STEP: Saw pod success
Apr 7 13:56:43.901: INFO: Pod "pod-projected-secrets-48fb6efb-68c0-4fad-a2ca-5c83cfbedd1d" satisfied condition "success or failure"
Apr 7 13:56:43.905: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-48fb6efb-68c0-4fad-a2ca-5c83cfbedd1d container projected-secret-volume-test:
STEP: delete the pod
Apr 7 13:56:43.921: INFO: Waiting for pod pod-projected-secrets-48fb6efb-68c0-4fad-a2ca-5c83cfbedd1d to disappear
Apr 7 13:56:43.926: INFO: Pod pod-projected-secrets-48fb6efb-68c0-4fad-a2ca-5c83cfbedd1d no longer exists
[AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 13:56:43.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7028" for this suite.
Apr 7 13:56:49.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:56:50.018: INFO: namespace projected-7028 deletion completed in 6.088859197s • [SLOW TEST:10.260 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:56:50.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Apr 7 13:56:50.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8540' Apr 7 13:56:50.307: INFO: stderr: "" Apr 7 13:56:50.307: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in 
name=update-demo pods to come up. Apr 7 13:56:50.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8540' Apr 7 13:56:50.414: INFO: stderr: "" Apr 7 13:56:50.414: INFO: stdout: "update-demo-nautilus-65k2b update-demo-nautilus-rhlqv " Apr 7 13:56:50.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-65k2b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8540' Apr 7 13:56:50.500: INFO: stderr: "" Apr 7 13:56:50.500: INFO: stdout: "" Apr 7 13:56:50.500: INFO: update-demo-nautilus-65k2b is created but not running Apr 7 13:56:55.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8540' Apr 7 13:56:55.599: INFO: stderr: "" Apr 7 13:56:55.599: INFO: stdout: "update-demo-nautilus-65k2b update-demo-nautilus-rhlqv " Apr 7 13:56:55.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-65k2b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8540' Apr 7 13:56:55.692: INFO: stderr: "" Apr 7 13:56:55.692: INFO: stdout: "true" Apr 7 13:56:55.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-65k2b -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8540' Apr 7 13:56:55.785: INFO: stderr: "" Apr 7 13:56:55.785: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 7 13:56:55.785: INFO: validating pod update-demo-nautilus-65k2b Apr 7 13:56:55.789: INFO: got data: { "image": "nautilus.jpg" } Apr 7 13:56:55.789: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 7 13:56:55.789: INFO: update-demo-nautilus-65k2b is verified up and running Apr 7 13:56:55.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rhlqv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8540' Apr 7 13:56:55.879: INFO: stderr: "" Apr 7 13:56:55.879: INFO: stdout: "true" Apr 7 13:56:55.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rhlqv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8540' Apr 7 13:56:55.973: INFO: stderr: "" Apr 7 13:56:55.973: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 7 13:56:55.973: INFO: validating pod update-demo-nautilus-rhlqv Apr 7 13:56:55.977: INFO: got data: { "image": "nautilus.jpg" } Apr 7 13:56:55.977: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 7 13:56:55.977: INFO: update-demo-nautilus-rhlqv is verified up and running STEP: using delete to clean up resources Apr 7 13:56:55.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8540' Apr 7 13:56:56.080: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 7 13:56:56.080: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 7 13:56:56.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8540' Apr 7 13:56:56.183: INFO: stderr: "No resources found.\n" Apr 7 13:56:56.183: INFO: stdout: "" Apr 7 13:56:56.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8540 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 7 13:56:56.276: INFO: stderr: "" Apr 7 13:56:56.276: INFO: stdout: "update-demo-nautilus-65k2b\nupdate-demo-nautilus-rhlqv\n" Apr 7 13:56:56.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8540' Apr 7 13:56:56.876: INFO: stderr: "No resources found.\n" Apr 7 13:56:56.876: INFO: stdout: "" Apr 7 13:56:56.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8540 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 7 13:56:56.961: INFO: stderr: "" Apr 7 13:56:56.961: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:56:56.961: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8540" for this suite. Apr 7 13:57:02.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:57:03.064: INFO: namespace kubectl-8540 deletion completed in 6.099781471s • [SLOW TEST:13.045 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:57:03.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 7 13:57:03.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7212' Apr 7 13:57:03.393: INFO: stderr: "" Apr 7 13:57:03.393: INFO: stdout: 
"replicationcontroller/redis-master created\n" Apr 7 13:57:03.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7212' Apr 7 13:57:03.667: INFO: stderr: "" Apr 7 13:57:03.667: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Apr 7 13:57:04.672: INFO: Selector matched 1 pods for map[app:redis] Apr 7 13:57:04.672: INFO: Found 0 / 1 Apr 7 13:57:05.681: INFO: Selector matched 1 pods for map[app:redis] Apr 7 13:57:05.681: INFO: Found 0 / 1 Apr 7 13:57:06.683: INFO: Selector matched 1 pods for map[app:redis] Apr 7 13:57:06.683: INFO: Found 1 / 1 Apr 7 13:57:06.683: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 7 13:57:06.686: INFO: Selector matched 1 pods for map[app:redis] Apr 7 13:57:06.686: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 7 13:57:06.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-qgdwc --namespace=kubectl-7212' Apr 7 13:57:09.127: INFO: stderr: "" Apr 7 13:57:09.127: INFO: stdout: "Name: redis-master-qgdwc\nNamespace: kubectl-7212\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Tue, 07 Apr 2020 13:57:03 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.84\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://6946bd31f24da2d6b3ef17f8717e0fb88ec0b2505131f232dd322140d30731a0\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 07 Apr 2020 13:57:05 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-cr9bb (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n 
PodScheduled True \nVolumes:\n default-token-cr9bb:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-cr9bb\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6s default-scheduler Successfully assigned kubectl-7212/redis-master-qgdwc to iruya-worker\n Normal Pulled 5s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 4s kubelet, iruya-worker Created container redis-master\n Normal Started 4s kubelet, iruya-worker Started container redis-master\n" Apr 7 13:57:09.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-7212' Apr 7 13:57:09.253: INFO: stderr: "" Apr 7 13:57:09.253: INFO: stdout: "Name: redis-master\nNamespace: kubectl-7212\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 6s replication-controller Created pod: redis-master-qgdwc\n" Apr 7 13:57:09.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-7212' Apr 7 13:57:09.371: INFO: stderr: "" Apr 7 13:57:09.371: INFO: stdout: "Name: redis-master\nNamespace: kubectl-7212\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.104.84.52\nPort: 6379/TCP\nTargetPort: 
redis-server/TCP\nEndpoints: 10.244.2.84:6379\nSession Affinity: None\nEvents: \n" Apr 7 13:57:09.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Apr 7 13:57:09.497: INFO: stderr: "" Apr 7 13:57:09.497: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 07 Apr 2020 13:56:24 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 07 Apr 2020 13:56:24 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 07 Apr 2020 13:56:24 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 07 Apr 2020 13:56:24 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: 
ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 22d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 22d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 22d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 22d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Apr 7 13:57:09.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-7212' Apr 7 13:57:09.601: INFO: stderr: "" Apr 7 13:57:09.601: INFO: stdout: "Name: kubectl-7212\nLabels: e2e-framework=kubectl\n e2e-run=32c6632e-30c8-403d-ba4a-6086075e4cf4\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:57:09.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7212" for this suite. 
Apr 7 13:57:31.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:57:31.757: INFO: namespace kubectl-7212 deletion completed in 22.151720567s • [SLOW TEST:28.692 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:57:31.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Apr 7 13:57:35.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-25a1932b-b62d-41b9-b488-baac1f7d4038 -c busybox-main-container --namespace=emptydir-5865 -- cat /usr/share/volumeshare/shareddata.txt' Apr 7 13:57:36.075: INFO: stderr: "I0407 13:57:36.000928 2193 log.go:172] (0xc000732420) (0xc000730820) Create stream\nI0407 13:57:36.000987 2193 
log.go:172] (0xc000732420) (0xc000730820) Stream added, broadcasting: 1\nI0407 13:57:36.003561 2193 log.go:172] (0xc000732420) Reply frame received for 1\nI0407 13:57:36.003605 2193 log.go:172] (0xc000732420) (0xc0003bc140) Create stream\nI0407 13:57:36.003616 2193 log.go:172] (0xc000732420) (0xc0003bc140) Stream added, broadcasting: 3\nI0407 13:57:36.004658 2193 log.go:172] (0xc000732420) Reply frame received for 3\nI0407 13:57:36.004705 2193 log.go:172] (0xc000732420) (0xc000738000) Create stream\nI0407 13:57:36.004738 2193 log.go:172] (0xc000732420) (0xc000738000) Stream added, broadcasting: 5\nI0407 13:57:36.005831 2193 log.go:172] (0xc000732420) Reply frame received for 5\nI0407 13:57:36.069055 2193 log.go:172] (0xc000732420) Data frame received for 5\nI0407 13:57:36.069106 2193 log.go:172] (0xc000738000) (5) Data frame handling\nI0407 13:57:36.069312 2193 log.go:172] (0xc000732420) Data frame received for 3\nI0407 13:57:36.069359 2193 log.go:172] (0xc0003bc140) (3) Data frame handling\nI0407 13:57:36.069377 2193 log.go:172] (0xc0003bc140) (3) Data frame sent\nI0407 13:57:36.069390 2193 log.go:172] (0xc000732420) Data frame received for 3\nI0407 13:57:36.069400 2193 log.go:172] (0xc0003bc140) (3) Data frame handling\nI0407 13:57:36.071175 2193 log.go:172] (0xc000732420) Data frame received for 1\nI0407 13:57:36.071197 2193 log.go:172] (0xc000730820) (1) Data frame handling\nI0407 13:57:36.071213 2193 log.go:172] (0xc000730820) (1) Data frame sent\nI0407 13:57:36.071226 2193 log.go:172] (0xc000732420) (0xc000730820) Stream removed, broadcasting: 1\nI0407 13:57:36.071331 2193 log.go:172] (0xc000732420) Go away received\nI0407 13:57:36.071561 2193 log.go:172] (0xc000732420) (0xc000730820) Stream removed, broadcasting: 1\nI0407 13:57:36.071581 2193 log.go:172] (0xc000732420) (0xc0003bc140) Stream removed, broadcasting: 3\nI0407 13:57:36.071591 2193 log.go:172] (0xc000732420) (0xc000738000) Stream removed, broadcasting: 5\n" Apr 7 13:57:36.075: INFO: stdout: "Hello 
from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:57:36.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5865" for this suite. Apr 7 13:57:42.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:57:42.194: INFO: namespace emptydir-5865 deletion completed in 6.115425362s • [SLOW TEST:10.437 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:57:42.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 7 13:57:42.258: INFO: Waiting up to 5m0s for pod "pod-cb77a13b-cbb3-4fe1-9be1-2f350c75c26b" in namespace "emptydir-1249" to be "success or failure" Apr 7 13:57:42.262: INFO: Pod "pod-cb77a13b-cbb3-4fe1-9be1-2f350c75c26b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.298157ms Apr 7 13:57:44.265: INFO: Pod "pod-cb77a13b-cbb3-4fe1-9be1-2f350c75c26b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006659013s Apr 7 13:57:46.270: INFO: Pod "pod-cb77a13b-cbb3-4fe1-9be1-2f350c75c26b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011221682s STEP: Saw pod success Apr 7 13:57:46.270: INFO: Pod "pod-cb77a13b-cbb3-4fe1-9be1-2f350c75c26b" satisfied condition "success or failure" Apr 7 13:57:46.273: INFO: Trying to get logs from node iruya-worker pod pod-cb77a13b-cbb3-4fe1-9be1-2f350c75c26b container test-container: STEP: delete the pod Apr 7 13:57:46.308: INFO: Waiting for pod pod-cb77a13b-cbb3-4fe1-9be1-2f350c75c26b to disappear Apr 7 13:57:46.329: INFO: Pod pod-cb77a13b-cbb3-4fe1-9be1-2f350c75c26b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:57:46.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1249" for this suite. 
Apr 7 13:57:52.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:57:52.424: INFO: namespace emptydir-1249 deletion completed in 6.090900736s • [SLOW TEST:10.229 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:57:52.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-41198c47-22a8-45a3-8c3b-311e5d06ef23 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:57:56.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2903" for this suite. 
Apr 7 13:58:18.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:58:18.631: INFO: namespace configmap-2903 deletion completed in 22.090259017s • [SLOW TEST:26.206 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:58:18.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 7 13:58:18.697: INFO: Waiting up to 5m0s for pod "downwardapi-volume-07513be6-07bb-4570-96a4-9c2688cc0a1a" in namespace "projected-2572" to be "success or failure" Apr 7 13:58:18.750: INFO: Pod "downwardapi-volume-07513be6-07bb-4570-96a4-9c2688cc0a1a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 53.23272ms Apr 7 13:58:20.755: INFO: Pod "downwardapi-volume-07513be6-07bb-4570-96a4-9c2688cc0a1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058070804s Apr 7 13:58:22.759: INFO: Pod "downwardapi-volume-07513be6-07bb-4570-96a4-9c2688cc0a1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062396874s STEP: Saw pod success Apr 7 13:58:22.759: INFO: Pod "downwardapi-volume-07513be6-07bb-4570-96a4-9c2688cc0a1a" satisfied condition "success or failure" Apr 7 13:58:22.762: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-07513be6-07bb-4570-96a4-9c2688cc0a1a container client-container: STEP: delete the pod Apr 7 13:58:22.824: INFO: Waiting for pod downwardapi-volume-07513be6-07bb-4570-96a4-9c2688cc0a1a to disappear Apr 7 13:58:22.832: INFO: Pod downwardapi-volume-07513be6-07bb-4570-96a4-9c2688cc0a1a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:58:22.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2572" for this suite. 
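The projected downward API test above mounts a volume that exposes the container's CPU request as a file. A hedged sketch of the relevant volume stanza (volume and file names are illustrative; `client-container` is the container name recorded in the log):

```yaml
volumes:
- name: podinfo
  projected:
    sources:
    - downwardAPI:
        items:
        - path: "cpu_request"
          resourceFieldRef:
            containerName: client-container
            resource: requests.cpu
            divisor: "1"   # report the request in whole cores
```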
Apr 7 13:58:28.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:58:28.925: INFO: namespace projected-2572 deletion completed in 6.090347171s • [SLOW TEST:10.294 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:58:28.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:58:33.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3239" for this suite. 
Apr 7 13:58:39.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:58:39.129: INFO: namespace kubelet-test-3239 deletion completed in 6.124510731s • [SLOW TEST:10.203 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:58:39.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Apr 7 13:58:39.733: INFO: created pod pod-service-account-defaultsa Apr 7 13:58:39.733: INFO: pod pod-service-account-defaultsa service account token volume mount: true Apr 7 13:58:39.742: INFO: created pod pod-service-account-mountsa Apr 7 13:58:39.742: INFO: pod pod-service-account-mountsa service account token volume mount: true Apr 7 13:58:39.789: INFO: created pod pod-service-account-nomountsa Apr 7 13:58:39.789: INFO: pod pod-service-account-nomountsa service account token volume mount: false Apr 7 
13:58:39.807: INFO: created pod pod-service-account-defaultsa-mountspec Apr 7 13:58:39.807: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Apr 7 13:58:39.888: INFO: created pod pod-service-account-mountsa-mountspec Apr 7 13:58:39.888: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Apr 7 13:58:39.899: INFO: created pod pod-service-account-nomountsa-mountspec Apr 7 13:58:39.899: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Apr 7 13:58:39.916: INFO: created pod pod-service-account-defaultsa-nomountspec Apr 7 13:58:39.916: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Apr 7 13:58:39.951: INFO: created pod pod-service-account-mountsa-nomountspec Apr 7 13:58:39.951: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Apr 7 13:58:39.968: INFO: created pod pod-service-account-nomountsa-nomountspec Apr 7 13:58:39.968: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:58:39.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3889" for this suite. 
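The nine pods above form a matrix: three ServiceAccounts (default automount, explicit mount, explicit no-mount) crossed with three pod-spec settings (unset, true, false). The logged mount results show that a pod-level `automountServiceAccountToken` always overrides the ServiceAccount-level one. A minimal sketch of the opt-out case (the names below are hypothetical, not the suite's own):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: no-mount-sa
automountServiceAccountToken: false   # SA-level default for pods using this SA
---
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod
spec:
  serviceAccountName: no-mount-sa
  automountServiceAccountToken: false  # pod-level field wins over the SA's setting
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```

With either field set to false (and the pod not overriding it to true), no token volume is mounted, which is what the `token volume mount: false` lines above verify.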
Apr 7 13:59:08.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:59:08.160: INFO: namespace svcaccounts-3889 deletion completed in 28.096742235s • [SLOW TEST:29.030 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:59:08.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 7 13:59:08.218: INFO: Waiting up to 5m0s for pod "downward-api-cae9743c-0cf9-41f7-9c44-244fbae56216" in namespace "downward-api-176" to be "success or failure" Apr 7 13:59:08.234: INFO: Pod "downward-api-cae9743c-0cf9-41f7-9c44-244fbae56216": Phase="Pending", Reason="", readiness=false. Elapsed: 16.23038ms Apr 7 13:59:10.238: INFO: Pod "downward-api-cae9743c-0cf9-41f7-9c44-244fbae56216": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020643934s Apr 7 13:59:12.243: INFO: Pod "downward-api-cae9743c-0cf9-41f7-9c44-244fbae56216": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025277273s STEP: Saw pod success Apr 7 13:59:12.243: INFO: Pod "downward-api-cae9743c-0cf9-41f7-9c44-244fbae56216" satisfied condition "success or failure" Apr 7 13:59:12.246: INFO: Trying to get logs from node iruya-worker2 pod downward-api-cae9743c-0cf9-41f7-9c44-244fbae56216 container dapi-container: STEP: delete the pod Apr 7 13:59:12.303: INFO: Waiting for pod downward-api-cae9743c-0cf9-41f7-9c44-244fbae56216 to disappear Apr 7 13:59:12.309: INFO: Pod downward-api-cae9743c-0cf9-41f7-9c44-244fbae56216 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:59:12.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-176" for this suite. Apr 7 13:59:18.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:59:18.406: INFO: namespace downward-api-176 deletion completed in 6.094088812s • [SLOW TEST:10.246 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 
13:59:18.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 7 13:59:18.468: INFO: Waiting up to 5m0s for pod "downwardapi-volume-440b95ae-cdd7-48a9-9aa0-25be474bfe8c" in namespace "projected-4544" to be "success or failure" Apr 7 13:59:18.472: INFO: Pod "downwardapi-volume-440b95ae-cdd7-48a9-9aa0-25be474bfe8c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.899495ms Apr 7 13:59:20.475: INFO: Pod "downwardapi-volume-440b95ae-cdd7-48a9-9aa0-25be474bfe8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007100137s Apr 7 13:59:22.479: INFO: Pod "downwardapi-volume-440b95ae-cdd7-48a9-9aa0-25be474bfe8c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011518923s STEP: Saw pod success Apr 7 13:59:22.479: INFO: Pod "downwardapi-volume-440b95ae-cdd7-48a9-9aa0-25be474bfe8c" satisfied condition "success or failure" Apr 7 13:59:22.482: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-440b95ae-cdd7-48a9-9aa0-25be474bfe8c container client-container: STEP: delete the pod Apr 7 13:59:22.516: INFO: Waiting for pod downwardapi-volume-440b95ae-cdd7-48a9-9aa0-25be474bfe8c to disappear Apr 7 13:59:22.531: INFO: Pod downwardapi-volume-440b95ae-cdd7-48a9-9aa0-25be474bfe8c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:59:22.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4544" for this suite. Apr 7 13:59:28.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:59:28.643: INFO: namespace projected-4544 deletion completed in 6.108968124s • [SLOW TEST:10.236 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:59:28.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node 
using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 7 13:59:28.731: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 5.443604ms) Apr 7 13:59:28.735: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.204324ms) Apr 7 13:59:28.738: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.375181ms) Apr 7 13:59:28.741: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.957164ms) Apr 7 13:59:28.744: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.060474ms) Apr 7 13:59:28.747: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.141788ms) Apr 7 13:59:28.751: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.168086ms) Apr 7 13:59:28.754: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.640146ms) Apr 7 13:59:28.758: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.321536ms) Apr 7 13:59:28.761: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.048992ms) Apr 7 13:59:28.764: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.774232ms) Apr 7 13:59:28.767: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.202562ms) Apr 7 13:59:28.770: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.800291ms) Apr 7 13:59:28.773: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.087775ms) Apr 7 13:59:28.776: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.948432ms) Apr 7 13:59:28.779: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.069353ms) Apr 7 13:59:28.782: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.49511ms) Apr 7 13:59:28.805: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 22.510663ms) Apr 7 13:59:28.808: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.233103ms) Apr 7 13:59:28.811: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.353784ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:59:28.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1022" for this suite. Apr 7 13:59:34.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 13:59:34.908: INFO: namespace proxy-1022 deletion completed in 6.093259729s • [SLOW TEST:6.265 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 13:59:34.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-h9lh STEP: Creating a pod to test atomic-volume-subpath Apr 7 13:59:34.998: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-h9lh" in 
namespace "subpath-8113" to be "success or failure" Apr 7 13:59:35.001: INFO: Pod "pod-subpath-test-configmap-h9lh": Phase="Pending", Reason="", readiness=false. Elapsed: 3.270484ms Apr 7 13:59:37.097: INFO: Pod "pod-subpath-test-configmap-h9lh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099512239s Apr 7 13:59:39.101: INFO: Pod "pod-subpath-test-configmap-h9lh": Phase="Running", Reason="", readiness=true. Elapsed: 4.103743155s Apr 7 13:59:41.105: INFO: Pod "pod-subpath-test-configmap-h9lh": Phase="Running", Reason="", readiness=true. Elapsed: 6.107741493s Apr 7 13:59:43.110: INFO: Pod "pod-subpath-test-configmap-h9lh": Phase="Running", Reason="", readiness=true. Elapsed: 8.112572884s Apr 7 13:59:45.115: INFO: Pod "pod-subpath-test-configmap-h9lh": Phase="Running", Reason="", readiness=true. Elapsed: 10.117086666s Apr 7 13:59:47.119: INFO: Pod "pod-subpath-test-configmap-h9lh": Phase="Running", Reason="", readiness=true. Elapsed: 12.121540262s Apr 7 13:59:49.124: INFO: Pod "pod-subpath-test-configmap-h9lh": Phase="Running", Reason="", readiness=true. Elapsed: 14.126147456s Apr 7 13:59:51.133: INFO: Pod "pod-subpath-test-configmap-h9lh": Phase="Running", Reason="", readiness=true. Elapsed: 16.135859887s Apr 7 13:59:53.138: INFO: Pod "pod-subpath-test-configmap-h9lh": Phase="Running", Reason="", readiness=true. Elapsed: 18.140117854s Apr 7 13:59:55.142: INFO: Pod "pod-subpath-test-configmap-h9lh": Phase="Running", Reason="", readiness=true. Elapsed: 20.144601314s Apr 7 13:59:57.146: INFO: Pod "pod-subpath-test-configmap-h9lh": Phase="Running", Reason="", readiness=true. Elapsed: 22.148789981s Apr 7 13:59:59.150: INFO: Pod "pod-subpath-test-configmap-h9lh": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.152671194s STEP: Saw pod success Apr 7 13:59:59.150: INFO: Pod "pod-subpath-test-configmap-h9lh" satisfied condition "success or failure" Apr 7 13:59:59.153: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-h9lh container test-container-subpath-configmap-h9lh: STEP: delete the pod Apr 7 13:59:59.175: INFO: Waiting for pod pod-subpath-test-configmap-h9lh to disappear Apr 7 13:59:59.184: INFO: Pod pod-subpath-test-configmap-h9lh no longer exists STEP: Deleting pod pod-subpath-test-configmap-h9lh Apr 7 13:59:59.184: INFO: Deleting pod "pod-subpath-test-configmap-h9lh" in namespace "subpath-8113" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 13:59:59.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8113" for this suite. Apr 7 14:00:05.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:00:05.340: INFO: namespace subpath-8113 deletion completed in 6.150812923s • [SLOW TEST:30.432 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: 
Creating a kubernetes client Apr 7 14:00:05.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-5077 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 7 14:00:05.417: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 7 14:00:31.580: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.163:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5077 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 7 14:00:31.580: INFO: >>> kubeConfig: /root/.kube/config I0407 14:00:31.614894 6 log.go:172] (0xc0025546e0) (0xc001d05220) Create stream I0407 14:00:31.614936 6 log.go:172] (0xc0025546e0) (0xc001d05220) Stream added, broadcasting: 1 I0407 14:00:31.618087 6 log.go:172] (0xc0025546e0) Reply frame received for 1 I0407 14:00:31.618147 6 log.go:172] (0xc0025546e0) (0xc001d052c0) Create stream I0407 14:00:31.618166 6 log.go:172] (0xc0025546e0) (0xc001d052c0) Stream added, broadcasting: 3 I0407 14:00:31.619359 6 log.go:172] (0xc0025546e0) Reply frame received for 3 I0407 14:00:31.619408 6 log.go:172] (0xc0025546e0) (0xc001d05360) Create stream I0407 14:00:31.619424 6 log.go:172] (0xc0025546e0) (0xc001d05360) Stream added, broadcasting: 5 I0407 14:00:31.620477 6 log.go:172] (0xc0025546e0) Reply frame received for 5 I0407 14:00:31.709472 6 log.go:172] (0xc0025546e0) Data frame received for 5 I0407 14:00:31.709546 6 log.go:172] (0xc001d05360) (5) Data frame handling I0407 
14:00:31.709581 6 log.go:172] (0xc0025546e0) Data frame received for 3 I0407 14:00:31.709598 6 log.go:172] (0xc001d052c0) (3) Data frame handling I0407 14:00:31.709605 6 log.go:172] (0xc001d052c0) (3) Data frame sent I0407 14:00:31.709613 6 log.go:172] (0xc0025546e0) Data frame received for 3 I0407 14:00:31.709617 6 log.go:172] (0xc001d052c0) (3) Data frame handling I0407 14:00:31.711285 6 log.go:172] (0xc0025546e0) Data frame received for 1 I0407 14:00:31.711309 6 log.go:172] (0xc001d05220) (1) Data frame handling I0407 14:00:31.711326 6 log.go:172] (0xc001d05220) (1) Data frame sent I0407 14:00:31.711343 6 log.go:172] (0xc0025546e0) (0xc001d05220) Stream removed, broadcasting: 1 I0407 14:00:31.711401 6 log.go:172] (0xc0025546e0) Go away received I0407 14:00:31.711470 6 log.go:172] (0xc0025546e0) (0xc001d05220) Stream removed, broadcasting: 1 I0407 14:00:31.711487 6 log.go:172] (0xc0025546e0) (0xc001d052c0) Stream removed, broadcasting: 3 I0407 14:00:31.711496 6 log.go:172] (0xc0025546e0) (0xc001d05360) Stream removed, broadcasting: 5 Apr 7 14:00:31.711: INFO: Found all expected endpoints: [netserver-0] Apr 7 14:00:31.714: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.94:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5077 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 7 14:00:31.714: INFO: >>> kubeConfig: /root/.kube/config I0407 14:00:31.761599 6 log.go:172] (0xc00264a6e0) (0xc001041f40) Create stream I0407 14:00:31.761630 6 log.go:172] (0xc00264a6e0) (0xc001041f40) Stream added, broadcasting: 1 I0407 14:00:31.764371 6 log.go:172] (0xc00264a6e0) Reply frame received for 1 I0407 14:00:31.764423 6 log.go:172] (0xc00264a6e0) (0xc001817cc0) Create stream I0407 14:00:31.764438 6 log.go:172] (0xc00264a6e0) (0xc001817cc0) Stream added, broadcasting: 3 I0407 14:00:31.765874 6 log.go:172] (0xc00264a6e0) Reply frame 
received for 3 I0407 14:00:31.765924 6 log.go:172] (0xc00264a6e0) (0xc00303cbe0) Create stream I0407 14:00:31.765939 6 log.go:172] (0xc00264a6e0) (0xc00303cbe0) Stream added, broadcasting: 5 I0407 14:00:31.766957 6 log.go:172] (0xc00264a6e0) Reply frame received for 5 I0407 14:00:31.828535 6 log.go:172] (0xc00264a6e0) Data frame received for 5 I0407 14:00:31.828561 6 log.go:172] (0xc00303cbe0) (5) Data frame handling I0407 14:00:31.828624 6 log.go:172] (0xc00264a6e0) Data frame received for 3 I0407 14:00:31.828659 6 log.go:172] (0xc001817cc0) (3) Data frame handling I0407 14:00:31.828675 6 log.go:172] (0xc001817cc0) (3) Data frame sent I0407 14:00:31.828682 6 log.go:172] (0xc00264a6e0) Data frame received for 3 I0407 14:00:31.828698 6 log.go:172] (0xc001817cc0) (3) Data frame handling I0407 14:00:31.830675 6 log.go:172] (0xc00264a6e0) Data frame received for 1 I0407 14:00:31.830700 6 log.go:172] (0xc001041f40) (1) Data frame handling I0407 14:00:31.830727 6 log.go:172] (0xc001041f40) (1) Data frame sent I0407 14:00:31.830745 6 log.go:172] (0xc00264a6e0) (0xc001041f40) Stream removed, broadcasting: 1 I0407 14:00:31.830884 6 log.go:172] (0xc00264a6e0) (0xc001041f40) Stream removed, broadcasting: 1 I0407 14:00:31.830908 6 log.go:172] (0xc00264a6e0) (0xc001817cc0) Stream removed, broadcasting: 3 I0407 14:00:31.830931 6 log.go:172] (0xc00264a6e0) (0xc00303cbe0) Stream removed, broadcasting: 5 Apr 7 14:00:31.830: INFO: Found all expected endpoints: [netserver-1] I0407 14:00:31.830968 6 log.go:172] (0xc00264a6e0) Go away received [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:00:31.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5077" for this suite. 
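The `ExecWithOptions` calls above run `curl http://<pod-ip>:8080/hostName` from the host-network helper pod against each netserver pod and match the reply to the expected endpoint name. The shape of that check can be sketched outside any cluster with a stdlib-only stand-in (the localhost server below is an assumption for illustration, not the suite's actual netserver image):

```python
# Minimal stand-in for the e2e connectivity probe: an HTTP endpoint that
# returns the host's name, queried the way the logged curl does (GET /hostName).
import http.server
import socket
import threading
import urllib.request

class HostNameHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/hostName":
            body = socket.gethostname().encode()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep probe output quiet
        pass

def probe(url: str) -> str:
    """Fetch the endpoint, mimicking `curl --max-time 15 <url>`."""
    with urllib.request.urlopen(url, timeout=15) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    server = http.server.HTTPServer(("127.0.0.1", 0), HostNameHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    port = server.server_address[1]
    print(probe(f"http://127.0.0.1:{port}/hostName") == socket.gethostname())
    server.shutdown()
```

In the real test the probe runs from a `hostNetwork: true` pod so the request originates from the node's network namespace, which is exactly the node-to-pod path this conformance case validates.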
Apr 7 14:00:53.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:00:53.933: INFO: namespace pod-network-test-5077 deletion completed in 22.08433511s • [SLOW TEST:48.593 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:00:53.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 7 14:00:54.050: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6649,SelfLink:/api/v1/namespaces/watch-6649/configmaps/e2e-watch-test-label-changed,UID:b9194f8a-774a-4b9e-a145-e9cd7e58ea81,ResourceVersion:4133763,Generation:0,CreationTimestamp:2020-04-07 14:00:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 7 14:00:54.050: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6649,SelfLink:/api/v1/namespaces/watch-6649/configmaps/e2e-watch-test-label-changed,UID:b9194f8a-774a-4b9e-a145-e9cd7e58ea81,ResourceVersion:4133765,Generation:0,CreationTimestamp:2020-04-07 14:00:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 7 14:00:54.050: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6649,SelfLink:/api/v1/namespaces/watch-6649/configmaps/e2e-watch-test-label-changed,UID:b9194f8a-774a-4b9e-a145-e9cd7e58ea81,ResourceVersion:4133766,Generation:0,CreationTimestamp:2020-04-07 14:00:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 7 14:01:04.087: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6649,SelfLink:/api/v1/namespaces/watch-6649/configmaps/e2e-watch-test-label-changed,UID:b9194f8a-774a-4b9e-a145-e9cd7e58ea81,ResourceVersion:4133786,Generation:0,CreationTimestamp:2020-04-07 14:00:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 7 14:01:04.087: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6649,SelfLink:/api/v1/namespaces/watch-6649/configmaps/e2e-watch-test-label-changed,UID:b9194f8a-774a-4b9e-a145-e9cd7e58ea81,ResourceVersion:4133787,Generation:0,CreationTimestamp:2020-04-07 14:00:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Apr 7 14:01:04.087: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6649,SelfLink:/api/v1/namespaces/watch-6649/configmaps/e2e-watch-test-label-changed,UID:b9194f8a-774a-4b9e-a145-e9cd7e58ea81,ResourceVersion:4133788,Generation:0,CreationTimestamp:2020-04-07 14:00:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:01:04.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6649" for this suite. Apr 7 14:01:10.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:01:10.263: INFO: namespace watch-6649 deletion completed in 6.15597328s • [SLOW TEST:16.330 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:01:10.263: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:01:10.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5977" for this suite. Apr 7 14:01:16.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:01:16.497: INFO: namespace kubelet-test-5977 deletion completed in 6.094412681s • [SLOW TEST:6.234 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:01:16.497: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 7 14:01:16.542: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. Apr 7 14:01:16.990: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 7 14:01:19.286: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721864876, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721864876, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721864877, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721864876, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 7 14:01:21.925: INFO: Waited 625.423934ms for the sample-apiserver to be ready to handle requests. 
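The "Registering the sample API server" step above hinges on an APIService object that tells the aggregation layer to proxy a group/version to a backing Service. A sketch of such a registration (group, version, service name, and namespace here are illustrative; the suite wires up its own):

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com   # must be "<version>.<group>"
spec:
  group: wardle.example.com
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  service:            # the kube-apiserver proxies matching requests here
    name: sample-api
    namespace: kube-system
  caBundle: ""        # base64 CA for the backend's serving cert
                      # (test setups may use insecureSkipTLSVerify: true instead)
```

The DeploymentStatus dump above is the aggregator test waiting for the backing `sample-apiserver-deployment` to become available before it checks that the new group is discoverable.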
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:01:22.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-2501" for this suite. Apr 7 14:01:28.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:01:28.595: INFO: namespace aggregator-2501 deletion completed in 6.142122721s • [SLOW TEST:12.098 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:01:28.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the 
cluster. Apr 7 14:01:28.690: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 14:01:28.695: INFO: Number of nodes with available pods: 0 Apr 7 14:01:28.695: INFO: Node iruya-worker is running more than one daemon pod Apr 7 14:01:29.701: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 14:01:29.705: INFO: Number of nodes with available pods: 0 Apr 7 14:01:29.705: INFO: Node iruya-worker is running more than one daemon pod Apr 7 14:01:30.700: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 14:01:30.706: INFO: Number of nodes with available pods: 0 Apr 7 14:01:30.706: INFO: Node iruya-worker is running more than one daemon pod Apr 7 14:01:31.701: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 14:01:31.705: INFO: Number of nodes with available pods: 0 Apr 7 14:01:31.705: INFO: Node iruya-worker is running more than one daemon pod Apr 7 14:01:32.700: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 14:01:32.704: INFO: Number of nodes with available pods: 2 Apr 7 14:01:32.704: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
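Each polling round above skips `iruya-control-plane` because the DaemonSet's pods do not tolerate its `node-role.kubernetes.io/master:NoSchedule` taint, so only the two workers count toward "nodes with available pods". A simplified sketch of that filter follows; real Kubernetes toleration matching (operators, values, `TolerationSeconds`) is richer than this, and the node/toleration dicts here are illustrative stand-ins:

```python
def tolerates(taint, tolerations):
    # A toleration matches when its key equals the taint's key and its
    # effect is either the same effect or unset (unset matches all effects).
    return any(t.get("key") == taint["key"] and
               t.get("effect") in (taint["effect"], None)
               for t in tolerations)

def schedulable_nodes(nodes, tolerations):
    # Keep only nodes all of whose taints are tolerated.
    return [n["name"] for n in nodes
            if all(tolerates(t, tolerations) for t in n.get("taints", []))]

nodes = [
    {"name": "iruya-control-plane",
     "taints": [{"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}]},
    {"name": "iruya-worker", "taints": []},
    {"name": "iruya-worker2", "taints": []},
]

# With no tolerations, the tainted control-plane node is skipped,
# matching the "skip checking this node" lines in the log.
assert schedulable_nodes(nodes, []) == ["iruya-worker", "iruya-worker2"]
```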
Apr 7 14:01:32.721: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 14:01:32.724: INFO: Number of nodes with available pods: 1 Apr 7 14:01:32.724: INFO: Node iruya-worker is running more than one daemon pod Apr 7 14:01:33.729: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 14:01:33.733: INFO: Number of nodes with available pods: 1 Apr 7 14:01:33.733: INFO: Node iruya-worker is running more than one daemon pod Apr 7 14:01:34.729: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 14:01:34.732: INFO: Number of nodes with available pods: 1 Apr 7 14:01:34.732: INFO: Node iruya-worker is running more than one daemon pod Apr 7 14:01:35.730: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 14:01:35.733: INFO: Number of nodes with available pods: 1 Apr 7 14:01:35.733: INFO: Node iruya-worker is running more than one daemon pod Apr 7 14:01:36.728: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 14:01:36.731: INFO: Number of nodes with available pods: 1 Apr 7 14:01:36.731: INFO: Node iruya-worker is running more than one daemon pod Apr 7 14:01:37.729: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 14:01:37.733: INFO: Number of nodes with available pods: 1 Apr 7 14:01:37.733: INFO: Node iruya-worker is 
running more than one daemon pod Apr 7 14:01:38.741: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 14:01:38.744: INFO: Number of nodes with available pods: 1 Apr 7 14:01:38.744: INFO: Node iruya-worker is running more than one daemon pod Apr 7 14:01:39.729: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 7 14:01:39.732: INFO: Number of nodes with available pods: 2 Apr 7 14:01:39.732: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1807, will wait for the garbage collector to delete the pods Apr 7 14:01:39.793: INFO: Deleting DaemonSet.extensions daemon-set took: 6.815513ms Apr 7 14:01:40.094: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.232734ms Apr 7 14:01:52.197: INFO: Number of nodes with available pods: 0 Apr 7 14:01:52.197: INFO: Number of running nodes: 0, number of available pods: 0 Apr 7 14:01:52.200: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1807/daemonsets","resourceVersion":"4134033"},"items":null} Apr 7 14:01:52.202: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1807/pods","resourceVersion":"4134033"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:01:52.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1807" for this 
suite. Apr 7 14:01:58.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:01:58.309: INFO: namespace daemonsets-1807 deletion completed in 6.095585281s • [SLOW TEST:29.713 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:01:58.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-667fa50c-85bc-4811-9ad1-be2f169c06ae STEP: Creating a pod to test consume secrets Apr 7 14:01:58.424: INFO: Waiting up to 5m0s for pod "pod-secrets-5b215134-f8fe-4fa5-a69a-a2f117d15824" in namespace "secrets-4363" to be "success or failure" Apr 7 14:01:58.465: INFO: Pod "pod-secrets-5b215134-f8fe-4fa5-a69a-a2f117d15824": Phase="Pending", Reason="", readiness=false. Elapsed: 41.850384ms Apr 7 14:02:00.513: INFO: Pod "pod-secrets-5b215134-f8fe-4fa5-a69a-a2f117d15824": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.089465506s Apr 7 14:02:02.517: INFO: Pod "pod-secrets-5b215134-f8fe-4fa5-a69a-a2f117d15824": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093760458s STEP: Saw pod success Apr 7 14:02:02.517: INFO: Pod "pod-secrets-5b215134-f8fe-4fa5-a69a-a2f117d15824" satisfied condition "success or failure" Apr 7 14:02:02.521: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-5b215134-f8fe-4fa5-a69a-a2f117d15824 container secret-volume-test: STEP: delete the pod Apr 7 14:02:02.587: INFO: Waiting for pod pod-secrets-5b215134-f8fe-4fa5-a69a-a2f117d15824 to disappear Apr 7 14:02:02.595: INFO: Pod pod-secrets-5b215134-f8fe-4fa5-a69a-a2f117d15824 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:02:02.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4363" for this suite. Apr 7 14:02:08.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:02:08.684: INFO: namespace secrets-4363 deletion completed in 6.085789874s • [SLOW TEST:10.375 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:02:08.685: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-5082e386-3e01-46fc-9e02-8a8fce902eb2 STEP: Creating a pod to test consume secrets Apr 7 14:02:08.747: INFO: Waiting up to 5m0s for pod "pod-secrets-ffbac5ff-bfa5-4721-9427-56b26a4768b0" in namespace "secrets-579" to be "success or failure" Apr 7 14:02:08.751: INFO: Pod "pod-secrets-ffbac5ff-bfa5-4721-9427-56b26a4768b0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.823221ms Apr 7 14:02:10.771: INFO: Pod "pod-secrets-ffbac5ff-bfa5-4721-9427-56b26a4768b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023811661s Apr 7 14:02:12.775: INFO: Pod "pod-secrets-ffbac5ff-bfa5-4721-9427-56b26a4768b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028006472s STEP: Saw pod success Apr 7 14:02:12.775: INFO: Pod "pod-secrets-ffbac5ff-bfa5-4721-9427-56b26a4768b0" satisfied condition "success or failure" Apr 7 14:02:12.778: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-ffbac5ff-bfa5-4721-9427-56b26a4768b0 container secret-env-test: STEP: delete the pod Apr 7 14:02:12.838: INFO: Waiting for pod pod-secrets-ffbac5ff-bfa5-4721-9427-56b26a4768b0 to disappear Apr 7 14:02:12.847: INFO: Pod pod-secrets-ffbac5ff-bfa5-4721-9427-56b26a4768b0 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:02:12.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-579" for this suite. 
Apr 7 14:02:18.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:02:18.937: INFO: namespace secrets-579 deletion completed in 6.086324843s • [SLOW TEST:10.252 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:02:18.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1416.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1416.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1416.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1416.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 7 14:02:25.018: INFO: DNS probes using dns-test-f1b7233b-0909-46c7-b7c9-51f150a544ff 
succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1416.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1416.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1416.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1416.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 7 14:02:31.148: INFO: File wheezy_udp@dns-test-service-3.dns-1416.svc.cluster.local from pod dns-1416/dns-test-ea8d2142-d680-4b19-99ca-5a065f2b2d42 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 7 14:02:31.152: INFO: File jessie_udp@dns-test-service-3.dns-1416.svc.cluster.local from pod dns-1416/dns-test-ea8d2142-d680-4b19-99ca-5a065f2b2d42 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 7 14:02:31.152: INFO: Lookups using dns-1416/dns-test-ea8d2142-d680-4b19-99ca-5a065f2b2d42 failed for: [wheezy_udp@dns-test-service-3.dns-1416.svc.cluster.local jessie_udp@dns-test-service-3.dns-1416.svc.cluster.local] Apr 7 14:02:36.156: INFO: File wheezy_udp@dns-test-service-3.dns-1416.svc.cluster.local from pod dns-1416/dns-test-ea8d2142-d680-4b19-99ca-5a065f2b2d42 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 7 14:02:36.159: INFO: File jessie_udp@dns-test-service-3.dns-1416.svc.cluster.local from pod dns-1416/dns-test-ea8d2142-d680-4b19-99ca-5a065f2b2d42 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 7 14:02:36.159: INFO: Lookups using dns-1416/dns-test-ea8d2142-d680-4b19-99ca-5a065f2b2d42 failed for: [wheezy_udp@dns-test-service-3.dns-1416.svc.cluster.local jessie_udp@dns-test-service-3.dns-1416.svc.cluster.local] Apr 7 14:02:41.157: INFO: File wheezy_udp@dns-test-service-3.dns-1416.svc.cluster.local from pod dns-1416/dns-test-ea8d2142-d680-4b19-99ca-5a065f2b2d42 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 7 14:02:41.161: INFO: File jessie_udp@dns-test-service-3.dns-1416.svc.cluster.local from pod dns-1416/dns-test-ea8d2142-d680-4b19-99ca-5a065f2b2d42 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 7 14:02:41.161: INFO: Lookups using dns-1416/dns-test-ea8d2142-d680-4b19-99ca-5a065f2b2d42 failed for: [wheezy_udp@dns-test-service-3.dns-1416.svc.cluster.local jessie_udp@dns-test-service-3.dns-1416.svc.cluster.local] Apr 7 14:02:46.157: INFO: File wheezy_udp@dns-test-service-3.dns-1416.svc.cluster.local from pod dns-1416/dns-test-ea8d2142-d680-4b19-99ca-5a065f2b2d42 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 7 14:02:46.160: INFO: File jessie_udp@dns-test-service-3.dns-1416.svc.cluster.local from pod dns-1416/dns-test-ea8d2142-d680-4b19-99ca-5a065f2b2d42 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 7 14:02:46.160: INFO: Lookups using dns-1416/dns-test-ea8d2142-d680-4b19-99ca-5a065f2b2d42 failed for: [wheezy_udp@dns-test-service-3.dns-1416.svc.cluster.local jessie_udp@dns-test-service-3.dns-1416.svc.cluster.local] Apr 7 14:02:51.157: INFO: File wheezy_udp@dns-test-service-3.dns-1416.svc.cluster.local from pod dns-1416/dns-test-ea8d2142-d680-4b19-99ca-5a065f2b2d42 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 7 14:02:51.161: INFO: File jessie_udp@dns-test-service-3.dns-1416.svc.cluster.local from pod dns-1416/dns-test-ea8d2142-d680-4b19-99ca-5a065f2b2d42 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 7 14:02:51.161: INFO: Lookups using dns-1416/dns-test-ea8d2142-d680-4b19-99ca-5a065f2b2d42 failed for: [wheezy_udp@dns-test-service-3.dns-1416.svc.cluster.local jessie_udp@dns-test-service-3.dns-1416.svc.cluster.local] Apr 7 14:02:56.160: INFO: DNS probes using dns-test-ea8d2142-d680-4b19-99ca-5a065f2b2d42 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1416.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1416.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1416.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1416.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 7 14:03:02.707: INFO: DNS probes using dns-test-3b768171-21b7-4dda-bbd4-945fae0ef48e succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:03:02.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1416" for this suite. 
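The repeated "contains 'foo.example.com.' instead of 'bar.example.com.'" failures above are expected: after the ExternalName target changes, the probe pods keep re-running the `dig` loop until the resolver returns the new CNAME. A sketch of that compare-and-retry logic, where `lookup` stands in for one `dig +short ... CNAME` invocation:

```python
def probe(lookup, expected, rounds=5):
    """Return the round on which lookup() first matches `expected`, else None.

    Trailing dots are normalized, since dig returns fully qualified
    names ("bar.example.com.").
    """
    for i in range(1, rounds + 1):
        if lookup().strip().rstrip(".") == expected.rstrip("."):
            return i
    return None

# Simulate a resolver that serves the stale record twice before
# returning the updated CNAME, as in the log's retry sequence.
answers = iter(["foo.example.com.", "foo.example.com.", "bar.example.com."])
assert probe(lambda: next(answers), "bar.example.com") == 3
```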
Apr 7 14:03:08.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:03:08.910: INFO: namespace dns-1416 deletion completed in 6.115312295s • [SLOW TEST:49.973 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:03:08.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-1fcff1ed-a45d-43be-ae25-601d122a7334 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:03:08.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2848" for this suite. 
Apr 7 14:03:15.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:03:15.080: INFO: namespace configmap-2848 deletion completed in 6.087069787s • [SLOW TEST:6.168 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:03:15.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 7 14:03:15.117: INFO: PodSpec: initContainers in spec.initContainers Apr 7 14:04:07.335: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-37d28263-2857-4adb-bff9-a32b42dfe0c1", GenerateName:"", Namespace:"init-container-2432", 
SelfLink:"/api/v1/namespaces/init-container-2432/pods/pod-init-37d28263-2857-4adb-bff9-a32b42dfe0c1", UID:"849e022d-ed1c-42fc-a139-f5b13a69127a", ResourceVersion:"4134530", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721864995, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"117748926"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-xdqt7", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002e96180), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xdqt7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xdqt7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xdqt7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002ef02d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002eb60c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc002ef0370)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002ef0390)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002ef0398), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002ef039c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721864995, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721864995, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721864995, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721864995, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"10.244.1.169", StartTime:(*v1.Time)(0xc0024da760), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0024da7a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002004ee0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://ff136874f292d43c793869f23807923704c30e8ddbf2f592df925aadcdfb752d"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0024da7c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0024da780), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 14:04:07.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2432" for this suite.
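The Pending status dumped above shows the behavior this test asserts: with RestartPolicy "Always", the failing init container init1 is restarted (RestartCount:3) while init2 stays Waiting and the app container run1 never starts. A minimal sketch of that gating rule, not kubelet code; the container names mirror the log:

```python
# Hedged sketch of init-container gating: app containers may start only after
# every init container has succeeded, and init containers run strictly in order.
# A failing init container is simply retried (restart backoff elided here).

def next_runnable(init_results, app_containers):
    """init_results maps init container name -> True (succeeded) / False
    (failed or still retrying), in declaration order. Returns the containers
    eligible to run next."""
    for name, ok in init_results.items():
        if not ok:
            # kubelet keeps restarting this init container; nothing later runs
            return [name]
    return list(app_containers)

# init1 keeps failing, so only init1 may run -- run1 stays Waiting:
assert next_runnable({"init1": False, "init2": False}, ["run1"]) == ["init1"]
# once both init containers succeed, the app container can start:
assert next_runnable({"init1": True, "init2": True}, ["run1"]) == ["run1"]
```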
Apr 7 14:04:29.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 14:04:29.490: INFO: namespace init-container-2432 deletion completed in 22.124250136s
• [SLOW TEST:74.409 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 14:04:29.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 7 14:04:29.570: INFO: Creating ReplicaSet my-hostname-basic-1e920c10-223f-47f6-940a-2471591dd41d
Apr 7 14:04:29.586: INFO: Pod name my-hostname-basic-1e920c10-223f-47f6-940a-2471591dd41d: Found 0 pods out of 1
Apr 7 14:04:34.590: INFO: Pod name my-hostname-basic-1e920c10-223f-47f6-940a-2471591dd41d: Found 1 pods out of 1
Apr 7 14:04:34.590: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-1e920c10-223f-47f6-940a-2471591dd41d" is running
Apr 7 14:04:34.592: INFO: Pod "my-hostname-basic-1e920c10-223f-47f6-940a-2471591dd41d-57cf6" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-07 14:04:29 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-07 14:04:32 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-07 14:04:32 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-07 14:04:29 +0000 UTC Reason: Message:}])
Apr 7 14:04:34.592: INFO: Trying to dial the pod
Apr 7 14:04:39.605: INFO: Controller my-hostname-basic-1e920c10-223f-47f6-940a-2471591dd41d: Got expected result from replica 1 [my-hostname-basic-1e920c10-223f-47f6-940a-2471591dd41d-57cf6]: "my-hostname-basic-1e920c10-223f-47f6-940a-2471591dd41d-57cf6", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 14:04:39.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8867" for this suite.
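The "Found 0 pods out of 1 ... Found 1 pods out of 1" lines above are a simple poll until the pods matching the ReplicaSet's selector reach spec.replicas. A hedged sketch of that check; the `poll` callable and pod name are illustrative stand-ins for API list calls:

```python
# Sketch of the e2e check: poll the pod list until the observed count matches
# the desired replica count, returning the attempt index at which it matched.

def wait_for_replicas(poll, want, attempts):
    """poll() returns the current list of matching pods; returns the attempt
    index (0-based) at which len == want, or None if never reached."""
    for i in range(attempts):
        if len(poll()) == want:
            return i
    return None

# first poll sees no pods, second sees the one replica (as in the log above):
snapshots = iter([[], ["rs-pod-1"]])
assert wait_for_replicas(lambda: next(snapshots), want=1, attempts=5) == 1
```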
Apr 7 14:04:45.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 14:04:45.690: INFO: namespace replicaset-8867 deletion completed in 6.082095804s
• [SLOW TEST:16.200 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 14:04:45.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Apr 7 14:04:45.774: INFO: Waiting up to 5m0s for pod "pod-004a4e3a-0418-49e6-a538-a8ee3546d522" in namespace "emptydir-9533" to be "success or failure"
Apr 7 14:04:45.778: INFO: Pod "pod-004a4e3a-0418-49e6-a538-a8ee3546d522": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311559ms
Apr 7 14:04:47.782: INFO: Pod "pod-004a4e3a-0418-49e6-a538-a8ee3546d522": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008711301s
Apr 7 14:04:49.787: INFO: Pod "pod-004a4e3a-0418-49e6-a538-a8ee3546d522": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012828506s
STEP: Saw pod success
Apr 7 14:04:49.787: INFO: Pod "pod-004a4e3a-0418-49e6-a538-a8ee3546d522" satisfied condition "success or failure"
Apr 7 14:04:49.789: INFO: Trying to get logs from node iruya-worker2 pod pod-004a4e3a-0418-49e6-a538-a8ee3546d522 container test-container:
STEP: delete the pod
Apr 7 14:04:49.825: INFO: Waiting for pod pod-004a4e3a-0418-49e6-a538-a8ee3546d522 to disappear
Apr 7 14:04:49.846: INFO: Pod pod-004a4e3a-0418-49e6-a538-a8ee3546d522 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 14:04:49.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9533" for this suite.
Apr 7 14:04:55.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 14:04:55.942: INFO: namespace emptydir-9533 deletion completed in 6.09228786s
• [SLOW TEST:10.251 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 14:04:55.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 7 14:04:59.038: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 14:04:59.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6699" for this suite.
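The assertion above ("Expected: &{} to match Container's Termination Message") checks FallbackToLogsOnError semantics: the log tail substitutes for the termination-message file only when the file is empty and the container actually failed. Since this pod succeeds, the message stays empty. A hedged sketch of that rule (not the kubelet implementation):

```python
# Sketch of TerminationMessagePolicy resolution. "File" always reports the
# contents of terminationMessagePath; "FallbackToLogsOnError" substitutes the
# log tail only for a failed container whose message file is empty.

def termination_message(file_contents, log_tail, exit_code, policy):
    if policy == "FallbackToLogsOnError" and not file_contents and exit_code != 0:
        return log_tail
    return file_contents

# the case above: pod succeeds (exit 0) with an empty message file, so the
# termination message stays empty even though the container produced logs:
assert termination_message("", "some log output", 0, "FallbackToLogsOnError") == ""
# on a failure, the log tail would be reported instead:
assert termination_message("", "boom", 1, "FallbackToLogsOnError") == "boom"
```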
Apr 7 14:05:05.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 14:05:05.187: INFO: namespace container-runtime-6699 deletion completed in 6.113406523s
• [SLOW TEST:9.245 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 14:05:05.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4598.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4598.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4598.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4598.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4598.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4598.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4598.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4598.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4598.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4598.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4598.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 239.3.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.3.239_udp@PTR;check="$$(dig +tcp +noall +answer +search 239.3.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.3.239_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4598.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4598.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4598.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4598.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4598.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4598.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4598.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4598.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4598.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4598.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4598.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 239.3.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.3.239_udp@PTR;check="$$(dig +tcp +noall +answer +search 239.3.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.3.239_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 7 14:05:11.361: INFO: Unable to read wheezy_udp@dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:11.364: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:11.367: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:11.370: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:11.408: INFO: Unable to read jessie_udp@dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:11.411: INFO: Unable to read jessie_tcp@dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:11.413: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:11.415: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:11.430: INFO: Lookups using dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f failed for: [wheezy_udp@dns-test-service.dns-4598.svc.cluster.local wheezy_tcp@dns-test-service.dns-4598.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local jessie_udp@dns-test-service.dns-4598.svc.cluster.local jessie_tcp@dns-test-service.dns-4598.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local]
Apr 7 14:05:16.435: INFO: Unable to read wheezy_udp@dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:16.439: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:16.442: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:16.446: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:16.468: INFO: Unable to read jessie_udp@dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:16.471: INFO: Unable to read jessie_tcp@dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:16.474: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:16.478: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:16.498: INFO: Lookups using dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f failed for: [wheezy_udp@dns-test-service.dns-4598.svc.cluster.local wheezy_tcp@dns-test-service.dns-4598.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local jessie_udp@dns-test-service.dns-4598.svc.cluster.local jessie_tcp@dns-test-service.dns-4598.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local]
Apr 7 14:05:21.435: INFO: Unable to read wheezy_udp@dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:21.439: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:21.443: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:21.447: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:21.467: INFO: Unable to read jessie_udp@dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:21.469: INFO: Unable to read jessie_tcp@dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:21.471: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:21.473: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:21.488: INFO: Lookups using dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f failed for: [wheezy_udp@dns-test-service.dns-4598.svc.cluster.local wheezy_tcp@dns-test-service.dns-4598.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local jessie_udp@dns-test-service.dns-4598.svc.cluster.local jessie_tcp@dns-test-service.dns-4598.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local]
Apr 7 14:05:26.435: INFO: Unable to read wheezy_udp@dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:26.439: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:26.443: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:26.446: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:26.469: INFO: Unable to read jessie_udp@dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:26.472: INFO: Unable to read jessie_tcp@dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:26.475: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:26.478: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:26.498: INFO: Lookups using dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f failed for: [wheezy_udp@dns-test-service.dns-4598.svc.cluster.local wheezy_tcp@dns-test-service.dns-4598.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local jessie_udp@dns-test-service.dns-4598.svc.cluster.local jessie_tcp@dns-test-service.dns-4598.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local]
Apr 7 14:05:31.434: INFO: Unable to read wheezy_udp@dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:31.438: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:31.441: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:31.445: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:31.467: INFO: Unable to read jessie_udp@dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:31.471: INFO: Unable to read jessie_tcp@dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:31.474: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:31.477: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:31.495: INFO: Lookups using dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f failed for: [wheezy_udp@dns-test-service.dns-4598.svc.cluster.local wheezy_tcp@dns-test-service.dns-4598.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local jessie_udp@dns-test-service.dns-4598.svc.cluster.local jessie_tcp@dns-test-service.dns-4598.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local]
Apr 7 14:05:36.434: INFO: Unable to read wheezy_udp@dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:36.439: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:36.442: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:36.445: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:36.467: INFO: Unable to read jessie_udp@dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:36.470: INFO: Unable to read jessie_tcp@dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:36.474: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:36.477: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local from pod dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f: the server could not find the requested resource (get pods dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f)
Apr 7 14:05:36.495: INFO: Lookups using dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f failed for: [wheezy_udp@dns-test-service.dns-4598.svc.cluster.local wheezy_tcp@dns-test-service.dns-4598.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local jessie_udp@dns-test-service.dns-4598.svc.cluster.local jessie_tcp@dns-test-service.dns-4598.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4598.svc.cluster.local]
Apr 7 14:05:41.518: INFO: DNS probes using dns-4598/dns-test-baa7d093-cd28-40b0-98d9-5cef4d028a6f succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 14:05:42.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4598" for this suite.
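The probe scripts above all query names built from a few fixed templates: the service A record `<service>.<ns>.svc.cluster.local`, the SRV record `_<port>._<proto>.` prefixed onto it, the pod A record derived from the pod IP with dots replaced by dashes (the `awk` transform in the script), and the reverse PTR name. A hedged sketch of those constructions, using values taken directly from the log:

```python
# Name templates exercised by the DNS conformance test above.

def service_dns(name, namespace, domain="cluster.local"):
    """A/AAAA record name for a Service."""
    return f"{name}.{namespace}.svc.{domain}"

def srv_dns(port, proto, name, namespace, domain="cluster.local"):
    """SRV record name for a named port on a Service."""
    return f"_{port}._{proto}.{service_dns(name, namespace, domain)}"

def pod_a_record(ip, namespace, domain="cluster.local"):
    """Pod A record: mirrors the awk transform in the probe script (dots -> dashes)."""
    return f"{ip.replace('.', '-')}.{namespace}.pod.{domain}"

def ptr_name(ip):
    """Reverse-lookup name: octets reversed under in-addr.arpa."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa."

assert service_dns("dns-test-service", "dns-4598") == "dns-test-service.dns-4598.svc.cluster.local"
assert srv_dns("http", "tcp", "dns-test-service", "dns-4598") == "_http._tcp.dns-test-service.dns-4598.svc.cluster.local"
assert pod_a_record("10.244.1.169", "dns-4598") == "10-244-1-169.dns-4598.pod.cluster.local"
assert ptr_name("10.101.3.239") == "239.3.101.10.in-addr.arpa."
```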
Apr 7 14:05:48.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:05:48.308: INFO: namespace dns-4598 deletion completed in 6.096867266s • [SLOW TEST:43.120 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:05:48.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 7 14:05:48.387: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2260081a-a05b-4e6b-9d7b-fc189a3bab61" in namespace "projected-1336" to be "success or failure" Apr 7 14:05:48.390: INFO: Pod "downwardapi-volume-2260081a-a05b-4e6b-9d7b-fc189a3bab61": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.442103ms Apr 7 14:05:50.395: INFO: Pod "downwardapi-volume-2260081a-a05b-4e6b-9d7b-fc189a3bab61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00795199s Apr 7 14:05:52.400: INFO: Pod "downwardapi-volume-2260081a-a05b-4e6b-9d7b-fc189a3bab61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012737045s STEP: Saw pod success Apr 7 14:05:52.400: INFO: Pod "downwardapi-volume-2260081a-a05b-4e6b-9d7b-fc189a3bab61" satisfied condition "success or failure" Apr 7 14:05:52.403: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-2260081a-a05b-4e6b-9d7b-fc189a3bab61 container client-container: STEP: delete the pod Apr 7 14:05:52.435: INFO: Waiting for pod downwardapi-volume-2260081a-a05b-4e6b-9d7b-fc189a3bab61 to disappear Apr 7 14:05:52.451: INFO: Pod downwardapi-volume-2260081a-a05b-4e6b-9d7b-fc189a3bab61 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:05:52.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1336" for this suite. 
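The test above verifies that when a container sets no memory limit, the downward API reports the node's allocatable memory as the default. A sketch of that fallback rule (the function name and byte arithmetic are assumptions for illustration):

```python
def effective_memory_limit(container_limit_bytes, node_allocatable_bytes):
    """Downward API behavior checked by this test: a container with no
    memory limit sees the node's allocatable memory as its limit."""
    if container_limit_bytes is not None:
        return container_limit_bytes
    return node_allocatable_bytes
```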
Apr 7 14:05:58.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:05:58.560: INFO: namespace projected-1336 deletion completed in 6.10601509s • [SLOW TEST:10.252 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:05:58.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 7 14:05:58.615: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6e48d8e1-e655-4e08-87da-e99224c7e3f6" in namespace "downward-api-1359" to be "success or failure" Apr 7 14:05:58.630: INFO: Pod "downwardapi-volume-6e48d8e1-e655-4e08-87da-e99224c7e3f6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.963419ms Apr 7 14:06:00.634: INFO: Pod "downwardapi-volume-6e48d8e1-e655-4e08-87da-e99224c7e3f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018548373s Apr 7 14:06:02.638: INFO: Pod "downwardapi-volume-6e48d8e1-e655-4e08-87da-e99224c7e3f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022256739s STEP: Saw pod success Apr 7 14:06:02.638: INFO: Pod "downwardapi-volume-6e48d8e1-e655-4e08-87da-e99224c7e3f6" satisfied condition "success or failure" Apr 7 14:06:02.640: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-6e48d8e1-e655-4e08-87da-e99224c7e3f6 container client-container: STEP: delete the pod Apr 7 14:06:02.665: INFO: Waiting for pod downwardapi-volume-6e48d8e1-e655-4e08-87da-e99224c7e3f6 to disappear Apr 7 14:06:02.678: INFO: Pod downwardapi-volume-6e48d8e1-e655-4e08-87da-e99224c7e3f6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:06:02.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1359" for this suite. 
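The memory-request test above relies on `resourceFieldRef` arithmetic: the value written into the downward API volume file is the resource quantity divided by the field's divisor (default `1`), rounded up. A hedged sketch of that math (the unit table and rounding here are assumptions about quantity handling, not code from the test):

```python
import math

# Divisor suffixes commonly used with resourceFieldRef, in bytes.
DIVISORS = {"1": 1, "1Ki": 2**10, "1Mi": 2**20, "1Gi": 2**30}

def downward_api_value(quantity_bytes, divisor="1"):
    """Value exposed for a resourceFieldRef: the quantity divided by
    the divisor, rounded up to a whole number."""
    return math.ceil(quantity_bytes / DIVISORS[divisor])
```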
Apr 7 14:06:08.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:06:08.768: INFO: namespace downward-api-1359 deletion completed in 6.086529904s • [SLOW TEST:10.208 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:06:08.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 7 14:06:08.824: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f5dad899-0472-486b-a54d-66ea681a8a3a" in namespace "projected-5627" to be "success or failure" Apr 7 14:06:08.828: INFO: Pod "downwardapi-volume-f5dad899-0472-486b-a54d-66ea681a8a3a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.73429ms Apr 7 14:06:10.832: INFO: Pod "downwardapi-volume-f5dad899-0472-486b-a54d-66ea681a8a3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007988813s Apr 7 14:06:12.836: INFO: Pod "downwardapi-volume-f5dad899-0472-486b-a54d-66ea681a8a3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012397042s STEP: Saw pod success Apr 7 14:06:12.836: INFO: Pod "downwardapi-volume-f5dad899-0472-486b-a54d-66ea681a8a3a" satisfied condition "success or failure" Apr 7 14:06:12.840: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f5dad899-0472-486b-a54d-66ea681a8a3a container client-container: STEP: delete the pod Apr 7 14:06:12.872: INFO: Waiting for pod downwardapi-volume-f5dad899-0472-486b-a54d-66ea681a8a3a to disappear Apr 7 14:06:12.875: INFO: Pod downwardapi-volume-f5dad899-0472-486b-a54d-66ea681a8a3a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:06:12.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5627" for this suite. 
Apr 7 14:06:18.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:06:18.964: INFO: namespace projected-5627 deletion completed in 6.085474956s • [SLOW TEST:10.195 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:06:18.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-4126 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 7 14:06:19.011: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 7 14:06:43.179: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.174:8080/dial?request=hostName&protocol=udp&host=10.244.2.104&port=8081&tries=1'] Namespace:pod-network-test-4126 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Apr 7 14:06:43.179: INFO: >>> kubeConfig: /root/.kube/config I0407 14:06:43.212293 6 log.go:172] (0xc0006a9ef0) (0xc001c17040) Create stream I0407 14:06:43.212325 6 log.go:172] (0xc0006a9ef0) (0xc001c17040) Stream added, broadcasting: 1 I0407 14:06:43.214352 6 log.go:172] (0xc0006a9ef0) Reply frame received for 1 I0407 14:06:43.214382 6 log.go:172] (0xc0006a9ef0) (0xc001c17220) Create stream I0407 14:06:43.214394 6 log.go:172] (0xc0006a9ef0) (0xc001c17220) Stream added, broadcasting: 3 I0407 14:06:43.215232 6 log.go:172] (0xc0006a9ef0) Reply frame received for 3 I0407 14:06:43.215270 6 log.go:172] (0xc0006a9ef0) (0xc000a6b360) Create stream I0407 14:06:43.215288 6 log.go:172] (0xc0006a9ef0) (0xc000a6b360) Stream added, broadcasting: 5 I0407 14:06:43.216345 6 log.go:172] (0xc0006a9ef0) Reply frame received for 5 I0407 14:06:43.319543 6 log.go:172] (0xc0006a9ef0) Data frame received for 3 I0407 14:06:43.319577 6 log.go:172] (0xc001c17220) (3) Data frame handling I0407 14:06:43.319606 6 log.go:172] (0xc001c17220) (3) Data frame sent I0407 14:06:43.320511 6 log.go:172] (0xc0006a9ef0) Data frame received for 5 I0407 14:06:43.320547 6 log.go:172] (0xc000a6b360) (5) Data frame handling I0407 14:06:43.320650 6 log.go:172] (0xc0006a9ef0) Data frame received for 3 I0407 14:06:43.320683 6 log.go:172] (0xc001c17220) (3) Data frame handling I0407 14:06:43.322793 6 log.go:172] (0xc0006a9ef0) Data frame received for 1 I0407 14:06:43.322839 6 log.go:172] (0xc001c17040) (1) Data frame handling I0407 14:06:43.322872 6 log.go:172] (0xc001c17040) (1) Data frame sent I0407 14:06:43.322905 6 log.go:172] (0xc0006a9ef0) (0xc001c17040) Stream removed, broadcasting: 1 I0407 14:06:43.322937 6 log.go:172] (0xc0006a9ef0) Go away received I0407 14:06:43.323069 6 log.go:172] (0xc0006a9ef0) (0xc001c17040) Stream removed, broadcasting: 1 I0407 14:06:43.323093 6 log.go:172] (0xc0006a9ef0) (0xc001c17220) Stream removed, broadcasting: 3 I0407 
14:06:43.323108 6 log.go:172] (0xc0006a9ef0) (0xc000a6b360) Stream removed, broadcasting: 5 Apr 7 14:06:43.323: INFO: Waiting for endpoints: map[] Apr 7 14:06:43.327: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.174:8080/dial?request=hostName&protocol=udp&host=10.244.1.173&port=8081&tries=1'] Namespace:pod-network-test-4126 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 7 14:06:43.327: INFO: >>> kubeConfig: /root/.kube/config I0407 14:06:43.357361 6 log.go:172] (0xc003534e70) (0xc00039d860) Create stream I0407 14:06:43.357392 6 log.go:172] (0xc003534e70) (0xc00039d860) Stream added, broadcasting: 1 I0407 14:06:43.359073 6 log.go:172] (0xc003534e70) Reply frame received for 1 I0407 14:06:43.359115 6 log.go:172] (0xc003534e70) (0xc00039d900) Create stream I0407 14:06:43.359125 6 log.go:172] (0xc003534e70) (0xc00039d900) Stream added, broadcasting: 3 I0407 14:06:43.359933 6 log.go:172] (0xc003534e70) Reply frame received for 3 I0407 14:06:43.359981 6 log.go:172] (0xc003534e70) (0xc00039d9a0) Create stream I0407 14:06:43.359997 6 log.go:172] (0xc003534e70) (0xc00039d9a0) Stream added, broadcasting: 5 I0407 14:06:43.360866 6 log.go:172] (0xc003534e70) Reply frame received for 5 I0407 14:06:43.439028 6 log.go:172] (0xc003534e70) Data frame received for 3 I0407 14:06:43.439055 6 log.go:172] (0xc00039d900) (3) Data frame handling I0407 14:06:43.439069 6 log.go:172] (0xc00039d900) (3) Data frame sent I0407 14:06:43.439609 6 log.go:172] (0xc003534e70) Data frame received for 3 I0407 14:06:43.439637 6 log.go:172] (0xc00039d900) (3) Data frame handling I0407 14:06:43.439777 6 log.go:172] (0xc003534e70) Data frame received for 5 I0407 14:06:43.439794 6 log.go:172] (0xc00039d9a0) (5) Data frame handling I0407 14:06:43.441961 6 log.go:172] (0xc003534e70) Data frame received for 1 I0407 14:06:43.441974 6 log.go:172] (0xc00039d860) (1) Data frame handling I0407 
14:06:43.441985 6 log.go:172] (0xc00039d860) (1) Data frame sent I0407 14:06:43.442113 6 log.go:172] (0xc003534e70) (0xc00039d860) Stream removed, broadcasting: 1 I0407 14:06:43.442232 6 log.go:172] (0xc003534e70) Go away received I0407 14:06:43.442249 6 log.go:172] (0xc003534e70) (0xc00039d860) Stream removed, broadcasting: 1 I0407 14:06:43.442271 6 log.go:172] (0xc003534e70) (0xc00039d900) Stream removed, broadcasting: 3 I0407 14:06:43.442284 6 log.go:172] (0xc003534e70) (0xc00039d9a0) Stream removed, broadcasting: 5 Apr 7 14:06:43.442: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:06:43.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4126" for this suite. Apr 7 14:07:05.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:07:05.552: INFO: namespace pod-network-test-4126 deletion completed in 22.105478158s • [SLOW TEST:46.587 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes 
client Apr 7 14:07:05.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 7 14:07:05.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7904' Apr 7 14:07:05.707: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 7 14:07:05.707: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Apr 7 14:07:05.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-7904' Apr 7 14:07:05.820: INFO: stderr: "" Apr 7 14:07:05.820: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:07:05.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7904" for this suite. 
Apr 7 14:07:11.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:07:11.963: INFO: namespace kubectl-7904 deletion completed in 6.140357967s • [SLOW TEST:6.411 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:07:11.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 7 14:07:12.064: INFO: Waiting up to 5m0s for pod "pod-13779b23-0196-44c3-9f75-ed379dd3835d" in namespace "emptydir-5536" to be "success or failure" Apr 7 14:07:12.069: INFO: Pod "pod-13779b23-0196-44c3-9f75-ed379dd3835d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.217638ms Apr 7 14:07:14.073: INFO: Pod "pod-13779b23-0196-44c3-9f75-ed379dd3835d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008687038s Apr 7 14:07:16.077: INFO: Pod "pod-13779b23-0196-44c3-9f75-ed379dd3835d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012885689s STEP: Saw pod success Apr 7 14:07:16.077: INFO: Pod "pod-13779b23-0196-44c3-9f75-ed379dd3835d" satisfied condition "success or failure" Apr 7 14:07:16.080: INFO: Trying to get logs from node iruya-worker pod pod-13779b23-0196-44c3-9f75-ed379dd3835d container test-container: STEP: delete the pod Apr 7 14:07:16.111: INFO: Waiting for pod pod-13779b23-0196-44c3-9f75-ed379dd3835d to disappear Apr 7 14:07:16.135: INFO: Pod pod-13779b23-0196-44c3-9f75-ed379dd3835d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:07:16.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5536" for this suite. Apr 7 14:07:22.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:07:22.262: INFO: namespace emptydir-5536 deletion completed in 6.123195897s • [SLOW TEST:10.298 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:07:22.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 7 14:07:26.853: INFO: Successfully updated pod "pod-update-ee867f7b-5c06-47da-8b73-e09c481e3bf2" STEP: verifying the updated pod is in kubernetes Apr 7 14:07:26.862: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:07:26.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8089" for this suite. Apr 7 14:07:48.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:07:48.953: INFO: namespace pods-8089 deletion completed in 22.088118965s • [SLOW TEST:26.691 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:07:48.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default 
service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-318933a5-5acf-428d-a282-3def248c831e STEP: Creating a pod to test consume secrets Apr 7 14:07:49.018: INFO: Waiting up to 5m0s for pod "pod-secrets-af681c8b-2a0b-4ebd-9b7d-c9584b87065b" in namespace "secrets-3979" to be "success or failure" Apr 7 14:07:49.034: INFO: Pod "pod-secrets-af681c8b-2a0b-4ebd-9b7d-c9584b87065b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.197097ms Apr 7 14:07:51.038: INFO: Pod "pod-secrets-af681c8b-2a0b-4ebd-9b7d-c9584b87065b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020027776s Apr 7 14:07:53.042: INFO: Pod "pod-secrets-af681c8b-2a0b-4ebd-9b7d-c9584b87065b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024459925s STEP: Saw pod success Apr 7 14:07:53.042: INFO: Pod "pod-secrets-af681c8b-2a0b-4ebd-9b7d-c9584b87065b" satisfied condition "success or failure" Apr 7 14:07:53.045: INFO: Trying to get logs from node iruya-worker pod pod-secrets-af681c8b-2a0b-4ebd-9b7d-c9584b87065b container secret-volume-test: STEP: delete the pod Apr 7 14:07:53.064: INFO: Waiting for pod pod-secrets-af681c8b-2a0b-4ebd-9b7d-c9584b87065b to disappear Apr 7 14:07:53.068: INFO: Pod pod-secrets-af681c8b-2a0b-4ebd-9b7d-c9584b87065b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:07:53.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3979" for this suite. 
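The "volume with mappings" variant above projects each secret key to an explicit relative path via `items`, rather than using the key name as the file name. A small sketch of that projection (the helper name and dict shapes are assumptions for illustration):

```python
def project_secret(secret_data, items):
    """Secret volume with mappings: each item places one secret key's
    value at the item's relative path inside the volume."""
    return {item["path"]: secret_data[item["key"]] for item in items}
```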
Apr 7 14:07:59.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:07:59.165: INFO: namespace secrets-3979 deletion completed in 6.093488435s • [SLOW TEST:10.212 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:07:59.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-845de31c-963a-4793-b15d-a16dadde420f STEP: Creating secret with name s-test-opt-upd-1ccabfe3-3ec5-4d96-8606-ae40471e4c90 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-845de31c-963a-4793-b15d-a16dadde420f STEP: Updating secret s-test-opt-upd-1ccabfe3-3ec5-4d96-8606-ae40471e4c90 STEP: Creating secret with name s-test-opt-create-9a76c8c5-42cc-4b01-aea2-0409f8845b56 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:09:35.774: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2755" for this suite. Apr 7 14:09:47.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:09:47.877: INFO: namespace secrets-2755 deletion completed in 12.100031911s • [SLOW TEST:108.712 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:09:47.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 7 14:09:47.940: INFO: Waiting up to 5m0s for pod "pod-88d94540-c356-4c4a-830c-302a4856dd8c" in namespace "emptydir-4498" to be "success or failure" Apr 7 14:09:47.987: INFO: Pod "pod-88d94540-c356-4c4a-830c-302a4856dd8c": Phase="Pending", Reason="", readiness=false. Elapsed: 46.681368ms Apr 7 14:09:49.991: INFO: Pod "pod-88d94540-c356-4c4a-830c-302a4856dd8c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.050452779s Apr 7 14:09:51.995: INFO: Pod "pod-88d94540-c356-4c4a-830c-302a4856dd8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054935236s STEP: Saw pod success Apr 7 14:09:51.995: INFO: Pod "pod-88d94540-c356-4c4a-830c-302a4856dd8c" satisfied condition "success or failure" Apr 7 14:09:51.999: INFO: Trying to get logs from node iruya-worker pod pod-88d94540-c356-4c4a-830c-302a4856dd8c container test-container: STEP: delete the pod Apr 7 14:09:52.018: INFO: Waiting for pod pod-88d94540-c356-4c4a-830c-302a4856dd8c to disappear Apr 7 14:09:52.022: INFO: Pod pod-88d94540-c356-4c4a-830c-302a4856dd8c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:09:52.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4498" for this suite. Apr 7 14:09:58.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:09:58.118: INFO: namespace emptydir-4498 deletion completed in 6.091900084s • [SLOW TEST:10.240 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:09:58.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components
Apr 7 14:09:58.162: INFO:
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Apr 7 14:09:58.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2853'
Apr 7 14:10:00.984: INFO: stderr: ""
Apr 7 14:10:00.984: INFO: stdout: "service/redis-slave created\n"
Apr 7 14:10:00.984: INFO:
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Apr 7 14:10:00.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2853'
Apr 7 14:10:01.259: INFO: stderr: ""
Apr 7 14:10:01.259: INFO: stdout: "service/redis-master created\n"
Apr 7 14:10:01.260: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Apr 7 14:10:01.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2853'
Apr 7 14:10:01.554: INFO: stderr: ""
Apr 7 14:10:01.555: INFO: stdout: "service/frontend created\n"
Apr 7 14:10:01.555: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Apr 7 14:10:01.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2853'
Apr 7 14:10:01.801: INFO: stderr: ""
Apr 7 14:10:01.801: INFO: stdout: "deployment.apps/frontend created\n"
Apr 7 14:10:01.801: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 7 14:10:01.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2853'
Apr 7 14:10:02.138: INFO: stderr: ""
Apr 7 14:10:02.138: INFO: stdout: "deployment.apps/redis-master created\n"
Apr 7 14:10:02.138: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Apr 7 14:10:02.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2853'
Apr 7 14:10:02.382: INFO: stderr: ""
Apr 7 14:10:02.382: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Apr 7 14:10:02.382: INFO: Waiting for all frontend pods to be Running.
Apr 7 14:10:12.432: INFO: Waiting for frontend to serve content.
Apr 7 14:10:12.448: INFO: Trying to add a new entry to the guestbook.
Apr 7 14:10:12.464: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Apr 7 14:10:12.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2853'
Apr 7 14:10:12.615: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 7 14:10:12.615: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Apr 7 14:10:12.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2853'
Apr 7 14:10:12.780: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Apr 7 14:10:12.780: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Apr 7 14:10:12.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2853' Apr 7 14:10:12.878: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 7 14:10:12.879: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 7 14:10:12.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2853' Apr 7 14:10:12.975: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 7 14:10:12.975: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 7 14:10:12.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2853' Apr 7 14:10:13.089: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 7 14:10:13.089: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Apr 7 14:10:13.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2853' Apr 7 14:10:13.186: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 7 14:10:13.186: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:10:13.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2853" for this suite. Apr 7 14:10:53.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:10:53.656: INFO: namespace kubectl-2853 deletion completed in 40.380177561s • [SLOW TEST:55.538 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:10:53.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet 
hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 7 14:11:01.773: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 7 14:11:01.788: INFO: Pod pod-with-prestop-exec-hook still exists Apr 7 14:11:03.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 7 14:11:03.792: INFO: Pod pod-with-prestop-exec-hook still exists Apr 7 14:11:05.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 7 14:11:05.793: INFO: Pod pod-with-prestop-exec-hook still exists Apr 7 14:11:07.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 7 14:11:07.792: INFO: Pod pod-with-prestop-exec-hook still exists Apr 7 14:11:09.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 7 14:11:09.792: INFO: Pod pod-with-prestop-exec-hook still exists Apr 7 14:11:11.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 7 14:11:11.792: INFO: Pod pod-with-prestop-exec-hook still exists Apr 7 14:11:13.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 7 14:11:13.792: INFO: Pod pod-with-prestop-exec-hook still exists Apr 7 14:11:15.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 7 14:11:15.792: INFO: Pod pod-with-prestop-exec-hook still exists Apr 7 14:11:17.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 7 14:11:17.792: INFO: Pod pod-with-prestop-exec-hook still exists Apr 7 14:11:19.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 7 14:11:19.792: INFO: Pod pod-with-prestop-exec-hook still exists Apr 7 14:11:21.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 7 14:11:21.792: INFO: Pod pod-with-prestop-exec-hook still exists Apr 7 14:11:23.788: INFO: Waiting for pod 
pod-with-prestop-exec-hook to disappear Apr 7 14:11:23.792: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:11:23.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2482" for this suite. Apr 7 14:11:45.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:11:45.887: INFO: namespace container-lifecycle-hook-2482 deletion completed in 22.084528674s • [SLOW TEST:52.231 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:11:45.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: 
modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 7 14:11:45.999: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-8609,SelfLink:/api/v1/namespaces/watch-8609/configmaps/e2e-watch-test-resource-version,UID:f5dd0e10-b2ea-4419-9663-b59f4fabc276,ResourceVersion:4136105,Generation:0,CreationTimestamp:2020-04-07 14:11:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 7 14:11:45.999: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-8609,SelfLink:/api/v1/namespaces/watch-8609/configmaps/e2e-watch-test-resource-version,UID:f5dd0e10-b2ea-4419-9663-b59f4fabc276,ResourceVersion:4136106,Generation:0,CreationTimestamp:2020-04-07 14:11:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:11:45.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8609" for this suite. 
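For readability, the two watch events dumped above (MODIFIED and DELETED) both carry the same ConfigMap object. Reconstructed from the logged fields, its manifest is roughly the following (server-assigned fields such as uid, resourceVersion, and creationTimestamp are omitted here):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-resource-version
  namespace: watch-8609
  labels:
    watch-this-configmap: from-resource-version
data:
  mutation: "2"   # the test bumps this key once per modification; both events show the final value
```

The assertion being made is that a watch opened at the resourceVersion returned by the first update replays only the later MODIFIED and DELETED events, which is exactly what the two dumps show.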
Apr 7 14:11:52.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:11:52.193: INFO: namespace watch-8609 deletion completed in 6.13449596s • [SLOW TEST:6.305 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:11:52.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 7 14:11:52.261: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9cf88a85-a5d6-4727-9122-f1c9121675f4" in namespace "downward-api-7672" to be "success or failure" Apr 7 14:11:52.282: INFO: Pod "downwardapi-volume-9cf88a85-a5d6-4727-9122-f1c9121675f4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.926244ms Apr 7 14:11:54.286: INFO: Pod "downwardapi-volume-9cf88a85-a5d6-4727-9122-f1c9121675f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024914279s Apr 7 14:11:56.290: INFO: Pod "downwardapi-volume-9cf88a85-a5d6-4727-9122-f1c9121675f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029167675s STEP: Saw pod success Apr 7 14:11:56.290: INFO: Pod "downwardapi-volume-9cf88a85-a5d6-4727-9122-f1c9121675f4" satisfied condition "success or failure" Apr 7 14:11:56.292: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-9cf88a85-a5d6-4727-9122-f1c9121675f4 container client-container: STEP: delete the pod Apr 7 14:11:56.311: INFO: Waiting for pod downwardapi-volume-9cf88a85-a5d6-4727-9122-f1c9121675f4 to disappear Apr 7 14:11:56.315: INFO: Pod downwardapi-volume-9cf88a85-a5d6-4727-9122-f1c9121675f4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:11:56.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7672" for this suite. 
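The "should provide podname only" case mounts a downwardAPI volume that projects metadata.name into a file, which the container then prints. A minimal equivalent pod looks roughly like this (the pod name, image, command, and paths below are illustrative, not taken from the log; the suite uses its own test image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # projected into /etc/podinfo/podname
```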
Apr 7 14:12:02.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:12:02.412: INFO: namespace downward-api-7672 deletion completed in 6.093745161s • [SLOW TEST:10.218 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:12:02.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Apr 7 14:12:07.008: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-179 pod-service-account-ed8cd6a4-7e05-47dc-9124-a7f8973265b1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 7 14:12:07.246: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-179 pod-service-account-ed8cd6a4-7e05-47dc-9124-a7f8973265b1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Apr 7 14:12:07.442: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-179 
pod-service-account-ed8cd6a4-7e05-47dc-9124-a7f8973265b1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:12:07.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-179" for this suite. Apr 7 14:12:13.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:12:13.751: INFO: namespace svcaccounts-179 deletion completed in 6.108555464s • [SLOW TEST:11.339 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:12:13.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 7 14:12:13.806: INFO: Waiting up to 5m0s for pod "pod-63ce4c1e-6c89-43e7-9e05-6fe69f1fe281" in namespace "emptydir-4621" to be "success or failure" 
Apr 7 14:12:13.821: INFO: Pod "pod-63ce4c1e-6c89-43e7-9e05-6fe69f1fe281": Phase="Pending", Reason="", readiness=false. Elapsed: 14.574038ms Apr 7 14:12:15.824: INFO: Pod "pod-63ce4c1e-6c89-43e7-9e05-6fe69f1fe281": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018517818s Apr 7 14:12:17.829: INFO: Pod "pod-63ce4c1e-6c89-43e7-9e05-6fe69f1fe281": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02294227s STEP: Saw pod success Apr 7 14:12:17.829: INFO: Pod "pod-63ce4c1e-6c89-43e7-9e05-6fe69f1fe281" satisfied condition "success or failure" Apr 7 14:12:17.832: INFO: Trying to get logs from node iruya-worker2 pod pod-63ce4c1e-6c89-43e7-9e05-6fe69f1fe281 container test-container: STEP: delete the pod Apr 7 14:12:17.849: INFO: Waiting for pod pod-63ce4c1e-6c89-43e7-9e05-6fe69f1fe281 to disappear Apr 7 14:12:17.851: INFO: Pod pod-63ce4c1e-6c89-43e7-9e05-6fe69f1fe281 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:12:17.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4621" for this suite. 
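As a sketch of what the (non-root,0644,default) variant creates: a pod running as a non-root UID that writes a file with mode 0644 on an emptyDir using the node-default medium, then verifies the mode. Names, image, and command below are illustrative (the suite uses its own mount-test image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-example   # illustrative name
spec:
  securityContext:
    runAsUser: 1001              # the "non-root" part of the variant
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && stat -c %a /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # default medium; medium: Memory gives the tmpfs variants
```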
Apr 7 14:12:23.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:12:23.942: INFO: namespace emptydir-4621 deletion completed in 6.088190767s • [SLOW TEST:10.190 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:12:23.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 7 14:12:24.003: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a3fbadaa-ee7e-4515-abba-2930bb472bae" in namespace "projected-8472" to be "success or failure" Apr 7 14:12:24.022: INFO: Pod "downwardapi-volume-a3fbadaa-ee7e-4515-abba-2930bb472bae": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.940054ms Apr 7 14:12:26.026: INFO: Pod "downwardapi-volume-a3fbadaa-ee7e-4515-abba-2930bb472bae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022686403s Apr 7 14:12:28.030: INFO: Pod "downwardapi-volume-a3fbadaa-ee7e-4515-abba-2930bb472bae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026923875s STEP: Saw pod success Apr 7 14:12:28.030: INFO: Pod "downwardapi-volume-a3fbadaa-ee7e-4515-abba-2930bb472bae" satisfied condition "success or failure" Apr 7 14:12:28.033: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a3fbadaa-ee7e-4515-abba-2930bb472bae container client-container: STEP: delete the pod Apr 7 14:12:28.072: INFO: Waiting for pod downwardapi-volume-a3fbadaa-ee7e-4515-abba-2930bb472bae to disappear Apr 7 14:12:28.091: INFO: Pod downwardapi-volume-a3fbadaa-ee7e-4515-abba-2930bb472bae no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:12:28.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8472" for this suite. 
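The "should set mode on item file" case uses a projected volume with a downwardAPI source and an explicit per-item mode. Sketched below with illustrative names; the mode value the test actually chooses is not shown in the log, 0400 here is just an example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-mode-example   # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -c %a /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            mode: 0400           # the per-item file mode under test (example value)
            fieldRef:
              fieldPath: metadata.name
```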
Apr 7 14:12:34.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:12:34.189: INFO: namespace projected-8472 deletion completed in 6.093798374s • [SLOW TEST:10.246 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:12:34.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0407 14:13:14.859576 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 7 14:13:14.859: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 14:13:14.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9850" for this suite.
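The orphaning behavior verified above comes from the delete options sent with the ReplicationController deletion. Roughly, the DELETE request carries a body like this (sketch):

```yaml
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan   # leave dependents (the RC's pods) running
```

With kubectl of this vintage the equivalent flag is `--cascade=false` (later renamed `--cascade=orphan`). The test then waits 30 seconds and checks that the garbage collector did not delete the orphaned pods.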
Apr 7 14:13:22.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:13:22.948: INFO: namespace gc-9850 deletion completed in 8.085590556s • [SLOW TEST:48.760 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:13:22.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 7 14:13:23.028: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7964e5f9-4bea-42a4-99df-f1e0436d1d53" in namespace "projected-3729" to be "success or failure" Apr 7 14:13:23.039: INFO: Pod "downwardapi-volume-7964e5f9-4bea-42a4-99df-f1e0436d1d53": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.872031ms Apr 7 14:13:25.043: INFO: Pod "downwardapi-volume-7964e5f9-4bea-42a4-99df-f1e0436d1d53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015051242s Apr 7 14:13:27.048: INFO: Pod "downwardapi-volume-7964e5f9-4bea-42a4-99df-f1e0436d1d53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020640448s STEP: Saw pod success Apr 7 14:13:27.048: INFO: Pod "downwardapi-volume-7964e5f9-4bea-42a4-99df-f1e0436d1d53" satisfied condition "success or failure" Apr 7 14:13:27.052: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-7964e5f9-4bea-42a4-99df-f1e0436d1d53 container client-container: STEP: delete the pod Apr 7 14:13:27.086: INFO: Waiting for pod downwardapi-volume-7964e5f9-4bea-42a4-99df-f1e0436d1d53 to disappear Apr 7 14:13:27.092: INFO: Pod downwardapi-volume-7964e5f9-4bea-42a4-99df-f1e0436d1d53 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:13:27.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3729" for this suite. 
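The "container's cpu limit" case projects a resourceFieldRef into the volume rather than an object field. A minimal equivalent pod (names, image, and values below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-cpu-limit-example   # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"                 # the limit being projected
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```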
Apr 7 14:13:33.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:13:33.183: INFO: namespace projected-3729 deletion completed in 6.087905668s • [SLOW TEST:10.233 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:13:33.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 7 14:13:33.251: INFO: Creating deployment "nginx-deployment" Apr 7 14:13:33.266: INFO: Waiting for observed generation 1 Apr 7 14:13:35.276: INFO: Waiting for all required pods to come up Apr 7 14:13:35.281: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 7 14:13:43.291: INFO: Waiting for deployment "nginx-deployment" to complete Apr 7 14:13:43.298: INFO: Updating deployment "nginx-deployment" with a non-existent image Apr 7 14:13:43.304: INFO: Updating deployment nginx-deployment Apr 7 14:13:43.304: INFO: 
Waiting for observed generation 2 Apr 7 14:13:45.324: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 7 14:13:45.328: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 7 14:13:45.331: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Apr 7 14:13:45.392: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 7 14:13:45.392: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 7 14:13:45.394: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Apr 7 14:13:45.399: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Apr 7 14:13:45.399: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Apr 7 14:13:45.405: INFO: Updating deployment nginx-deployment Apr 7 14:13:45.405: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Apr 7 14:13:45.888: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 7 14:13:45.931: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 7 14:13:48.245: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-7592,SelfLink:/apis/apps/v1/namespaces/deployment-7592/deployments/nginx-deployment,UID:60c7f30a-e712-40f5-b893-a0ac7d1e8061,ResourceVersion:4136892,Generation:3,CreationTimestamp:2020-04-07 14:13:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-04-07 14:13:45 +0000 UTC 2020-04-07 14:13:45 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-04-07 14:13:46 +0000 UTC 2020-04-07 14:13:33 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Apr 7 14:13:48.249: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-7592,SelfLink:/apis/apps/v1/namespaces/deployment-7592/replicasets/nginx-deployment-55fb7cb77f,UID:e9714f5b-ec15-4654-a355-6355fac334e2,ResourceVersion:4136889,Generation:3,CreationTimestamp:2020-04-07 14:13:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 60c7f30a-e712-40f5-b893-a0ac7d1e8061 0xc002feea27 0xc002feea28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 7 14:13:48.249: INFO: All old ReplicaSets of Deployment "nginx-deployment": Apr 7 14:13:48.249: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-7592,SelfLink:/apis/apps/v1/namespaces/deployment-7592/replicasets/nginx-deployment-7b8c6f4498,UID:8ced4264-312b-41e2-998b-92951f54e9ad,ResourceVersion:4136876,Generation:3,CreationTimestamp:2020-04-07 14:13:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 60c7f30a-e712-40f5-b893-a0ac7d1e8061 0xc002feeaf7 0xc002feeaf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Apr 7 14:13:48.255: INFO: Pod "nginx-deployment-55fb7cb77f-7jkd5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7jkd5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-55fb7cb77f-7jkd5,UID:95f44960-69e6-4490-810a-b8d9225a62bc,ResourceVersion:4136812,Generation:0,CreationTimestamp:2020-04-07 14:13:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e9714f5b-ec15-4654-a355-6355fac334e2 0xc002f79ee7 0xc002f79ee8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002f79f60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f79f80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-07 14:13:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.256: INFO: Pod "nginx-deployment-55fb7cb77f-8bm7t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8bm7t,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-55fb7cb77f-8bm7t,UID:9572e3cd-7072-4fb1-9d75-935915f710b0,ResourceVersion:4136893,Generation:0,CreationTimestamp:2020-04-07 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e9714f5b-ec15-4654-a355-6355fac334e2 0xc0033d0060 0xc0033d0061}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033d00e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033d0100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-07 14:13:46 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.256: INFO: Pod "nginx-deployment-55fb7cb77f-9fqxl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9fqxl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-55fb7cb77f-9fqxl,UID:53b10bd1-f53a-4116-8214-f0fc21830757,ResourceVersion:4136890,Generation:0,CreationTimestamp:2020-04-07 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e9714f5b-ec15-4654-a355-6355fac334e2 0xc0033d01d0 0xc0033d01d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033d0250} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033d0270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-07 14:13:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.256: INFO: Pod "nginx-deployment-55fb7cb77f-dgsth" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dgsth,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-55fb7cb77f-dgsth,UID:65fa71d7-c823-4c19-bddb-7ad062fbb065,ResourceVersion:4136918,Generation:0,CreationTimestamp:2020-04-07 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e9714f5b-ec15-4654-a355-6355fac334e2 0xc0033d0340 0xc0033d0341}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0033d03c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033d03e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-07 14:13:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.256: INFO: Pod "nginx-deployment-55fb7cb77f-fpj6v" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fpj6v,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-55fb7cb77f-fpj6v,UID:697fa3aa-8e84-479a-af5f-5224bda857c7,ResourceVersion:4136945,Generation:0,CreationTimestamp:2020-04-07 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e9714f5b-ec15-4654-a355-6355fac334e2 0xc0033d04b0 0xc0033d04b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033d0530} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033d0550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-07 14:13:46 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.256: INFO: Pod "nginx-deployment-55fb7cb77f-kpnh8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kpnh8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-55fb7cb77f-kpnh8,UID:6b4b0be1-09f9-4340-8927-41d0445531c8,ResourceVersion:4136796,Generation:0,CreationTimestamp:2020-04-07 14:13:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e9714f5b-ec15-4654-a355-6355fac334e2 0xc0033d0620 0xc0033d0621}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033d06a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033d06c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-07 14:13:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.257: INFO: Pod "nginx-deployment-55fb7cb77f-ksxlk" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ksxlk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-55fb7cb77f-ksxlk,UID:1e87ab5f-f551-4036-896b-ba57b5f23549,ResourceVersion:4136951,Generation:0,CreationTimestamp:2020-04-07 14:13:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e9714f5b-ec15-4654-a355-6355fac334e2 0xc0033d07b0 0xc0033d07b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0033d0830} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033d0850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.125,StartTime:2020-04-07 14:13:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.257: INFO: Pod "nginx-deployment-55fb7cb77f-lrrmn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lrrmn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-55fb7cb77f-lrrmn,UID:646a7555-cee7-4f33-a582-9f23db54d588,ResourceVersion:4136801,Generation:0,CreationTimestamp:2020-04-07 14:13:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e9714f5b-ec15-4654-a355-6355fac334e2 0xc0033d0940 
0xc0033d0941}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033d09c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033d09e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:43 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-07 14:13:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.257: INFO: Pod "nginx-deployment-55fb7cb77f-r48f8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-r48f8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-55fb7cb77f-r48f8,UID:3e9638cf-1fa7-4287-aba5-212865172f4a,ResourceVersion:4136928,Generation:0,CreationTimestamp:2020-04-07 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e9714f5b-ec15-4654-a355-6355fac334e2 0xc0033d0ab0 0xc0033d0ab1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033d0b30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033d0b50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-07 14:13:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.258: INFO: Pod "nginx-deployment-55fb7cb77f-sj2mf" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-sj2mf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-55fb7cb77f-sj2mf,UID:fc1ca58a-b5df-4619-9e28-569cc2b06b48,ResourceVersion:4136908,Generation:0,CreationTimestamp:2020-04-07 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e9714f5b-ec15-4654-a355-6355fac334e2 0xc0033d0c20 0xc0033d0c21}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0033d0ca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033d0cc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-07 14:13:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.258: INFO: Pod "nginx-deployment-55fb7cb77f-wr8rs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wr8rs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-55fb7cb77f-wr8rs,UID:5a7fcc3f-7004-47a4-90c6-50986a347e2a,ResourceVersion:4136939,Generation:0,CreationTimestamp:2020-04-07 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e9714f5b-ec15-4654-a355-6355fac334e2 0xc0033d0d90 0xc0033d0d91}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033d0e10} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033d0e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-07 14:13:46 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.258: INFO: Pod "nginx-deployment-55fb7cb77f-wrh8r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wrh8r,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-55fb7cb77f-wrh8r,UID:79ebd47d-f382-480f-8653-73defa5ce345,ResourceVersion:4136814,Generation:0,CreationTimestamp:2020-04-07 14:13:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e9714f5b-ec15-4654-a355-6355fac334e2 0xc0033d0f00 0xc0033d0f01}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033d0f80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033d0fa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-07 14:13:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.258: INFO: Pod "nginx-deployment-55fb7cb77f-xk749" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xk749,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-55fb7cb77f-xk749,UID:be932605-50b7-4c01-8fd5-ed42ab3ad2e0,ResourceVersion:4136911,Generation:0,CreationTimestamp:2020-04-07 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e9714f5b-ec15-4654-a355-6355fac334e2 0xc0033d1070 0xc0033d1071}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0033d10f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033d1110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-07 14:13:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.259: INFO: Pod "nginx-deployment-7b8c6f4498-2mdlg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2mdlg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-7b8c6f4498-2mdlg,UID:9e1a32bc-3d12-44cb-b230-8e2b975a381e,ResourceVersion:4136731,Generation:0,CreationTimestamp:2020-04-07 14:13:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ced4264-312b-41e2-998b-92951f54e9ad 0xc0033d11e0 0xc0033d11e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033d1250} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033d1270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.190,StartTime:2020-04-07 14:13:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-07 14:13:40 +0000 UTC,} nil} {nil nil nil} true 0 
docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7c40abe49ca298f05563d42edd44cf8ddb73f890c2f28aedf808c0f58baa1f33}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.259: INFO: Pod "nginx-deployment-7b8c6f4498-4c44s" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4c44s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-7b8c6f4498-4c44s,UID:e7956a9e-5e2f-4b7e-9ab1-2db72fbe2915,ResourceVersion:4136743,Generation:0,CreationTimestamp:2020-04-07 14:13:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ced4264-312b-41e2-998b-92951f54e9ad 0xc0033d1347 0xc0033d1348}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033d13c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033d13e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.122,StartTime:2020-04-07 14:13:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-07 14:13:40 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://615f527a22ee799f1b9e221c9a88c0a8a4bc9bf5c173d28d99474a6504e88c26}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.259: INFO: Pod "nginx-deployment-7b8c6f4498-4wd6h" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4wd6h,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-7b8c6f4498-4wd6h,UID:2a127dc1-70de-4c38-aec9-f56419b69a38,ResourceVersion:4136763,Generation:0,CreationTimestamp:2020-04-07 14:13:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ced4264-312b-41e2-998b-92951f54e9ad 0xc0033d14b7 0xc0033d14b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033d1530} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033d1550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.124,StartTime:2020-04-07 14:13:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-07 14:13:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e1cf195ffbd116f5b848fb54bafd4a3664fc2fea3f6bb4bdf6fd533040fdd15f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.259: INFO: Pod "nginx-deployment-7b8c6f4498-64njq" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-64njq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-7b8c6f4498-64njq,UID:2451b688-2f76-4071-ba5a-748a1aa90fad,ResourceVersion:4136724,Generation:0,CreationTimestamp:2020-04-07 14:13:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ced4264-312b-41e2-998b-92951f54e9ad 0xc0033d1627 0xc0033d1628}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033d16a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033d16c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.121,StartTime:2020-04-07 14:13:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-07 14:13:39 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4a5ee6c3c4c9972edc32ed5e86b0ad5fabd798a6acd5a4849200c0b1328e4310}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.260: INFO: Pod "nginx-deployment-7b8c6f4498-7jx9j" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7jx9j,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-7b8c6f4498-7jx9j,UID:9e79e0b2-32fb-49a6-90a5-6f5629243705,ResourceVersion:4136901,Generation:0,CreationTimestamp:2020-04-07 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ced4264-312b-41e2-998b-92951f54e9ad 0xc0033d1797 0xc0033d1798}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033d1810} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033d1830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-07 14:13:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.260: INFO: Pod "nginx-deployment-7b8c6f4498-88kjh" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-88kjh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-7b8c6f4498-88kjh,UID:7f1449ac-0aeb-4d82-b1cb-aeb41d6a82b3,ResourceVersion:4136870,Generation:0,CreationTimestamp:2020-04-07 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ced4264-312b-41e2-998b-92951f54e9ad 0xc0033d18f7 0xc0033d18f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033d1970} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033d1990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.260: INFO: Pod "nginx-deployment-7b8c6f4498-8qvv7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8qvv7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-7b8c6f4498-8qvv7,UID:5a2142c3-72c9-48f1-a095-c06cda287d70,ResourceVersion:4136941,Generation:0,CreationTimestamp:2020-04-07 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ced4264-312b-41e2-998b-92951f54e9ad 0xc0033d1a17 0xc0033d1a18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033d1a90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033d1ab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-07 14:13:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.260: INFO: Pod "nginx-deployment-7b8c6f4498-bhmmz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bhmmz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-7b8c6f4498-bhmmz,UID:5e5fe16b-13f0-46c0-ab2f-ab9af4195f17,ResourceVersion:4136950,Generation:0,CreationTimestamp:2020-04-07 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ced4264-312b-41e2-998b-92951f54e9ad 0xc0033d1b77 0xc0033d1b78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033d1bf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033d1c10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-07 14:13:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.261: INFO: Pod "nginx-deployment-7b8c6f4498-cm8tw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cm8tw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-7b8c6f4498-cm8tw,UID:32e948b5-09e9-4fae-a865-dccd1aee9c18,ResourceVersion:4136904,Generation:0,CreationTimestamp:2020-04-07 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ced4264-312b-41e2-998b-92951f54e9ad 0xc0033d1cd7 0xc0033d1cd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033d1d50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033d1d70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-07 14:13:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.261: INFO: Pod "nginx-deployment-7b8c6f4498-gfxhc" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gfxhc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-7b8c6f4498-gfxhc,UID:0f80201c-19d7-4fea-99ad-748e50c6251c,ResourceVersion:4136944,Generation:0,CreationTimestamp:2020-04-07 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ced4264-312b-41e2-998b-92951f54e9ad 0xc0033d1e37 0xc0033d1e38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033d1eb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033d1ee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-07 14:13:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.261: INFO: Pod "nginx-deployment-7b8c6f4498-ghnsp" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ghnsp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-7b8c6f4498-ghnsp,UID:dc3380e0-de2d-4ef7-a114-052464ff8f31,ResourceVersion:4136723,Generation:0,CreationTimestamp:2020-04-07 14:13:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ced4264-312b-41e2-998b-92951f54e9ad 0xc0033d1fa7 0xc0033d1fa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031d6020} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031d6040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.189,StartTime:2020-04-07 14:13:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-07 14:13:39 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://cfc87550058e2f2dfa99f8daf525d11f038b9a74c63d66665463f56ff75c63d0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.261: INFO: Pod "nginx-deployment-7b8c6f4498-jpw2t" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jpw2t,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-7b8c6f4498-jpw2t,UID:784318a1-86b3-44f8-b56e-056f9fdc6451,ResourceVersion:4136933,Generation:0,CreationTimestamp:2020-04-07 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ced4264-312b-41e2-998b-92951f54e9ad 0xc0031d6117 0xc0031d6118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031d61a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031d61c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-07 14:13:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.262: INFO: Pod "nginx-deployment-7b8c6f4498-lhtdl" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lhtdl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-7b8c6f4498-lhtdl,UID:5f6f4aa2-e33b-4601-81ae-7c46f0cdab7b,ResourceVersion:4136883,Generation:0,CreationTimestamp:2020-04-07 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ced4264-312b-41e2-998b-92951f54e9ad 0xc0031d6287 0xc0031d6288}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031d6300} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031d6320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-07 14:13:45 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.262: INFO: Pod "nginx-deployment-7b8c6f4498-mh75n" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mh75n,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-7b8c6f4498-mh75n,UID:4214845d-3497-4fcf-9805-7afc320d0d21,ResourceVersion:4136885,Generation:0,CreationTimestamp:2020-04-07 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ced4264-312b-41e2-998b-92951f54e9ad 0xc0031d63e7 0xc0031d63e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031d6460} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031d6480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-07 14:13:45 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.262: INFO: Pod "nginx-deployment-7b8c6f4498-pbvcj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pbvcj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-7b8c6f4498-pbvcj,UID:40d99aa9-5571-43fb-9b03-95e776ec93e0,ResourceVersion:4136915,Generation:0,CreationTimestamp:2020-04-07 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ced4264-312b-41e2-998b-92951f54e9ad 0xc0031d6547 0xc0031d6548}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031d65c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031d65e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-07 14:13:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.262: INFO: Pod "nginx-deployment-7b8c6f4498-pr22z" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pr22z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-7b8c6f4498-pr22z,UID:74eb0f72-fb45-42e4-8469-8a46bd9a6713,ResourceVersion:4136756,Generation:0,CreationTimestamp:2020-04-07 14:13:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ced4264-312b-41e2-998b-92951f54e9ad 0xc0031d66a7 0xc0031d66a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031d6720} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031d6740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.193,StartTime:2020-04-07 14:13:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-07 14:13:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7ea7749859806c97157bf21291341fe6ddc2d48d59d25d56aaec62785c928058}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.263: INFO: Pod "nginx-deployment-7b8c6f4498-rp9zk" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rp9zk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-7b8c6f4498-rp9zk,UID:e664af53-f5c9-4338-8460-6f7a12450135,ResourceVersion:4136740,Generation:0,CreationTimestamp:2020-04-07 14:13:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ced4264-312b-41e2-998b-92951f54e9ad 0xc0031d6817 0xc0031d6818}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031d6890} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031d68b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.123,StartTime:2020-04-07 14:13:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-07 14:13:41 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://bcc3c07f85921051ae8d001df090edc736e443d0f6300334e8da5666d46e32a2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.263: INFO: Pod "nginx-deployment-7b8c6f4498-sv4zh" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sv4zh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-7b8c6f4498-sv4zh,UID:a2b83031-04bf-441a-8ccf-4133fd203f5c,ResourceVersion:4136874,Generation:0,CreationTimestamp:2020-04-07 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ced4264-312b-41e2-998b-92951f54e9ad 0xc0031d6987 0xc0031d6988}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031d6a00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031d6a20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-07 14:13:45 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.263: INFO: Pod "nginx-deployment-7b8c6f4498-wt8gx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wt8gx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-7b8c6f4498-wt8gx,UID:30e48450-9efc-4fce-855c-b93b2593cb79,ResourceVersion:4136896,Generation:0,CreationTimestamp:2020-04-07 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ced4264-312b-41e2-998b-92951f54e9ad 0xc0031d6b57 0xc0031d6b58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031d6c20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031d6c50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-07 14:13:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 7 14:13:48.264: INFO: Pod "nginx-deployment-7b8c6f4498-z5f82" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-z5f82,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7592,SelfLink:/api/v1/namespaces/deployment-7592/pods/nginx-deployment-7b8c6f4498-z5f82,UID:31e3ae53-0f28-4cda-b50b-8a33c207ea21,ResourceVersion:4136711,Generation:0,CreationTimestamp:2020-04-07 14:13:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ced4264-312b-41e2-998b-92951f54e9ad 0xc0031d6d87 0xc0031d6d88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kprv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kprv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kprv4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031d6e00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031d6e20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:13:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.120,StartTime:2020-04-07 14:13:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-07 14:13:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5f718da044a91a8f73522ce6e038b8be891ca80f2fa3d71e89a5c87c5c423185}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:13:48.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"deployment-7592" for this suite. Apr 7 14:14:02.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:14:02.510: INFO: namespace deployment-7592 deletion completed in 14.242126356s • [SLOW TEST:29.327 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:14:02.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Apr 7 14:14:02.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9468' Apr 7 14:14:03.048: INFO: stderr: "" Apr 7 14:14:03.048: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 7 14:14:03.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9468' Apr 7 14:14:03.436: INFO: stderr: "" Apr 7 14:14:03.436: INFO: stdout: "update-demo-nautilus-bjm49 update-demo-nautilus-nltnc " Apr 7 14:14:03.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bjm49 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9468' Apr 7 14:14:03.524: INFO: stderr: "" Apr 7 14:14:03.524: INFO: stdout: "" Apr 7 14:14:03.524: INFO: update-demo-nautilus-bjm49 is created but not running Apr 7 14:14:08.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9468' Apr 7 14:14:08.800: INFO: stderr: "" Apr 7 14:14:08.800: INFO: stdout: "update-demo-nautilus-bjm49 update-demo-nautilus-nltnc " Apr 7 14:14:08.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bjm49 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9468' Apr 7 14:14:08.898: INFO: stderr: "" Apr 7 14:14:08.898: INFO: stdout: "true" Apr 7 14:14:08.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bjm49 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9468' Apr 7 14:14:08.991: INFO: stderr: "" Apr 7 14:14:08.991: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 7 14:14:08.991: INFO: validating pod update-demo-nautilus-bjm49 Apr 7 14:14:08.994: INFO: got data: { "image": "nautilus.jpg" } Apr 7 14:14:08.994: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 7 14:14:08.994: INFO: update-demo-nautilus-bjm49 is verified up and running Apr 7 14:14:08.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nltnc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9468' Apr 7 14:14:09.076: INFO: stderr: "" Apr 7 14:14:09.076: INFO: stdout: "" Apr 7 14:14:09.076: INFO: update-demo-nautilus-nltnc is created but not running Apr 7 14:14:14.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9468' Apr 7 14:14:14.169: INFO: stderr: "" Apr 7 14:14:14.169: INFO: stdout: "update-demo-nautilus-bjm49 update-demo-nautilus-nltnc " Apr 7 14:14:14.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bjm49 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9468' Apr 7 14:14:14.257: INFO: stderr: "" Apr 7 14:14:14.257: INFO: stdout: "true" Apr 7 14:14:14.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bjm49 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9468' Apr 7 14:14:14.347: INFO: stderr: "" Apr 7 14:14:14.347: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 7 14:14:14.347: INFO: validating pod update-demo-nautilus-bjm49 Apr 7 14:14:14.350: INFO: got data: { "image": "nautilus.jpg" } Apr 7 14:14:14.350: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 7 14:14:14.350: INFO: update-demo-nautilus-bjm49 is verified up and running Apr 7 14:14:14.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nltnc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9468' Apr 7 14:14:14.452: INFO: stderr: "" Apr 7 14:14:14.452: INFO: stdout: "true" Apr 7 14:14:14.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nltnc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9468' Apr 7 14:14:14.558: INFO: stderr: "" Apr 7 14:14:14.558: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 7 14:14:14.558: INFO: validating pod update-demo-nautilus-nltnc Apr 7 14:14:14.562: INFO: got data: { "image": "nautilus.jpg" } Apr 7 14:14:14.562: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 7 14:14:14.562: INFO: update-demo-nautilus-nltnc is verified up and running STEP: rolling-update to new replication controller Apr 7 14:14:14.564: INFO: scanned /root for discovery docs: Apr 7 14:14:14.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-9468' Apr 7 14:14:37.101: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 7 14:14:37.101: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 7 14:14:37.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9468' Apr 7 14:14:37.196: INFO: stderr: "" Apr 7 14:14:37.196: INFO: stdout: "update-demo-kitten-9jgn6 update-demo-kitten-sm5cm " Apr 7 14:14:37.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9jgn6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9468' Apr 7 14:14:37.300: INFO: stderr: "" Apr 7 14:14:37.300: INFO: stdout: "true" Apr 7 14:14:37.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9jgn6 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9468' Apr 7 14:14:37.394: INFO: stderr: "" Apr 7 14:14:37.394: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 7 14:14:37.394: INFO: validating pod update-demo-kitten-9jgn6 Apr 7 14:14:37.397: INFO: got data: { "image": "kitten.jpg" } Apr 7 14:14:37.397: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 7 14:14:37.397: INFO: update-demo-kitten-9jgn6 is verified up and running Apr 7 14:14:37.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sm5cm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9468' Apr 7 14:14:37.485: INFO: stderr: "" Apr 7 14:14:37.485: INFO: stdout: "true" Apr 7 14:14:37.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sm5cm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9468' Apr 7 14:14:37.568: INFO: stderr: "" Apr 7 14:14:37.568: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 7 14:14:37.568: INFO: validating pod update-demo-kitten-sm5cm Apr 7 14:14:37.572: INFO: got data: { "image": "kitten.jpg" } Apr 7 14:14:37.572: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 7 14:14:37.572: INFO: update-demo-kitten-sm5cm is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:14:37.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9468" for this suite. 
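The readiness check that the Update Demo steps above poll repeatedly is the kubectl go-template `{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}`, which prints `true` only once the named container reports a running state. An illustrative Python equivalent (not code from the test suite; the function name is ours, and the pod shapes follow the dumps earlier in this log):

```python
# Illustrative re-implementation of the go-template polled in this log:
# emit "true" for each containerStatus whose name matches and whose
# state map contains a "running" entry; otherwise emit nothing.

def container_running(pod: dict, container_name: str = "update-demo") -> str:
    out = ""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == container_name and "running" in status.get("state", {}):
            out += "true"
    return out

# Shapes abbreviated from the Pod dumps above: a Pending pod reports a
# waiting state (ContainerCreating), a Running pod reports a running state.
pending = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"waiting": {"reason": "ContainerCreating"}}}]}}
running = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"running": {"startedAt": "2020-04-07T14:14:08Z"}}}]}}

print(repr(container_running(pending)))  # ''   -- matches the log's stdout: ""
print(repr(container_running(running)))  # 'true' -- matches the log's stdout: "true"
```

This explains the loop seen in the transcript: the test re-runs the same `kubectl get pods ... -o template` command every few seconds until stdout flips from `""` to `"true"` for each pod.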
Apr 7 14:14:59.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:14:59.666: INFO: namespace kubectl-9468 deletion completed in 22.091312952s • [SLOW TEST:57.156 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:14:59.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0407 14:15:10.826228 6 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 7 14:15:10.826: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:15:10.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3007" for this suite. 
Apr 7 14:15:18.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 14:15:18.966: INFO: namespace gc-3007 deletion completed in 8.136317698s
• [SLOW TEST:19.299 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 14:15:18.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1063
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Apr 7 14:15:19.061: INFO: Found 0 stateful pods, waiting for 3
Apr 7 14:15:29.066: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 7 14:15:29.066: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 7 14:15:29.066: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Apr 7 14:15:29.095: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Apr 7 14:15:39.174: INFO: Updating stateful set ss2
Apr 7 14:15:39.184: INFO: Waiting for Pod statefulset-1063/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 7 14:15:49.194: INFO: Waiting for Pod statefulset-1063/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Apr 7 14:15:59.334: INFO: Found 2 stateful pods, waiting for 3
Apr 7 14:16:09.339: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 7 14:16:09.339: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 7 14:16:09.339: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Apr 7 14:16:09.363: INFO: Updating stateful set ss2
Apr 7 14:16:09.373: INFO: Waiting for Pod statefulset-1063/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 7 14:16:19.382: INFO: Waiting for Pod statefulset-1063/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 7 14:16:29.398: INFO: Updating stateful set ss2
Apr 7 14:16:29.427: INFO: Waiting for StatefulSet statefulset-1063/ss2 to complete update
Apr 7 14:16:29.427: INFO: Waiting for Pod statefulset-1063/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 7 14:16:39.436: INFO: Waiting for StatefulSet statefulset-1063/ss2 to complete update
Apr 7 14:16:39.436: INFO: Waiting for Pod statefulset-1063/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 7 14:16:49.434: INFO: Deleting all statefulset in ns statefulset-1063
Apr 7 14:16:49.438: INFO: Scaling statefulset ss2 to 0
Apr 7 14:17:19.455: INFO: Waiting for statefulset status.replicas updated to 0
Apr 7 14:17:19.459: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 14:17:19.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1063" for this suite.
Apr 7 14:17:25.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 14:17:25.583: INFO: namespace statefulset-1063 deletion completed in 6.103662631s
• [SLOW TEST:126.616 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 14:17:25.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4184
[It] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Apr 7 14:17:25.685: INFO: Found 0 stateful pods, waiting for 3
Apr 7 14:17:35.690: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 7 14:17:35.690: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 7 14:17:35.690: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Apr 7 14:17:45.690: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 7 14:17:45.690: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 7 14:17:45.690: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Apr 7 14:17:45.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4184 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 7 14:17:45.941: INFO: stderr: 
"I0407 14:17:45.835428 2933 log.go:172] (0xc000426630) (0xc0006f8b40) Create stream\nI0407 14:17:45.835483 2933 log.go:172] (0xc000426630) (0xc0006f8b40) Stream added, broadcasting: 1\nI0407 14:17:45.840187 2933 log.go:172] (0xc000426630) Reply frame received for 1\nI0407 14:17:45.840240 2933 log.go:172] (0xc000426630) (0xc0006f8280) Create stream\nI0407 14:17:45.840256 2933 log.go:172] (0xc000426630) (0xc0006f8280) Stream added, broadcasting: 3\nI0407 14:17:45.841360 2933 log.go:172] (0xc000426630) Reply frame received for 3\nI0407 14:17:45.841423 2933 log.go:172] (0xc000426630) (0xc00002a000) Create stream\nI0407 14:17:45.841469 2933 log.go:172] (0xc000426630) (0xc00002a000) Stream added, broadcasting: 5\nI0407 14:17:45.842554 2933 log.go:172] (0xc000426630) Reply frame received for 5\nI0407 14:17:45.905495 2933 log.go:172] (0xc000426630) Data frame received for 5\nI0407 14:17:45.905522 2933 log.go:172] (0xc00002a000) (5) Data frame handling\nI0407 14:17:45.905538 2933 log.go:172] (0xc00002a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0407 14:17:45.933469 2933 log.go:172] (0xc000426630) Data frame received for 3\nI0407 14:17:45.933495 2933 log.go:172] (0xc0006f8280) (3) Data frame handling\nI0407 14:17:45.933512 2933 log.go:172] (0xc0006f8280) (3) Data frame sent\nI0407 14:17:45.933705 2933 log.go:172] (0xc000426630) Data frame received for 3\nI0407 14:17:45.933723 2933 log.go:172] (0xc0006f8280) (3) Data frame handling\nI0407 14:17:45.934002 2933 log.go:172] (0xc000426630) Data frame received for 5\nI0407 14:17:45.934020 2933 log.go:172] (0xc00002a000) (5) Data frame handling\nI0407 14:17:45.935957 2933 log.go:172] (0xc000426630) Data frame received for 1\nI0407 14:17:45.935975 2933 log.go:172] (0xc0006f8b40) (1) Data frame handling\nI0407 14:17:45.935988 2933 log.go:172] (0xc0006f8b40) (1) Data frame sent\nI0407 14:17:45.936182 2933 log.go:172] (0xc000426630) (0xc0006f8b40) Stream removed, broadcasting: 1\nI0407 14:17:45.936237 
2933 log.go:172] (0xc000426630) Go away received\nI0407 14:17:45.936487 2933 log.go:172] (0xc000426630) (0xc0006f8b40) Stream removed, broadcasting: 1\nI0407 14:17:45.936508 2933 log.go:172] (0xc000426630) (0xc0006f8280) Stream removed, broadcasting: 3\nI0407 14:17:45.936520 2933 log.go:172] (0xc000426630) (0xc00002a000) Stream removed, broadcasting: 5\n" Apr 7 14:17:45.941: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 7 14:17:45.941: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Apr 7 14:17:55.974: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 7 14:18:05.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4184 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 7 14:18:06.211: INFO: stderr: "I0407 14:18:06.117056 2953 log.go:172] (0xc00013ae70) (0xc000790820) Create stream\nI0407 14:18:06.117234 2953 log.go:172] (0xc00013ae70) (0xc000790820) Stream added, broadcasting: 1\nI0407 14:18:06.121672 2953 log.go:172] (0xc00013ae70) Reply frame received for 1\nI0407 14:18:06.121731 2953 log.go:172] (0xc00013ae70) (0xc000790000) Create stream\nI0407 14:18:06.121748 2953 log.go:172] (0xc00013ae70) (0xc000790000) Stream added, broadcasting: 3\nI0407 14:18:06.123009 2953 log.go:172] (0xc00013ae70) Reply frame received for 3\nI0407 14:18:06.123051 2953 log.go:172] (0xc00013ae70) (0xc000790140) Create stream\nI0407 14:18:06.123068 2953 log.go:172] (0xc00013ae70) (0xc000790140) Stream added, broadcasting: 5\nI0407 14:18:06.124904 2953 log.go:172] (0xc00013ae70) Reply frame received for 5\nI0407 14:18:06.203766 2953 log.go:172] (0xc00013ae70) Data frame received for 5\nI0407 
14:18:06.203817 2953 log.go:172] (0xc000790140) (5) Data frame handling\nI0407 14:18:06.203829 2953 log.go:172] (0xc000790140) (5) Data frame sent\nI0407 14:18:06.203838 2953 log.go:172] (0xc00013ae70) Data frame received for 5\nI0407 14:18:06.203848 2953 log.go:172] (0xc000790140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0407 14:18:06.203878 2953 log.go:172] (0xc00013ae70) Data frame received for 3\nI0407 14:18:06.203887 2953 log.go:172] (0xc000790000) (3) Data frame handling\nI0407 14:18:06.203896 2953 log.go:172] (0xc000790000) (3) Data frame sent\nI0407 14:18:06.203903 2953 log.go:172] (0xc00013ae70) Data frame received for 3\nI0407 14:18:06.203910 2953 log.go:172] (0xc000790000) (3) Data frame handling\nI0407 14:18:06.205794 2953 log.go:172] (0xc00013ae70) Data frame received for 1\nI0407 14:18:06.205819 2953 log.go:172] (0xc000790820) (1) Data frame handling\nI0407 14:18:06.205843 2953 log.go:172] (0xc000790820) (1) Data frame sent\nI0407 14:18:06.205871 2953 log.go:172] (0xc00013ae70) (0xc000790820) Stream removed, broadcasting: 1\nI0407 14:18:06.205888 2953 log.go:172] (0xc00013ae70) Go away received\nI0407 14:18:06.206373 2953 log.go:172] (0xc00013ae70) (0xc000790820) Stream removed, broadcasting: 1\nI0407 14:18:06.206398 2953 log.go:172] (0xc00013ae70) (0xc000790000) Stream removed, broadcasting: 3\nI0407 14:18:06.206421 2953 log.go:172] (0xc00013ae70) (0xc000790140) Stream removed, broadcasting: 5\n" Apr 7 14:18:06.211: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 7 14:18:06.211: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 7 14:18:26.231: INFO: Waiting for StatefulSet statefulset-4184/ss2 to complete update Apr 7 14:18:26.231: INFO: Waiting for Pod statefulset-4184/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Apr 7 14:18:36.238: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4184 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 7 14:18:36.497: INFO: stderr: "I0407 14:18:36.365106 2974 log.go:172] (0xc000a08630) (0xc0005fca00) Create stream\nI0407 14:18:36.365279 2974 log.go:172] (0xc000a08630) (0xc0005fca00) Stream added, broadcasting: 1\nI0407 14:18:36.367605 2974 log.go:172] (0xc000a08630) Reply frame received for 1\nI0407 14:18:36.368437 2974 log.go:172] (0xc000a08630) (0xc000ab6000) Create stream\nI0407 14:18:36.368489 2974 log.go:172] (0xc000a08630) (0xc000ab6000) Stream added, broadcasting: 3\nI0407 14:18:36.370145 2974 log.go:172] (0xc000a08630) Reply frame received for 3\nI0407 14:18:36.370537 2974 log.go:172] (0xc000a08630) (0xc000ab60a0) Create stream\nI0407 14:18:36.370568 2974 log.go:172] (0xc000a08630) (0xc000ab60a0) Stream added, broadcasting: 5\nI0407 14:18:36.371507 2974 log.go:172] (0xc000a08630) Reply frame received for 5\nI0407 14:18:36.434762 2974 log.go:172] (0xc000a08630) Data frame received for 5\nI0407 14:18:36.434794 2974 log.go:172] (0xc000ab60a0) (5) Data frame handling\nI0407 14:18:36.434901 2974 log.go:172] (0xc000ab60a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0407 14:18:36.490506 2974 log.go:172] (0xc000a08630) Data frame received for 3\nI0407 14:18:36.490619 2974 log.go:172] (0xc000ab6000) (3) Data frame handling\nI0407 14:18:36.490643 2974 log.go:172] (0xc000ab6000) (3) Data frame sent\nI0407 14:18:36.490674 2974 log.go:172] (0xc000a08630) Data frame received for 5\nI0407 14:18:36.490683 2974 log.go:172] (0xc000ab60a0) (5) Data frame handling\nI0407 14:18:36.490792 2974 log.go:172] (0xc000a08630) Data frame received for 3\nI0407 14:18:36.490814 2974 log.go:172] (0xc000ab6000) (3) Data frame handling\nI0407 14:18:36.492567 2974 log.go:172] (0xc000a08630) Data frame received for 1\nI0407 14:18:36.492597 2974 log.go:172] (0xc0005fca00) (1) Data 
frame handling\nI0407 14:18:36.492627 2974 log.go:172] (0xc0005fca00) (1) Data frame sent\nI0407 14:18:36.492666 2974 log.go:172] (0xc000a08630) (0xc0005fca00) Stream removed, broadcasting: 1\nI0407 14:18:36.492707 2974 log.go:172] (0xc000a08630) Go away received\nI0407 14:18:36.493042 2974 log.go:172] (0xc000a08630) (0xc0005fca00) Stream removed, broadcasting: 1\nI0407 14:18:36.493057 2974 log.go:172] (0xc000a08630) (0xc000ab6000) Stream removed, broadcasting: 3\nI0407 14:18:36.493066 2974 log.go:172] (0xc000a08630) (0xc000ab60a0) Stream removed, broadcasting: 5\n" Apr 7 14:18:36.497: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 7 14:18:36.497: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 7 14:18:46.529: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 7 14:18:56.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4184 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 7 14:18:56.781: INFO: stderr: "I0407 14:18:56.680233 2996 log.go:172] (0xc0009b0420) (0xc000314820) Create stream\nI0407 14:18:56.680293 2996 log.go:172] (0xc0009b0420) (0xc000314820) Stream added, broadcasting: 1\nI0407 14:18:56.685080 2996 log.go:172] (0xc0009b0420) Reply frame received for 1\nI0407 14:18:56.685439 2996 log.go:172] (0xc0009b0420) (0xc0005e4320) Create stream\nI0407 14:18:56.685455 2996 log.go:172] (0xc0009b0420) (0xc0005e4320) Stream added, broadcasting: 3\nI0407 14:18:56.686309 2996 log.go:172] (0xc0009b0420) Reply frame received for 3\nI0407 14:18:56.686354 2996 log.go:172] (0xc0009b0420) (0xc000314000) Create stream\nI0407 14:18:56.686368 2996 log.go:172] (0xc0009b0420) (0xc000314000) Stream added, broadcasting: 5\nI0407 14:18:56.687152 2996 log.go:172] (0xc0009b0420) Reply frame received for 5\nI0407 14:18:56.775625 2996 log.go:172] 
(0xc0009b0420) Data frame received for 3\nI0407 14:18:56.775657 2996 log.go:172] (0xc0005e4320) (3) Data frame handling\nI0407 14:18:56.775667 2996 log.go:172] (0xc0005e4320) (3) Data frame sent\nI0407 14:18:56.775674 2996 log.go:172] (0xc0009b0420) Data frame received for 3\nI0407 14:18:56.775680 2996 log.go:172] (0xc0005e4320) (3) Data frame handling\nI0407 14:18:56.775705 2996 log.go:172] (0xc0009b0420) Data frame received for 5\nI0407 14:18:56.775712 2996 log.go:172] (0xc000314000) (5) Data frame handling\nI0407 14:18:56.775734 2996 log.go:172] (0xc000314000) (5) Data frame sent\nI0407 14:18:56.775740 2996 log.go:172] (0xc0009b0420) Data frame received for 5\nI0407 14:18:56.775745 2996 log.go:172] (0xc000314000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0407 14:18:56.777392 2996 log.go:172] (0xc0009b0420) Data frame received for 1\nI0407 14:18:56.777418 2996 log.go:172] (0xc000314820) (1) Data frame handling\nI0407 14:18:56.777438 2996 log.go:172] (0xc000314820) (1) Data frame sent\nI0407 14:18:56.777471 2996 log.go:172] (0xc0009b0420) (0xc000314820) Stream removed, broadcasting: 1\nI0407 14:18:56.777503 2996 log.go:172] (0xc0009b0420) Go away received\nI0407 14:18:56.777826 2996 log.go:172] (0xc0009b0420) (0xc000314820) Stream removed, broadcasting: 1\nI0407 14:18:56.777853 2996 log.go:172] (0xc0009b0420) (0xc0005e4320) Stream removed, broadcasting: 3\nI0407 14:18:56.777861 2996 log.go:172] (0xc0009b0420) (0xc000314000) Stream removed, broadcasting: 5\n" Apr 7 14:18:56.781: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 7 14:18:56.781: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 7 14:19:16.804: INFO: Waiting for StatefulSet statefulset-4184/ss2 to complete update Apr 7 14:19:16.804: INFO: Waiting for Pod statefulset-4184/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] 
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 7 14:19:26.819: INFO: Deleting all statefulset in ns statefulset-4184
Apr 7 14:19:26.822: INFO: Scaling statefulset ss2 to 0
Apr 7 14:19:56.836: INFO: Waiting for statefulset status.replicas updated to 0
Apr 7 14:19:56.839: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 14:19:56.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4184" for this suite.
Apr 7 14:20:02.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 14:20:02.947: INFO: namespace statefulset-4184 deletion completed in 6.093352164s
• [SLOW TEST:157.364 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 14:20:02.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-5468ed63-ec5d-4c49-86c9-dafc25f3a1f0 in namespace container-probe-9993
Apr 7 14:20:07.020: INFO: Started pod test-webserver-5468ed63-ec5d-4c49-86c9-dafc25f3a1f0 in namespace container-probe-9993
STEP: checking the pod's current state and verifying that restartCount is present
Apr 7 14:20:07.023: INFO: Initial restart count of pod test-webserver-5468ed63-ec5d-4c49-86c9-dafc25f3a1f0 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 14:24:07.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9993" for this suite.
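Editorial note: the probing test above starts a test-webserver pod with an HTTP liveness probe against /healthz and verifies its restartCount stays at 0 for four minutes. A pod of roughly that shape might look like the following sketch (the image tag and the probe timing values are assumptions for illustration, not values read from this run):

```yaml
# Sketch only: a pod whose container is health-checked over HTTP.
# The kubelet restarts the container if /healthz stops returning 2xx.
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver                 # the e2e pod name carries a UUID suffix
spec:
  containers:
  - name: test-webserver
    image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0   # assumed image
    livenessProbe:
      httpGet:
        path: /healthz                 # endpoint named in the test title
        port: 80
      initialDelaySeconds: 15          # illustrative timing values
      periodSeconds: 10
      failureThreshold: 3
```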
Apr 7 14:24:13.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:24:13.739: INFO: namespace container-probe-9993 deletion completed in 6.125054873s • [SLOW TEST:250.791 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:24:13.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-1476 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-1476 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1476 Apr 7 14:24:13.834: INFO: Found 0 stateful 
pods, waiting for 1 Apr 7 14:24:23.839: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 7 14:24:23.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1476 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 7 14:24:26.346: INFO: stderr: "I0407 14:24:26.218903 3016 log.go:172] (0xc000b28420) (0xc000b32780) Create stream\nI0407 14:24:26.218939 3016 log.go:172] (0xc000b28420) (0xc000b32780) Stream added, broadcasting: 1\nI0407 14:24:26.221503 3016 log.go:172] (0xc000b28420) Reply frame received for 1\nI0407 14:24:26.221557 3016 log.go:172] (0xc000b28420) (0xc000b0e000) Create stream\nI0407 14:24:26.221571 3016 log.go:172] (0xc000b28420) (0xc000b0e000) Stream added, broadcasting: 3\nI0407 14:24:26.222624 3016 log.go:172] (0xc000b28420) Reply frame received for 3\nI0407 14:24:26.222658 3016 log.go:172] (0xc000b28420) (0xc000b0e0a0) Create stream\nI0407 14:24:26.222675 3016 log.go:172] (0xc000b28420) (0xc000b0e0a0) Stream added, broadcasting: 5\nI0407 14:24:26.223653 3016 log.go:172] (0xc000b28420) Reply frame received for 5\nI0407 14:24:26.313902 3016 log.go:172] (0xc000b28420) Data frame received for 5\nI0407 14:24:26.313929 3016 log.go:172] (0xc000b0e0a0) (5) Data frame handling\nI0407 14:24:26.313947 3016 log.go:172] (0xc000b0e0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0407 14:24:26.338421 3016 log.go:172] (0xc000b28420) Data frame received for 3\nI0407 14:24:26.338449 3016 log.go:172] (0xc000b0e000) (3) Data frame handling\nI0407 14:24:26.338475 3016 log.go:172] (0xc000b28420) Data frame received for 5\nI0407 14:24:26.338521 3016 log.go:172] (0xc000b0e0a0) (5) Data frame handling\nI0407 14:24:26.338556 3016 log.go:172] (0xc000b0e000) (3) Data frame sent\nI0407 14:24:26.338575 3016 log.go:172] (0xc000b28420) Data frame received 
for 3\nI0407 14:24:26.338591 3016 log.go:172] (0xc000b0e000) (3) Data frame handling\nI0407 14:24:26.340596 3016 log.go:172] (0xc000b28420) Data frame received for 1\nI0407 14:24:26.340617 3016 log.go:172] (0xc000b32780) (1) Data frame handling\nI0407 14:24:26.340642 3016 log.go:172] (0xc000b32780) (1) Data frame sent\nI0407 14:24:26.340654 3016 log.go:172] (0xc000b28420) (0xc000b32780) Stream removed, broadcasting: 1\nI0407 14:24:26.340854 3016 log.go:172] (0xc000b28420) Go away received\nI0407 14:24:26.340997 3016 log.go:172] (0xc000b28420) (0xc000b32780) Stream removed, broadcasting: 1\nI0407 14:24:26.341048 3016 log.go:172] (0xc000b28420) (0xc000b0e000) Stream removed, broadcasting: 3\nI0407 14:24:26.341074 3016 log.go:172] (0xc000b28420) (0xc000b0e0a0) Stream removed, broadcasting: 5\n"
Apr 7 14:24:26.346: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 7 14:24:26.346: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Apr 7 14:24:26.350: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Apr 7 14:24:36.355: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Apr 7 14:24:36.355: INFO: Waiting for statefulset status.replicas updated to 0
Apr 7 14:24:36.375: INFO: POD NODE PHASE GRACE CONDITIONS
Apr 7 14:24:36.375: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:13 +0000 UTC }]
Apr 7 14:24:36.375: INFO: ss-1 Pending []
Apr 7 14:24:36.375: INFO:
Apr 7 14:24:36.375: INFO: StatefulSet ss has not reached scale 3, at 2
Apr 7 14:24:37.379: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991367064s
Apr 7 14:24:38.384: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986838915s
Apr 7 14:24:39.388: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.981757974s
Apr 7 14:24:40.394: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.977859517s
Apr 7 14:24:41.399: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.972554147s
Apr 7 14:24:42.403: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.967212617s
Apr 7 14:24:43.409: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.962693326s
Apr 7 14:24:44.414: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.957446135s
Apr 7 14:24:45.419: INFO: Verifying statefulset ss doesn't scale past 3 for another 952.331369ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1476
Apr 7 14:24:46.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1476 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 7 14:24:46.645: INFO: stderr: "I0407 14:24:46.555011 3051 log.go:172] (0xc000116e70) (0xc0003b0b40) Create stream\nI0407 14:24:46.555063 3051 log.go:172] (0xc000116e70) (0xc0003b0b40) Stream added, broadcasting: 1\nI0407 14:24:46.558112 3051 log.go:172] (0xc000116e70) Reply frame received for 1\nI0407 14:24:46.558174 3051 log.go:172] (0xc000116e70) (0xc00094c000) Create stream\nI0407 14:24:46.558201 3051 log.go:172] (0xc000116e70) (0xc00094c000) Stream added, broadcasting: 3\nI0407 14:24:46.559281 3051 log.go:172] (0xc000116e70) Reply frame received for 3\nI0407 14:24:46.559325 3051 log.go:172] (0xc000116e70) (0xc00094c0a0) Create stream\nI0407 14:24:46.559339 3051 log.go:172] (0xc000116e70) (0xc00094c0a0) Stream added, broadcasting: 5\nI0407
14:24:46.560436 3051 log.go:172] (0xc000116e70) Reply frame received for 5\nI0407 14:24:46.637726 3051 log.go:172] (0xc000116e70) Data frame received for 5\nI0407 14:24:46.637746 3051 log.go:172] (0xc00094c0a0) (5) Data frame handling\nI0407 14:24:46.637759 3051 log.go:172] (0xc00094c0a0) (5) Data frame sent\nI0407 14:24:46.637766 3051 log.go:172] (0xc000116e70) Data frame received for 5\nI0407 14:24:46.637771 3051 log.go:172] (0xc00094c0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0407 14:24:46.638047 3051 log.go:172] (0xc000116e70) Data frame received for 3\nI0407 14:24:46.638073 3051 log.go:172] (0xc00094c000) (3) Data frame handling\nI0407 14:24:46.638092 3051 log.go:172] (0xc00094c000) (3) Data frame sent\nI0407 14:24:46.638103 3051 log.go:172] (0xc000116e70) Data frame received for 3\nI0407 14:24:46.638113 3051 log.go:172] (0xc00094c000) (3) Data frame handling\nI0407 14:24:46.640024 3051 log.go:172] (0xc000116e70) Data frame received for 1\nI0407 14:24:46.640054 3051 log.go:172] (0xc0003b0b40) (1) Data frame handling\nI0407 14:24:46.640082 3051 log.go:172] (0xc0003b0b40) (1) Data frame sent\nI0407 14:24:46.640107 3051 log.go:172] (0xc000116e70) (0xc0003b0b40) Stream removed, broadcasting: 1\nI0407 14:24:46.640134 3051 log.go:172] (0xc000116e70) Go away received\nI0407 14:24:46.640523 3051 log.go:172] (0xc000116e70) (0xc0003b0b40) Stream removed, broadcasting: 1\nI0407 14:24:46.640553 3051 log.go:172] (0xc000116e70) (0xc00094c000) Stream removed, broadcasting: 3\nI0407 14:24:46.640568 3051 log.go:172] (0xc000116e70) (0xc00094c0a0) Stream removed, broadcasting: 5\n" Apr 7 14:24:46.645: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 7 14:24:46.645: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 7 14:24:46.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-1476 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 7 14:24:46.840: INFO: stderr: "I0407 14:24:46.776645 3072 log.go:172] (0xc000116d10) (0xc00089a5a0) Create stream\nI0407 14:24:46.776703 3072 log.go:172] (0xc000116d10) (0xc00089a5a0) Stream added, broadcasting: 1\nI0407 14:24:46.779189 3072 log.go:172] (0xc000116d10) Reply frame received for 1\nI0407 14:24:46.779235 3072 log.go:172] (0xc000116d10) (0xc000672320) Create stream\nI0407 14:24:46.779254 3072 log.go:172] (0xc000116d10) (0xc000672320) Stream added, broadcasting: 3\nI0407 14:24:46.780258 3072 log.go:172] (0xc000116d10) Reply frame received for 3\nI0407 14:24:46.780299 3072 log.go:172] (0xc000116d10) (0xc0006723c0) Create stream\nI0407 14:24:46.780317 3072 log.go:172] (0xc000116d10) (0xc0006723c0) Stream added, broadcasting: 5\nI0407 14:24:46.781540 3072 log.go:172] (0xc000116d10) Reply frame received for 5\nI0407 14:24:46.833360 3072 log.go:172] (0xc000116d10) Data frame received for 3\nI0407 14:24:46.833403 3072 log.go:172] (0xc000672320) (3) Data frame handling\nI0407 14:24:46.833419 3072 log.go:172] (0xc000672320) (3) Data frame sent\nI0407 14:24:46.833428 3072 log.go:172] (0xc000116d10) Data frame received for 3\nI0407 14:24:46.833436 3072 log.go:172] (0xc000672320) (3) Data frame handling\nI0407 14:24:46.833530 3072 log.go:172] (0xc000116d10) Data frame received for 5\nI0407 14:24:46.833540 3072 log.go:172] (0xc0006723c0) (5) Data frame handling\nI0407 14:24:46.833552 3072 log.go:172] (0xc0006723c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0407 14:24:46.833590 3072 log.go:172] (0xc000116d10) Data frame received for 5\nI0407 14:24:46.833601 3072 log.go:172] (0xc0006723c0) (5) Data frame handling\nI0407 14:24:46.835505 3072 log.go:172] (0xc000116d10) Data frame received for 1\nI0407 14:24:46.835548 3072 log.go:172] (0xc00089a5a0) (1) Data frame 
handling\nI0407 14:24:46.835619 3072 log.go:172] (0xc00089a5a0) (1) Data frame sent\nI0407 14:24:46.835652 3072 log.go:172] (0xc000116d10) (0xc00089a5a0) Stream removed, broadcasting: 1\nI0407 14:24:46.835678 3072 log.go:172] (0xc000116d10) Go away received\nI0407 14:24:46.836073 3072 log.go:172] (0xc000116d10) (0xc00089a5a0) Stream removed, broadcasting: 1\nI0407 14:24:46.836095 3072 log.go:172] (0xc000116d10) (0xc000672320) Stream removed, broadcasting: 3\nI0407 14:24:46.836106 3072 log.go:172] (0xc000116d10) (0xc0006723c0) Stream removed, broadcasting: 5\n" Apr 7 14:24:46.840: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 7 14:24:46.840: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 7 14:24:46.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1476 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 7 14:24:47.061: INFO: stderr: "I0407 14:24:46.969830 3093 log.go:172] (0xc000962370) (0xc000896640) Create stream\nI0407 14:24:46.969885 3093 log.go:172] (0xc000962370) (0xc000896640) Stream added, broadcasting: 1\nI0407 14:24:46.971963 3093 log.go:172] (0xc000962370) Reply frame received for 1\nI0407 14:24:46.972002 3093 log.go:172] (0xc000962370) (0xc000558280) Create stream\nI0407 14:24:46.972019 3093 log.go:172] (0xc000962370) (0xc000558280) Stream added, broadcasting: 3\nI0407 14:24:46.972831 3093 log.go:172] (0xc000962370) Reply frame received for 3\nI0407 14:24:46.972865 3093 log.go:172] (0xc000962370) (0xc000558320) Create stream\nI0407 14:24:46.972872 3093 log.go:172] (0xc000962370) (0xc000558320) Stream added, broadcasting: 5\nI0407 14:24:46.973997 3093 log.go:172] (0xc000962370) Reply frame received for 5\nI0407 14:24:47.054987 3093 log.go:172] (0xc000962370) Data frame received for 5\nI0407 14:24:47.055030 3093 log.go:172] (0xc000558320) (5) Data frame 
handling\nI0407 14:24:47.055066 3093 log.go:172] (0xc000558320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0407 14:24:47.055107 3093 log.go:172] (0xc000962370) Data frame received for 3\nI0407 14:24:47.055164 3093 log.go:172] (0xc000558280) (3) Data frame handling\nI0407 14:24:47.055198 3093 log.go:172] (0xc000962370) Data frame received for 5\nI0407 14:24:47.055240 3093 log.go:172] (0xc000558320) (5) Data frame handling\nI0407 14:24:47.055265 3093 log.go:172] (0xc000558280) (3) Data frame sent\nI0407 14:24:47.055276 3093 log.go:172] (0xc000962370) Data frame received for 3\nI0407 14:24:47.055288 3093 log.go:172] (0xc000558280) (3) Data frame handling\nI0407 14:24:47.056870 3093 log.go:172] (0xc000962370) Data frame received for 1\nI0407 14:24:47.056905 3093 log.go:172] (0xc000896640) (1) Data frame handling\nI0407 14:24:47.056942 3093 log.go:172] (0xc000896640) (1) Data frame sent\nI0407 14:24:47.056970 3093 log.go:172] (0xc000962370) (0xc000896640) Stream removed, broadcasting: 1\nI0407 14:24:47.056989 3093 log.go:172] (0xc000962370) Go away received\nI0407 14:24:47.057489 3093 log.go:172] (0xc000962370) (0xc000896640) Stream removed, broadcasting: 1\nI0407 14:24:47.057512 3093 log.go:172] (0xc000962370) (0xc000558280) Stream removed, broadcasting: 3\nI0407 14:24:47.057526 3093 log.go:172] (0xc000962370) (0xc000558320) Stream removed, broadcasting: 5\n" Apr 7 14:24:47.061: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 7 14:24:47.061: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 7 14:24:47.065: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 7 14:24:47.065: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 7 14:24:47.065: INFO: Waiting for pod ss-2 to enter 
Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 7 14:24:47.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1476 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 7 14:24:47.297: INFO: stderr: "I0407 14:24:47.214793 3111 log.go:172] (0xc0009ee6e0) (0xc00067caa0) Create stream\nI0407 14:24:47.214842 3111 log.go:172] (0xc0009ee6e0) (0xc00067caa0) Stream added, broadcasting: 1\nI0407 14:24:47.219518 3111 log.go:172] (0xc0009ee6e0) Reply frame received for 1\nI0407 14:24:47.219571 3111 log.go:172] (0xc0009ee6e0) (0xc00067c1e0) Create stream\nI0407 14:24:47.219587 3111 log.go:172] (0xc0009ee6e0) (0xc00067c1e0) Stream added, broadcasting: 3\nI0407 14:24:47.220403 3111 log.go:172] (0xc0009ee6e0) Reply frame received for 3\nI0407 14:24:47.220443 3111 log.go:172] (0xc0009ee6e0) (0xc0001e6000) Create stream\nI0407 14:24:47.220462 3111 log.go:172] (0xc0009ee6e0) (0xc0001e6000) Stream added, broadcasting: 5\nI0407 14:24:47.221642 3111 log.go:172] (0xc0009ee6e0) Reply frame received for 5\nI0407 14:24:47.290982 3111 log.go:172] (0xc0009ee6e0) Data frame received for 5\nI0407 14:24:47.291031 3111 log.go:172] (0xc0001e6000) (5) Data frame handling\nI0407 14:24:47.291049 3111 log.go:172] (0xc0001e6000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0407 14:24:47.291072 3111 log.go:172] (0xc0009ee6e0) Data frame received for 3\nI0407 14:24:47.291080 3111 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0407 14:24:47.291085 3111 log.go:172] (0xc00067c1e0) (3) Data frame sent\nI0407 14:24:47.291092 3111 log.go:172] (0xc0009ee6e0) Data frame received for 3\nI0407 14:24:47.291097 3111 log.go:172] (0xc00067c1e0) (3) Data frame handling\nI0407 14:24:47.291344 3111 log.go:172] (0xc0009ee6e0) Data frame received for 5\nI0407 14:24:47.291374 3111 log.go:172] (0xc0001e6000) (5) Data frame handling\nI0407 
14:24:47.292731 3111 log.go:172] (0xc0009ee6e0) Data frame received for 1\nI0407 14:24:47.292754 3111 log.go:172] (0xc00067caa0) (1) Data frame handling\nI0407 14:24:47.292768 3111 log.go:172] (0xc00067caa0) (1) Data frame sent\nI0407 14:24:47.292787 3111 log.go:172] (0xc0009ee6e0) (0xc00067caa0) Stream removed, broadcasting: 1\nI0407 14:24:47.292807 3111 log.go:172] (0xc0009ee6e0) Go away received\nI0407 14:24:47.293290 3111 log.go:172] (0xc0009ee6e0) (0xc00067caa0) Stream removed, broadcasting: 1\nI0407 14:24:47.293329 3111 log.go:172] (0xc0009ee6e0) (0xc00067c1e0) Stream removed, broadcasting: 3\nI0407 14:24:47.293346 3111 log.go:172] (0xc0009ee6e0) (0xc0001e6000) Stream removed, broadcasting: 5\n" Apr 7 14:24:47.297: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 7 14:24:47.297: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 7 14:24:47.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1476 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 7 14:24:47.530: INFO: stderr: "I0407 14:24:47.431126 3131 log.go:172] (0xc0009fe6e0) (0xc000688a00) Create stream\nI0407 14:24:47.431184 3131 log.go:172] (0xc0009fe6e0) (0xc000688a00) Stream added, broadcasting: 1\nI0407 14:24:47.434744 3131 log.go:172] (0xc0009fe6e0) Reply frame received for 1\nI0407 14:24:47.434775 3131 log.go:172] (0xc0009fe6e0) (0xc000688280) Create stream\nI0407 14:24:47.434785 3131 log.go:172] (0xc0009fe6e0) (0xc000688280) Stream added, broadcasting: 3\nI0407 14:24:47.435815 3131 log.go:172] (0xc0009fe6e0) Reply frame received for 3\nI0407 14:24:47.435850 3131 log.go:172] (0xc0009fe6e0) (0xc000768000) Create stream\nI0407 14:24:47.435859 3131 log.go:172] (0xc0009fe6e0) (0xc000768000) Stream added, broadcasting: 5\nI0407 14:24:47.436953 3131 log.go:172] (0xc0009fe6e0) Reply frame received for 5\nI0407 
14:24:47.497729 3131 log.go:172] (0xc0009fe6e0) Data frame received for 5\nI0407 14:24:47.497760 3131 log.go:172] (0xc000768000) (5) Data frame handling\nI0407 14:24:47.497780 3131 log.go:172] (0xc000768000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0407 14:24:47.524705 3131 log.go:172] (0xc0009fe6e0) Data frame received for 3\nI0407 14:24:47.524718 3131 log.go:172] (0xc000688280) (3) Data frame handling\nI0407 14:24:47.524730 3131 log.go:172] (0xc000688280) (3) Data frame sent\nI0407 14:24:47.524734 3131 log.go:172] (0xc0009fe6e0) Data frame received for 3\nI0407 14:24:47.524738 3131 log.go:172] (0xc000688280) (3) Data frame handling\nI0407 14:24:47.524790 3131 log.go:172] (0xc0009fe6e0) Data frame received for 5\nI0407 14:24:47.524824 3131 log.go:172] (0xc000768000) (5) Data frame handling\nI0407 14:24:47.526355 3131 log.go:172] (0xc0009fe6e0) Data frame received for 1\nI0407 14:24:47.526375 3131 log.go:172] (0xc000688a00) (1) Data frame handling\nI0407 14:24:47.526392 3131 log.go:172] (0xc000688a00) (1) Data frame sent\nI0407 14:24:47.526407 3131 log.go:172] (0xc0009fe6e0) (0xc000688a00) Stream removed, broadcasting: 1\nI0407 14:24:47.526423 3131 log.go:172] (0xc0009fe6e0) Go away received\nI0407 14:24:47.526665 3131 log.go:172] (0xc0009fe6e0) (0xc000688a00) Stream removed, broadcasting: 1\nI0407 14:24:47.526679 3131 log.go:172] (0xc0009fe6e0) (0xc000688280) Stream removed, broadcasting: 3\nI0407 14:24:47.526684 3131 log.go:172] (0xc0009fe6e0) (0xc000768000) Stream removed, broadcasting: 5\n" Apr 7 14:24:47.530: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 7 14:24:47.530: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 7 14:24:47.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1476 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' 
Apr 7 14:24:47.779: INFO: stderr: "I0407 14:24:47.680005 3152 log.go:172] (0xc000a2e580) (0xc00078ab40) Create stream\nI0407 14:24:47.680065 3152 log.go:172] (0xc000a2e580) (0xc00078ab40) Stream added, broadcasting: 1\nI0407 14:24:47.682219 3152 log.go:172] (0xc000a2e580) Reply frame received for 1\nI0407 14:24:47.682271 3152 log.go:172] (0xc000a2e580) (0xc0009ec000) Create stream\nI0407 14:24:47.682289 3152 log.go:172] (0xc000a2e580) (0xc0009ec000) Stream added, broadcasting: 3\nI0407 14:24:47.683173 3152 log.go:172] (0xc000a2e580) Reply frame received for 3\nI0407 14:24:47.683206 3152 log.go:172] (0xc000a2e580) (0xc00078abe0) Create stream\nI0407 14:24:47.683215 3152 log.go:172] (0xc000a2e580) (0xc00078abe0) Stream added, broadcasting: 5\nI0407 14:24:47.684019 3152 log.go:172] (0xc000a2e580) Reply frame received for 5\nI0407 14:24:47.746088 3152 log.go:172] (0xc000a2e580) Data frame received for 5\nI0407 14:24:47.746113 3152 log.go:172] (0xc00078abe0) (5) Data frame handling\nI0407 14:24:47.746131 3152 log.go:172] (0xc00078abe0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0407 14:24:47.770967 3152 log.go:172] (0xc000a2e580) Data frame received for 3\nI0407 14:24:47.771023 3152 log.go:172] (0xc0009ec000) (3) Data frame handling\nI0407 14:24:47.771052 3152 log.go:172] (0xc0009ec000) (3) Data frame sent\nI0407 14:24:47.771088 3152 log.go:172] (0xc000a2e580) Data frame received for 5\nI0407 14:24:47.771100 3152 log.go:172] (0xc00078abe0) (5) Data frame handling\nI0407 14:24:47.771139 3152 log.go:172] (0xc000a2e580) Data frame received for 3\nI0407 14:24:47.771171 3152 log.go:172] (0xc0009ec000) (3) Data frame handling\nI0407 14:24:47.772745 3152 log.go:172] (0xc000a2e580) Data frame received for 1\nI0407 14:24:47.772775 3152 log.go:172] (0xc00078ab40) (1) Data frame handling\nI0407 14:24:47.772804 3152 log.go:172] (0xc00078ab40) (1) Data frame sent\nI0407 14:24:47.772934 3152 log.go:172] (0xc000a2e580) (0xc00078ab40) Stream removed, 
broadcasting: 1\nI0407 14:24:47.772974 3152 log.go:172] (0xc000a2e580) Go away received\nI0407 14:24:47.775060 3152 log.go:172] (0xc000a2e580) (0xc00078ab40) Stream removed, broadcasting: 1\nI0407 14:24:47.775098 3152 log.go:172] (0xc000a2e580) (0xc0009ec000) Stream removed, broadcasting: 3\nI0407 14:24:47.775198 3152 log.go:172] (0xc000a2e580) (0xc00078abe0) Stream removed, broadcasting: 5\n" Apr 7 14:24:47.779: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 7 14:24:47.779: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 7 14:24:47.779: INFO: Waiting for statefulset status.replicas updated to 0 Apr 7 14:24:47.782: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 7 14:24:57.790: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 7 14:24:57.790: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 7 14:24:57.790: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 7 14:24:57.800: INFO: POD NODE PHASE GRACE CONDITIONS Apr 7 14:24:57.800: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:13 +0000 UTC }] Apr 7 14:24:57.800: INFO: ss-1 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:36 +0000 UTC }] Apr 7 14:24:57.800: INFO: ss-2 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:36 +0000 UTC }] Apr 7 14:24:57.800: INFO: Apr 7 14:24:57.800: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 7 14:24:58.880: INFO: POD NODE PHASE GRACE CONDITIONS Apr 7 14:24:58.880: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:13 +0000 UTC }] Apr 7 14:24:58.880: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:36 +0000 UTC }] Apr 7 14:24:58.880: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:36 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:36 +0000 UTC }] Apr 7 14:24:58.880: INFO: Apr 7 14:24:58.880: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 7 14:24:59.886: INFO: POD NODE PHASE GRACE CONDITIONS Apr 7 14:24:59.886: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:13 +0000 UTC }] Apr 7 14:24:59.886: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:36 +0000 UTC }] Apr 7 14:24:59.886: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-07 14:24:36 +0000 UTC }] Apr 7 
14:24:59.886: INFO: Apr 7 14:24:59.886: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 7 14:25:00.890: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.910207957s Apr 7 14:25:01.894: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.90600797s Apr 7 14:25:02.899: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.90163651s Apr 7 14:25:03.904: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.896884573s Apr 7 14:25:04.908: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.892229783s Apr 7 14:25:05.912: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.888028055s Apr 7 14:25:06.917: INFO: Verifying statefulset ss doesn't scale past 0 for another 883.336306ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-1476 Apr 7 14:25:07.921: INFO: Scaling statefulset ss to 0 Apr 7 14:25:07.931: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 7 14:25:07.934: INFO: Deleting all statefulset in ns statefulset-1476 Apr 7 14:25:07.936: INFO: Scaling statefulset ss to 0 Apr 7 14:25:07.945: INFO: Waiting for statefulset status.replicas updated to 0 Apr 7 14:25:07.948: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:25:07.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1476" for this suite. 
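The repeated "Verifying statefulset ss doesn't scale past 0 for another …s" lines above come from a poll loop that confirms the replica count stays at or below a bound for a fixed window before the test proceeds. A minimal standalone sketch of that pattern (function and parameter names are hypothetical, not the e2e framework's actual API):

```python
import time

def verify_does_not_scale_past(get_replicas, limit, window=10.0, interval=1.0):
    """Fail fast if get_replicas() ever exceeds `limit` during `window` seconds."""
    deadline = time.monotonic() + window
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return True  # the bound held for the entire window
        count = get_replicas()
        if count > limit:
            raise AssertionError(f"statefulset scaled past {limit}, at {count}")
        print(f"Verifying statefulset doesn't scale past {limit} "
              f"for another {remaining:.9f}s")
        time.sleep(min(interval, remaining))
```

In the real test, `get_replicas` would be an API-server lookup of the StatefulSet's pod count; here it is just a callable so the loop logic is visible.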
Apr 7 14:25:13.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:25:14.046: INFO: namespace statefulset-1476 deletion completed in 6.087068085s • [SLOW TEST:60.307 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:25:14.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-4fzg STEP: Creating a pod to test atomic-volume-subpath Apr 7 14:25:14.139: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-4fzg" in namespace "subpath-8135" to be "success or failure" Apr 7 14:25:14.155: INFO: Pod "pod-subpath-test-downwardapi-4fzg": 
Phase="Pending", Reason="", readiness=false. Elapsed: 16.680208ms Apr 7 14:25:16.160: INFO: Pod "pod-subpath-test-downwardapi-4fzg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021506692s Apr 7 14:25:18.164: INFO: Pod "pod-subpath-test-downwardapi-4fzg": Phase="Running", Reason="", readiness=true. Elapsed: 4.025665754s Apr 7 14:25:20.168: INFO: Pod "pod-subpath-test-downwardapi-4fzg": Phase="Running", Reason="", readiness=true. Elapsed: 6.029880799s Apr 7 14:25:22.172: INFO: Pod "pod-subpath-test-downwardapi-4fzg": Phase="Running", Reason="", readiness=true. Elapsed: 8.033524846s Apr 7 14:25:24.176: INFO: Pod "pod-subpath-test-downwardapi-4fzg": Phase="Running", Reason="", readiness=true. Elapsed: 10.037533568s Apr 7 14:25:26.180: INFO: Pod "pod-subpath-test-downwardapi-4fzg": Phase="Running", Reason="", readiness=true. Elapsed: 12.041761396s Apr 7 14:25:28.185: INFO: Pod "pod-subpath-test-downwardapi-4fzg": Phase="Running", Reason="", readiness=true. Elapsed: 14.04631283s Apr 7 14:25:30.190: INFO: Pod "pod-subpath-test-downwardapi-4fzg": Phase="Running", Reason="", readiness=true. Elapsed: 16.05095994s Apr 7 14:25:32.194: INFO: Pod "pod-subpath-test-downwardapi-4fzg": Phase="Running", Reason="", readiness=true. Elapsed: 18.055473636s Apr 7 14:25:34.199: INFO: Pod "pod-subpath-test-downwardapi-4fzg": Phase="Running", Reason="", readiness=true. Elapsed: 20.060138883s Apr 7 14:25:36.203: INFO: Pod "pod-subpath-test-downwardapi-4fzg": Phase="Running", Reason="", readiness=true. Elapsed: 22.064337343s Apr 7 14:25:38.207: INFO: Pod "pod-subpath-test-downwardapi-4fzg": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.068579107s STEP: Saw pod success Apr 7 14:25:38.207: INFO: Pod "pod-subpath-test-downwardapi-4fzg" satisfied condition "success or failure" Apr 7 14:25:38.211: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-4fzg container test-container-subpath-downwardapi-4fzg: STEP: delete the pod Apr 7 14:25:38.231: INFO: Waiting for pod pod-subpath-test-downwardapi-4fzg to disappear Apr 7 14:25:38.236: INFO: Pod pod-subpath-test-downwardapi-4fzg no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-4fzg Apr 7 14:25:38.236: INFO: Deleting pod "pod-subpath-test-downwardapi-4fzg" in namespace "subpath-8135" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:25:38.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8135" for this suite. Apr 7 14:25:44.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 7 14:25:44.323: INFO: namespace subpath-8135 deletion completed in 6.081513631s • [SLOW TEST:30.276 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 7 14:25:44.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 7 14:25:44.922: INFO: Pod name wrapped-volume-race-09dc7d9a-783d-4413-9605-25ff20734325: Found 0 pods out of 5 Apr 7 14:25:49.930: INFO: Pod name wrapped-volume-race-09dc7d9a-783d-4413-9605-25ff20734325: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-09dc7d9a-783d-4413-9605-25ff20734325 in namespace emptydir-wrapper-7196, will wait for the garbage collector to delete the pods Apr 7 14:26:04.012: INFO: Deleting ReplicationController wrapped-volume-race-09dc7d9a-783d-4413-9605-25ff20734325 took: 7.381713ms Apr 7 14:26:04.312: INFO: Terminating ReplicationController wrapped-volume-race-09dc7d9a-783d-4413-9605-25ff20734325 pods took: 300.22612ms STEP: Creating RC which spawns configmap-volume pods Apr 7 14:26:43.266: INFO: Pod name wrapped-volume-race-2d631478-0b23-4049-9df9-0eb2cf9ddef9: Found 0 pods out of 5 Apr 7 14:26:48.296: INFO: Pod name wrapped-volume-race-2d631478-0b23-4049-9df9-0eb2cf9ddef9: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-2d631478-0b23-4049-9df9-0eb2cf9ddef9 in namespace emptydir-wrapper-7196, will wait for the garbage collector to delete the pods Apr 7 14:27:02.388: INFO: Deleting ReplicationController wrapped-volume-race-2d631478-0b23-4049-9df9-0eb2cf9ddef9 took: 16.709397ms Apr 7 14:27:02.688: INFO: Terminating 
ReplicationController wrapped-volume-race-2d631478-0b23-4049-9df9-0eb2cf9ddef9 pods took: 300.232701ms STEP: Creating RC which spawns configmap-volume pods Apr 7 14:27:42.438: INFO: Pod name wrapped-volume-race-aa37a9ef-ab5a-46ff-a795-fee10aecfc88: Found 0 pods out of 5 Apr 7 14:27:47.447: INFO: Pod name wrapped-volume-race-aa37a9ef-ab5a-46ff-a795-fee10aecfc88: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-aa37a9ef-ab5a-46ff-a795-fee10aecfc88 in namespace emptydir-wrapper-7196, will wait for the garbage collector to delete the pods Apr 7 14:28:01.561: INFO: Deleting ReplicationController wrapped-volume-race-aa37a9ef-ab5a-46ff-a795-fee10aecfc88 took: 39.022461ms Apr 7 14:28:01.861: INFO: Terminating ReplicationController wrapped-volume-race-aa37a9ef-ab5a-46ff-a795-fee10aecfc88 pods took: 300.523295ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 7 14:28:42.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7196" for this suite. 
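The "Found N pods out of 5" lines in the EmptyDir wrapper-volume race test come from polling the ReplicationController's pods until all replicas exist. A sketch of that wait, under the assumption that `list_pods` stands in for a label-selector pod listing (names are illustrative, not the framework's API):

```python
import time

def wait_for_pod_count(list_pods, want, timeout=30.0, interval=1.0):
    """Poll until list_pods() returns at least `want` pods."""
    start = time.monotonic()
    while True:
        found = len(list_pods())
        print(f"Found {found} pods out of {want}")
        if found >= want:
            return found
        if time.monotonic() - start >= timeout:
            raise TimeoutError(f"only {found}/{want} pods after {timeout}s")
        time.sleep(interval)
```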
Apr 7 14:28:50.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 14:28:51.074: INFO: namespace emptydir-wrapper-7196 deletion completed in 8.096745214s
• [SLOW TEST:186.750 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 14:28:51.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-00d210b0-1b9e-4132-ba39-70d30fb24178
STEP: Creating a pod to test consume configMaps
Apr 7 14:28:51.152: INFO: Waiting up to 5m0s for pod "pod-configmaps-092597b9-1934-4ad1-a298-cac3945647fb" in namespace "configmap-3105" to be "success or failure"
Apr 7 14:28:51.168: INFO: Pod "pod-configmaps-092597b9-1934-4ad1-a298-cac3945647fb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.705355ms
Apr 7 14:28:53.172: INFO: Pod "pod-configmaps-092597b9-1934-4ad1-a298-cac3945647fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020105695s
Apr 7 14:28:55.176: INFO: Pod "pod-configmaps-092597b9-1934-4ad1-a298-cac3945647fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024348902s
STEP: Saw pod success
Apr 7 14:28:55.177: INFO: Pod "pod-configmaps-092597b9-1934-4ad1-a298-cac3945647fb" satisfied condition "success or failure"
Apr 7 14:28:55.179: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-092597b9-1934-4ad1-a298-cac3945647fb container configmap-volume-test:
STEP: delete the pod
Apr 7 14:28:55.249: INFO: Waiting for pod pod-configmaps-092597b9-1934-4ad1-a298-cac3945647fb to disappear
Apr 7 14:28:55.258: INFO: Pod pod-configmaps-092597b9-1934-4ad1-a298-cac3945647fb no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 14:28:55.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3105" for this suite.
Apr 7 14:29:01.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 14:29:01.355: INFO: namespace configmap-3105 deletion completed in 6.094057283s
• [SLOW TEST:10.281 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 14:29:01.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-1643/secret-test-e7c5968e-1462-48c1-9afd-602fbf4fa96f
STEP: Creating a pod to test consume secrets
Apr 7 14:29:01.419: INFO: Waiting up to 5m0s for pod "pod-configmaps-03225eea-8965-4548-b7f7-1d08acf776a4" in namespace "secrets-1643" to be "success or failure"
Apr 7 14:29:01.423: INFO: Pod "pod-configmaps-03225eea-8965-4548-b7f7-1d08acf776a4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.26923ms
Apr 7 14:29:03.428: INFO: Pod "pod-configmaps-03225eea-8965-4548-b7f7-1d08acf776a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008056475s
Apr 7 14:29:05.432: INFO: Pod "pod-configmaps-03225eea-8965-4548-b7f7-1d08acf776a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012435995s
STEP: Saw pod success
Apr 7 14:29:05.432: INFO: Pod "pod-configmaps-03225eea-8965-4548-b7f7-1d08acf776a4" satisfied condition "success or failure"
Apr 7 14:29:05.436: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-03225eea-8965-4548-b7f7-1d08acf776a4 container env-test:
STEP: delete the pod
Apr 7 14:29:05.460: INFO: Waiting for pod pod-configmaps-03225eea-8965-4548-b7f7-1d08acf776a4 to disappear
Apr 7 14:29:05.481: INFO: Pod pod-configmaps-03225eea-8965-4548-b7f7-1d08acf776a4 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 14:29:05.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1643" for this suite.
Apr 7 14:29:11.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 14:29:11.564: INFO: namespace secrets-1643 deletion completed in 6.07975055s
• [SLOW TEST:10.209 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 14:29:11.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 7 14:29:11.617: INFO: Waiting up to 5m0s for pod "pod-b5fce1bf-3806-4c32-840e-2018c3d23595" in namespace "emptydir-7423" to be "success or failure"
Apr 7 14:29:11.627: INFO: Pod "pod-b5fce1bf-3806-4c32-840e-2018c3d23595": Phase="Pending", Reason="", readiness=false. Elapsed: 9.798764ms
Apr 7 14:29:13.631: INFO: Pod "pod-b5fce1bf-3806-4c32-840e-2018c3d23595": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013800608s
Apr 7 14:29:15.635: INFO: Pod "pod-b5fce1bf-3806-4c32-840e-2018c3d23595": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018010092s
STEP: Saw pod success
Apr 7 14:29:15.635: INFO: Pod "pod-b5fce1bf-3806-4c32-840e-2018c3d23595" satisfied condition "success or failure"
Apr 7 14:29:15.638: INFO: Trying to get logs from node iruya-worker2 pod pod-b5fce1bf-3806-4c32-840e-2018c3d23595 container test-container:
STEP: delete the pod
Apr 7 14:29:15.672: INFO: Waiting for pod pod-b5fce1bf-3806-4c32-840e-2018c3d23595 to disappear
Apr 7 14:29:15.687: INFO: Pod pod-b5fce1bf-3806-4c32-840e-2018c3d23595 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 14:29:15.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7423" for this suite.
Apr 7 14:29:21.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 14:29:21.774: INFO: namespace emptydir-7423 deletion completed in 6.083981235s
• [SLOW TEST:10.210 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 14:29:21.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-9927
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-9927
STEP: Deleting pre-stop pod
Apr 7 14:29:34.921: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 14:29:34.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-9927" for this suite.
Apr 7 14:30:02.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 14:30:03.044: INFO: namespace prestop-9927 deletion completed in 28.10525275s
• [SLOW TEST:41.269 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 14:30:03.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-62989210-4f52-4318-a992-f00387d7eb3b
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-62989210-4f52-4318-a992-f00387d7eb3b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 14:31:29.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4689" for this suite.
Apr 7 14:31:51.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 14:31:51.699: INFO: namespace projected-4689 deletion completed in 22.099020697s
• [SLOW TEST:108.654 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 14:31:51.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 7 14:32:13.844: INFO: Container started at 2020-04-07 14:31:53 +0000 UTC, pod became ready at 2020-04-07 14:32:13 +0000 UTC
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 14:32:13.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5590" for this suite.
Apr 7 14:32:35.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 14:32:35.971: INFO: namespace container-probe-5590 deletion completed in 22.12360303s
• [SLOW TEST:44.272 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 14:32:35.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 7 14:32:40.594: INFO: Successfully updated pod "annotationupdatea3b9730d-96ff-4f81-a330-77c2b5fc49ab"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 14:32:42.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7679" for this suite.
Apr 7 14:33:04.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 14:33:04.719: INFO: namespace downward-api-7679 deletion completed in 22.087722337s
• [SLOW TEST:28.747 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 14:33:04.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 7 14:33:04.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5193'
Apr 7 14:33:04.864: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 7 14:33:04.864: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Apr 7 14:33:04.891: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-mbcww]
Apr 7 14:33:04.891: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-mbcww" in namespace "kubectl-5193" to be "running and ready"
Apr 7 14:33:04.940: INFO: Pod "e2e-test-nginx-rc-mbcww": Phase="Pending", Reason="", readiness=false. Elapsed: 49.283292ms
Apr 7 14:33:06.945: INFO: Pod "e2e-test-nginx-rc-mbcww": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053984022s
Apr 7 14:33:08.949: INFO: Pod "e2e-test-nginx-rc-mbcww": Phase="Running", Reason="", readiness=true. Elapsed: 4.05825459s
Apr 7 14:33:08.949: INFO: Pod "e2e-test-nginx-rc-mbcww" satisfied condition "running and ready"
Apr 7 14:33:08.949: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-mbcww]
Apr 7 14:33:08.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-5193'
Apr 7 14:33:09.057: INFO: stderr: ""
Apr 7 14:33:09.057: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Apr 7 14:33:09.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5193'
Apr 7 14:33:09.166: INFO: stderr: ""
Apr 7 14:33:09.166: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 14:33:09.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5193" for this suite.
Apr 7 14:33:31.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 14:33:31.262: INFO: namespace kubectl-5193 deletion completed in 22.092977179s
• [SLOW TEST:26.542 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 14:33:31.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 7 14:33:31.345: INFO: Waiting up to 5m0s for pod "downward-api-857407f0-a714-4424-9cb2-7db7df597e36" in namespace "downward-api-1463" to be "success or failure"
Apr 7 14:33:31.349: INFO: Pod "downward-api-857407f0-a714-4424-9cb2-7db7df597e36": Phase="Pending", Reason="", readiness=false. Elapsed: 4.53601ms
Apr 7 14:33:33.353: INFO: Pod "downward-api-857407f0-a714-4424-9cb2-7db7df597e36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008551066s
Apr 7 14:33:35.357: INFO: Pod "downward-api-857407f0-a714-4424-9cb2-7db7df597e36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012673693s
STEP: Saw pod success
Apr 7 14:33:35.357: INFO: Pod "downward-api-857407f0-a714-4424-9cb2-7db7df597e36" satisfied condition "success or failure"
Apr 7 14:33:35.360: INFO: Trying to get logs from node iruya-worker2 pod downward-api-857407f0-a714-4424-9cb2-7db7df597e36 container dapi-container:
STEP: delete the pod
Apr 7 14:33:35.391: INFO: Waiting for pod downward-api-857407f0-a714-4424-9cb2-7db7df597e36 to disappear
Apr 7 14:33:35.408: INFO: Pod downward-api-857407f0-a714-4424-9cb2-7db7df597e36 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 14:33:35.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1463" for this suite.
Apr 7 14:33:41.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 14:33:41.507: INFO: namespace downward-api-1463 deletion completed in 6.095858829s
• [SLOW TEST:10.246 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 14:33:41.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-b3165598-ef81-477f-a80a-78a681d81e00
STEP: Creating configMap with name cm-test-opt-upd-607932d0-3e4b-43e1-b26c-96e8aacfc5df
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-b3165598-ef81-477f-a80a-78a681d81e00
STEP: Updating configmap cm-test-opt-upd-607932d0-3e4b-43e1-b26c-96e8aacfc5df
STEP: Creating configMap with name cm-test-opt-create-3bb6210b-f1a4-4167-a993-53bfc9e91589
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 14:34:58.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1192" for this suite.
Apr 7 14:35:20.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 14:35:20.174: INFO: namespace projected-1192 deletion completed in 22.102416564s
• [SLOW TEST:98.666 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 14:35:20.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-a6e14308-0a47-4638-9b45-a3ce005049c9
STEP: Creating a pod to test consume configMaps
Apr 7 14:35:20.260: INFO: Waiting up to 5m0s for pod "pod-configmaps-bbe81262-8442-4aa0-9580-080e32e1b800" in namespace "configmap-8389" to be "success or failure"
Apr 7 14:35:20.275: INFO: Pod "pod-configmaps-bbe81262-8442-4aa0-9580-080e32e1b800": Phase="Pending", Reason="", readiness=false. Elapsed: 15.375135ms
Apr 7 14:35:22.289: INFO: Pod "pod-configmaps-bbe81262-8442-4aa0-9580-080e32e1b800": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029579235s
Apr 7 14:35:24.294: INFO: Pod "pod-configmaps-bbe81262-8442-4aa0-9580-080e32e1b800": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033868569s
STEP: Saw pod success
Apr 7 14:35:24.294: INFO: Pod "pod-configmaps-bbe81262-8442-4aa0-9580-080e32e1b800" satisfied condition "success or failure"
Apr 7 14:35:24.297: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-bbe81262-8442-4aa0-9580-080e32e1b800 container configmap-volume-test:
STEP: delete the pod
Apr 7 14:35:24.316: INFO: Waiting for pod pod-configmaps-bbe81262-8442-4aa0-9580-080e32e1b800 to disappear
Apr 7 14:35:24.320: INFO: Pod pod-configmaps-bbe81262-8442-4aa0-9580-080e32e1b800 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 14:35:24.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8389" for this suite.
Apr 7 14:35:30.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 14:35:30.435: INFO: namespace configmap-8389 deletion completed in 6.111859516s
• [SLOW TEST:10.261 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 14:35:30.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 14:35:34.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3685" for this suite.
Apr 7 14:35:40.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 14:35:40.715: INFO: namespace emptydir-wrapper-3685 deletion completed in 6.125822472s
• [SLOW TEST:10.280 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 7 14:35:40.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 7 14:35:40.832: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"8cd75960-62a4-4003-9c90-d72e745ec64b", Controller:(*bool)(0xc002912682), BlockOwnerDeletion:(*bool)(0xc002912683)}}
Apr 7 14:35:40.842: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"d08c608e-bf4e-40c2-a6fd-c35009243386", Controller:(*bool)(0xc00321b64a), BlockOwnerDeletion:(*bool)(0xc00321b64b)}}
Apr 7 14:35:40.860: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"1e10dd0f-0482-416d-b68b-5311df0a9a12", Controller:(*bool)(0xc00291282a), BlockOwnerDeletion:(*bool)(0xc00291282b)}}
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 7 14:35:45.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8844" for this suite.
Apr 7 14:35:51.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 7 14:35:51.995: INFO: namespace gc-8844 deletion completed in 6.100136388s
• [SLOW TEST:11.279 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
Apr 7 14:35:51.995: INFO: Running AfterSuite actions on all nodes
Apr 7 14:35:51.995: INFO: Running AfterSuite actions on node 1
Apr 7 14:35:51.995: INFO: Skipping dumping logs from cluster
Ran 215 of 4412 Specs in 6012.934 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS