I0427 12:55:44.093597 6 e2e.go:243] Starting e2e run "00b06017-6fc2-42fc-89bc-40cdf40a9134" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1587992143 - Will randomize all specs
Will run 215 of 4412 specs

Apr 27 12:55:44.275: INFO: >>> kubeConfig: /root/.kube/config
Apr 27 12:55:44.279: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 27 12:55:44.302: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 27 12:55:44.332: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 27 12:55:44.332: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 27 12:55:44.332: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 27 12:55:44.339: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 27 12:55:44.339: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 27 12:55:44.339: INFO: e2e test version: v1.15.11
Apr 27 12:55:44.340: INFO: kube-apiserver version: v1.15.7
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 12:55:44.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
Apr 27 12:55:44.434: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-krt5
STEP: Creating a pod to test atomic-volume-subpath
Apr 27 12:55:44.485: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-krt5" in namespace "subpath-2757" to be "success or failure"
Apr 27 12:55:44.494: INFO: Pod "pod-subpath-test-configmap-krt5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.937862ms
Apr 27 12:55:46.498: INFO: Pod "pod-subpath-test-configmap-krt5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013287566s
Apr 27 12:55:48.502: INFO: Pod "pod-subpath-test-configmap-krt5": Phase="Running", Reason="", readiness=true. Elapsed: 4.017103119s
Apr 27 12:55:50.507: INFO: Pod "pod-subpath-test-configmap-krt5": Phase="Running", Reason="", readiness=true. Elapsed: 6.021495323s
Apr 27 12:55:52.510: INFO: Pod "pod-subpath-test-configmap-krt5": Phase="Running", Reason="", readiness=true. Elapsed: 8.025186436s
Apr 27 12:55:54.515: INFO: Pod "pod-subpath-test-configmap-krt5": Phase="Running", Reason="", readiness=true. Elapsed: 10.029428727s
Apr 27 12:55:56.522: INFO: Pod "pod-subpath-test-configmap-krt5": Phase="Running", Reason="", readiness=true. Elapsed: 12.036871049s
Apr 27 12:55:58.526: INFO: Pod "pod-subpath-test-configmap-krt5": Phase="Running", Reason="", readiness=true. Elapsed: 14.041339418s
Apr 27 12:56:00.531: INFO: Pod "pod-subpath-test-configmap-krt5": Phase="Running", Reason="", readiness=true. Elapsed: 16.045763547s
Apr 27 12:56:02.535: INFO: Pod "pod-subpath-test-configmap-krt5": Phase="Running", Reason="", readiness=true. Elapsed: 18.049997184s
Apr 27 12:56:04.539: INFO: Pod "pod-subpath-test-configmap-krt5": Phase="Running", Reason="", readiness=true. Elapsed: 20.054333147s
Apr 27 12:56:06.544: INFO: Pod "pod-subpath-test-configmap-krt5": Phase="Running", Reason="", readiness=true. Elapsed: 22.058974873s
Apr 27 12:56:08.549: INFO: Pod "pod-subpath-test-configmap-krt5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.063872023s
STEP: Saw pod success
Apr 27 12:56:08.549: INFO: Pod "pod-subpath-test-configmap-krt5" satisfied condition "success or failure"
Apr 27 12:56:08.551: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-krt5 container test-container-subpath-configmap-krt5:
STEP: delete the pod
Apr 27 12:56:08.694: INFO: Waiting for pod pod-subpath-test-configmap-krt5 to disappear
Apr 27 12:56:08.697: INFO: Pod pod-subpath-test-configmap-krt5 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-krt5
Apr 27 12:56:08.697: INFO: Deleting pod "pod-subpath-test-configmap-krt5" in namespace "subpath-2757"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 12:56:08.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2757" for this suite.
Apr 27 12:56:14.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 12:56:14.802: INFO: namespace subpath-2757 deletion completed in 6.098466792s

• [SLOW TEST:30.462 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Secrets
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 12:56:14.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-1613/secret-test-69ab5d6d-2a4f-4c75-9422-ac42b0bf8e55
STEP: Creating a pod to test consume secrets
Apr 27 12:56:14.895: INFO: Waiting up to 5m0s for pod "pod-configmaps-b9eecd01-1b60-422c-b85a-5cc3e400d536" in namespace "secrets-1613" to be "success or failure"
Apr 27 12:56:14.901: INFO: Pod "pod-configmaps-b9eecd01-1b60-422c-b85a-5cc3e400d536": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068224ms
Apr 27 12:56:16.905: INFO: Pod "pod-configmaps-b9eecd01-1b60-422c-b85a-5cc3e400d536": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01038364s
Apr 27 12:56:18.909: INFO: Pod "pod-configmaps-b9eecd01-1b60-422c-b85a-5cc3e400d536": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013971613s
STEP: Saw pod success
Apr 27 12:56:18.909: INFO: Pod "pod-configmaps-b9eecd01-1b60-422c-b85a-5cc3e400d536" satisfied condition "success or failure"
Apr 27 12:56:18.911: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-b9eecd01-1b60-422c-b85a-5cc3e400d536 container env-test:
STEP: delete the pod
Apr 27 12:56:18.933: INFO: Waiting for pod pod-configmaps-b9eecd01-1b60-422c-b85a-5cc3e400d536 to disappear
Apr 27 12:56:18.938: INFO: Pod pod-configmaps-b9eecd01-1b60-422c-b85a-5cc3e400d536 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 12:56:18.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1613" for this suite.
Apr 27 12:56:24.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 12:56:25.070: INFO: namespace secrets-1613 deletion completed in 6.127511432s

• [SLOW TEST:10.268 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 12:56:25.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-451c2137-a837-48bd-a4e9-5177a6fe7161
STEP: Creating secret with name s-test-opt-upd-23ee4146-3109-4f4b-9a84-68d2a6560440
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-451c2137-a837-48bd-a4e9-5177a6fe7161
STEP: Updating secret s-test-opt-upd-23ee4146-3109-4f4b-9a84-68d2a6560440
STEP: Creating secret with name s-test-opt-create-b182a848-3cb8-4aaa-a553-f53ac203fc45
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 12:56:33.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5112" for this suite.
Apr 27 12:56:55.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 12:56:55.398: INFO: namespace secrets-5112 deletion completed in 22.106974087s

• [SLOW TEST:30.328 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 12:56:55.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Apr 27 12:56:55.446: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 27 12:56:55.580: INFO: Waiting for terminating namespaces to be deleted...
Apr 27 12:56:55.583: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Apr 27 12:56:55.588: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 27 12:56:55.588: INFO: Container kube-proxy ready: true, restart count 0
Apr 27 12:56:55.588: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 27 12:56:55.588: INFO: Container kindnet-cni ready: true, restart count 0
Apr 27 12:56:55.588: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Apr 27 12:56:55.593: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
Apr 27 12:56:55.593: INFO: Container coredns ready: true, restart count 0
Apr 27 12:56:55.593: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
Apr 27 12:56:55.593: INFO: Container coredns ready: true, restart count 0
Apr 27 12:56:55.593: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
Apr 27 12:56:55.593: INFO: Container kube-proxy ready: true, restart count 0
Apr 27 12:56:55.593: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
Apr 27 12:56:55.593: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Apr 27 12:56:55.655: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2
Apr 27 12:56:55.655: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2
Apr 27 12:56:55.655: INFO: Pod kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker
Apr 27 12:56:55.655: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2
Apr 27 12:56:55.655: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker
Apr 27 12:56:55.655: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-405f1c99-cc91-4373-b2a9-abeb8e7d89e2.1609ae81b10e6024], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8239/filler-pod-405f1c99-cc91-4373-b2a9-abeb8e7d89e2 to iruya-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-405f1c99-cc91-4373-b2a9-abeb8e7d89e2.1609ae81fc6c51d9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-405f1c99-cc91-4373-b2a9-abeb8e7d89e2.1609ae82369cdcbe], Reason = [Created], Message = [Created container filler-pod-405f1c99-cc91-4373-b2a9-abeb8e7d89e2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-405f1c99-cc91-4373-b2a9-abeb8e7d89e2.1609ae824c57b081], Reason = [Started], Message = [Started container filler-pod-405f1c99-cc91-4373-b2a9-abeb8e7d89e2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-869d4075-aa98-4384-87f2-8b0071b0a3fa.1609ae81b4a32613], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8239/filler-pod-869d4075-aa98-4384-87f2-8b0071b0a3fa to iruya-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-869d4075-aa98-4384-87f2-8b0071b0a3fa.1609ae823c65d987], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-869d4075-aa98-4384-87f2-8b0071b0a3fa.1609ae826861eaa1], Reason = [Created], Message = [Created container filler-pod-869d4075-aa98-4384-87f2-8b0071b0a3fa]
STEP: Considering event: Type = [Normal], Name = [filler-pod-869d4075-aa98-4384-87f2-8b0071b0a3fa.1609ae8277209414], Reason = [Started], Message = [Started container filler-pod-869d4075-aa98-4384-87f2-8b0071b0a3fa]
STEP: Considering event: Type = [Warning], Name = [additional-pod.1609ae82a3d27d93], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 12:57:00.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8239" for this suite.
Apr 27 12:57:06.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 12:57:06.914: INFO: namespace sched-pred-8239 deletion completed in 6.108312464s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:11.515 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 12:57:06.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 27 12:57:07.031: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8530f18-5b18-4995-a1b7-ce67d9bde2c4" in namespace "downward-api-381" to be "success or failure"
Apr 27 12:57:07.037: INFO: Pod "downwardapi-volume-a8530f18-5b18-4995-a1b7-ce67d9bde2c4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05471ms
Apr 27 12:57:09.040: INFO: Pod "downwardapi-volume-a8530f18-5b18-4995-a1b7-ce67d9bde2c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009471884s
Apr 27 12:57:11.045: INFO: Pod "downwardapi-volume-a8530f18-5b18-4995-a1b7-ce67d9bde2c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014115618s
STEP: Saw pod success
Apr 27 12:57:11.045: INFO: Pod "downwardapi-volume-a8530f18-5b18-4995-a1b7-ce67d9bde2c4" satisfied condition "success or failure"
Apr 27 12:57:11.048: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-a8530f18-5b18-4995-a1b7-ce67d9bde2c4 container client-container:
STEP: delete the pod
Apr 27 12:57:11.108: INFO: Waiting for pod downwardapi-volume-a8530f18-5b18-4995-a1b7-ce67d9bde2c4 to disappear
Apr 27 12:57:11.113: INFO: Pod downwardapi-volume-a8530f18-5b18-4995-a1b7-ce67d9bde2c4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 12:57:11.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-381" for this suite.
Apr 27 12:57:17.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 12:57:17.199: INFO: namespace downward-api-381 deletion completed in 6.082218701s

• [SLOW TEST:10.284 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 12:57:17.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 27 12:57:21.817: INFO: Successfully updated pod "labelsupdatef18506a8-f0bf-49f1-acb2-900895e59401"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 12:57:23.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5938" for this suite.
Apr 27 12:57:45.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 12:57:45.928: INFO: namespace projected-5938 deletion completed in 22.089371973s

• [SLOW TEST:28.729 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 12:57:45.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 27 12:57:45.971: INFO: Waiting up to 5m0s for pod "pod-6d9596b6-4c79-4b5b-a0a5-c2b77d8169b4" in namespace "emptydir-173" to be "success or failure"
Apr 27 12:57:45.989: INFO: Pod "pod-6d9596b6-4c79-4b5b-a0a5-c2b77d8169b4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.654134ms
Apr 27 12:57:47.993: INFO: Pod "pod-6d9596b6-4c79-4b5b-a0a5-c2b77d8169b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022301358s
Apr 27 12:57:49.998: INFO: Pod "pod-6d9596b6-4c79-4b5b-a0a5-c2b77d8169b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026720445s
STEP: Saw pod success
Apr 27 12:57:49.998: INFO: Pod "pod-6d9596b6-4c79-4b5b-a0a5-c2b77d8169b4" satisfied condition "success or failure"
Apr 27 12:57:50.001: INFO: Trying to get logs from node iruya-worker pod pod-6d9596b6-4c79-4b5b-a0a5-c2b77d8169b4 container test-container:
STEP: delete the pod
Apr 27 12:57:50.026: INFO: Waiting for pod pod-6d9596b6-4c79-4b5b-a0a5-c2b77d8169b4 to disappear
Apr 27 12:57:50.045: INFO: Pod pod-6d9596b6-4c79-4b5b-a0a5-c2b77d8169b4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 12:57:50.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-173" for this suite.
Apr 27 12:57:56.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 12:57:56.166: INFO: namespace emptydir-173 deletion completed in 6.118208357s

• [SLOW TEST:10.238 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 12:57:56.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Apr 27 12:57:56.760: INFO: Pod name wrapped-volume-race-4f95578a-4451-4d9d-a861-d0fff182d9d2: Found 0 pods out of 5
Apr 27 12:58:01.781: INFO: Pod name wrapped-volume-race-4f95578a-4451-4d9d-a861-d0fff182d9d2: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4f95578a-4451-4d9d-a861-d0fff182d9d2 in namespace emptydir-wrapper-9409, will wait for the garbage collector to delete the pods
Apr 27 12:58:13.872: INFO: Deleting ReplicationController wrapped-volume-race-4f95578a-4451-4d9d-a861-d0fff182d9d2 took: 8.500756ms
Apr 27 12:58:14.173: INFO: Terminating ReplicationController wrapped-volume-race-4f95578a-4451-4d9d-a861-d0fff182d9d2 pods took: 300.242551ms
STEP: Creating RC which spawns configmap-volume pods
Apr 27 12:58:53.341: INFO: Pod name wrapped-volume-race-0c2f049b-c2f5-43d4-a359-f07cfab25ed5: Found 0 pods out of 5
Apr 27 12:58:58.348: INFO: Pod name wrapped-volume-race-0c2f049b-c2f5-43d4-a359-f07cfab25ed5: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-0c2f049b-c2f5-43d4-a359-f07cfab25ed5 in namespace emptydir-wrapper-9409, will wait for the garbage collector to delete the pods
Apr 27 12:59:12.518: INFO: Deleting ReplicationController wrapped-volume-race-0c2f049b-c2f5-43d4-a359-f07cfab25ed5 took: 6.722906ms
Apr 27 12:59:13.018: INFO: Terminating ReplicationController wrapped-volume-race-0c2f049b-c2f5-43d4-a359-f07cfab25ed5 pods took: 500.26382ms
STEP: Creating RC which spawns configmap-volume pods
Apr 27 12:59:52.695: INFO: Pod name wrapped-volume-race-6444d1a9-a34f-4401-8f25-117dbad8ca06: Found 0 pods out of 5
Apr 27 12:59:57.706: INFO: Pod name wrapped-volume-race-6444d1a9-a34f-4401-8f25-117dbad8ca06: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-6444d1a9-a34f-4401-8f25-117dbad8ca06 in namespace emptydir-wrapper-9409, will wait for the garbage collector to delete the pods
Apr 27 13:00:11.793: INFO: Deleting ReplicationController wrapped-volume-race-6444d1a9-a34f-4401-8f25-117dbad8ca06 took: 10.680724ms
Apr 27 13:00:12.094: INFO: Terminating ReplicationController wrapped-volume-race-6444d1a9-a34f-4401-8f25-117dbad8ca06 pods took: 300.221462ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:00:52.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9409" for this suite.
Apr 27 13:01:00.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:01:01.053: INFO: namespace emptydir-wrapper-9409 deletion completed in 8.089272208s

• [SLOW TEST:184.886 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:01:01.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 27 13:01:01.148: INFO: Waiting up to 5m0s for pod "pod-ddba9d2c-da04-408d-a194-57ffa807b0c1" in namespace "emptydir-8778" to be "success or failure"
Apr 27 13:01:01.155: INFO: Pod "pod-ddba9d2c-da04-408d-a194-57ffa807b0c1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.675549ms
Apr 27 13:01:03.159: INFO: Pod "pod-ddba9d2c-da04-408d-a194-57ffa807b0c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011056221s
Apr 27 13:01:05.164: INFO: Pod "pod-ddba9d2c-da04-408d-a194-57ffa807b0c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015294205s
STEP: Saw pod success
Apr 27 13:01:05.164: INFO: Pod "pod-ddba9d2c-da04-408d-a194-57ffa807b0c1" satisfied condition "success or failure"
Apr 27 13:01:05.167: INFO: Trying to get logs from node iruya-worker2 pod pod-ddba9d2c-da04-408d-a194-57ffa807b0c1 container test-container:
STEP: delete the pod
Apr 27 13:01:05.285: INFO: Waiting for pod pod-ddba9d2c-da04-408d-a194-57ffa807b0c1 to disappear
Apr 27 13:01:05.298: INFO: Pod pod-ddba9d2c-da04-408d-a194-57ffa807b0c1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:01:05.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8778" for this suite.
Apr 27 13:01:11.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:01:11.392: INFO: namespace emptydir-8778 deletion completed in 6.089235153s • [SLOW TEST:10.338 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:01:11.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-1430 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Apr 27 13:01:11.519: INFO: Found 0 stateful pods, waiting for 3 Apr 27 13:01:21.535: INFO: Waiting for pod ss2-0 to enter Running - 
Ready=true, currently Running - Ready=true Apr 27 13:01:21.535: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 27 13:01:21.535: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Apr 27 13:01:31.524: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 27 13:01:31.524: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 27 13:01:31.524: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Apr 27 13:01:31.574: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 27 13:01:41.619: INFO: Updating stateful set ss2 Apr 27 13:01:41.647: INFO: Waiting for Pod statefulset-1430/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 27 13:01:51.656: INFO: Waiting for Pod statefulset-1430/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Apr 27 13:02:01.791: INFO: Found 2 stateful pods, waiting for 3 Apr 27 13:02:11.795: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 27 13:02:11.795: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 27 13:02:11.795: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 27 13:02:11.818: INFO: Updating stateful set ss2 Apr 27 13:02:11.847: INFO: Waiting for Pod statefulset-1430/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 27 13:02:21.872: INFO: Updating stateful set ss2 Apr 27 
13:02:21.902: INFO: Waiting for StatefulSet statefulset-1430/ss2 to complete update Apr 27 13:02:21.902: INFO: Waiting for Pod statefulset-1430/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 27 13:02:31.911: INFO: Waiting for StatefulSet statefulset-1430/ss2 to complete update Apr 27 13:02:31.911: INFO: Waiting for Pod statefulset-1430/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 27 13:02:41.910: INFO: Deleting all statefulset in ns statefulset-1430 Apr 27 13:02:41.913: INFO: Scaling statefulset ss2 to 0 Apr 27 13:03:01.949: INFO: Waiting for statefulset status.replicas updated to 0 Apr 27 13:03:01.952: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:03:01.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1430" for this suite. 
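The three phases logged above (holding the update back, a canary, then a phased rollout) are all driven by one field: `spec.updateStrategy.rollingUpdate.partition`. Pods with an ordinal >= partition are updated to the new revision; pods below it stay on the old one. So partition >= replicas updates nothing, partition 2 updates only ss2-2 (the canary), and lowering it step by step rolls ss2-1 and then ss2-0. The manifest below is a hedged sketch under that reading; labels and the selector are illustrative, while the set name, service name, replica count, and images come from the log.

```yaml
# Hedged sketch of the StatefulSet exercised above; only the partition
# mechanism is the point, other details are illustrative.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test           # matches "Creating service test" in the log
  replicas: 3
  selector:
    matchLabels: {app: ss2}   # labels are an assumption, not from the log
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 3            # >= replicas: no pod is updated
      # partition: 2 -> canary: only ss2-2 moves to the new revision
      # partition: 1, then 0 -> phased rollout of ss2-1, then ss2-0
  template:
    metadata:
      labels: {app: ss2}
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine  # the updated image above
```

Each partition change produces a new controller revision, which is why the log waits for pods to move from revision ss2-6c5cd755cd to update revision ss2-7c9b54fd4c one ordinal at a time.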
Apr 27 13:03:07.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:03:08.057: INFO: namespace statefulset-1430 deletion completed in 6.08686939s • [SLOW TEST:116.665 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:03:08.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:03:13.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-468" for this 
suite. Apr 27 13:03:35.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:03:35.316: INFO: namespace replication-controller-468 deletion completed in 22.088591432s • [SLOW TEST:27.258 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:03:35.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 27 13:03:35.378: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:03:51.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2313" 
for this suite. Apr 27 13:03:57.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:03:57.958: INFO: namespace pods-2313 deletion completed in 6.083135223s • [SLOW TEST:22.642 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:03:57.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-rh62 STEP: Creating a pod to test atomic-volume-subpath Apr 27 13:03:58.029: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-rh62" in namespace "subpath-7422" to be "success or failure" Apr 27 13:03:58.032: INFO: Pod "pod-subpath-test-secret-rh62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.38671ms Apr 27 13:04:00.036: INFO: Pod "pod-subpath-test-secret-rh62": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006381315s Apr 27 13:04:02.040: INFO: Pod "pod-subpath-test-secret-rh62": Phase="Running", Reason="", readiness=true. Elapsed: 4.010526069s Apr 27 13:04:04.044: INFO: Pod "pod-subpath-test-secret-rh62": Phase="Running", Reason="", readiness=true. Elapsed: 6.014771737s Apr 27 13:04:06.048: INFO: Pod "pod-subpath-test-secret-rh62": Phase="Running", Reason="", readiness=true. Elapsed: 8.019147447s Apr 27 13:04:08.053: INFO: Pod "pod-subpath-test-secret-rh62": Phase="Running", Reason="", readiness=true. Elapsed: 10.023372067s Apr 27 13:04:10.058: INFO: Pod "pod-subpath-test-secret-rh62": Phase="Running", Reason="", readiness=true. Elapsed: 12.02837597s Apr 27 13:04:12.062: INFO: Pod "pod-subpath-test-secret-rh62": Phase="Running", Reason="", readiness=true. Elapsed: 14.032953251s Apr 27 13:04:14.067: INFO: Pod "pod-subpath-test-secret-rh62": Phase="Running", Reason="", readiness=true. Elapsed: 16.03745194s Apr 27 13:04:16.071: INFO: Pod "pod-subpath-test-secret-rh62": Phase="Running", Reason="", readiness=true. Elapsed: 18.041896521s Apr 27 13:04:18.075: INFO: Pod "pod-subpath-test-secret-rh62": Phase="Running", Reason="", readiness=true. Elapsed: 20.046022622s Apr 27 13:04:20.080: INFO: Pod "pod-subpath-test-secret-rh62": Phase="Running", Reason="", readiness=true. Elapsed: 22.050372302s Apr 27 13:04:22.084: INFO: Pod "pod-subpath-test-secret-rh62": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.054711589s STEP: Saw pod success Apr 27 13:04:22.084: INFO: Pod "pod-subpath-test-secret-rh62" satisfied condition "success or failure" Apr 27 13:04:22.087: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-rh62 container test-container-subpath-secret-rh62: STEP: delete the pod Apr 27 13:04:22.110: INFO: Waiting for pod pod-subpath-test-secret-rh62 to disappear Apr 27 13:04:22.120: INFO: Pod pod-subpath-test-secret-rh62 no longer exists STEP: Deleting pod pod-subpath-test-secret-rh62 Apr 27 13:04:22.120: INFO: Deleting pod "pod-subpath-test-secret-rh62" in namespace "subpath-7422" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:04:22.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7422" for this suite. Apr 27 13:04:28.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:04:28.263: INFO: namespace subpath-7422 deletion completed in 6.120292232s • [SLOW TEST:30.304 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: 
Creating a kubernetes client Apr 27 13:04:28.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-42c2f768-9de5-41c7-a173-e17b64fd5e6e STEP: Creating a pod to test consume secrets Apr 27 13:04:28.355: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7f200929-e27f-4efd-9e56-8378fccb4820" in namespace "projected-6091" to be "success or failure" Apr 27 13:04:28.375: INFO: Pod "pod-projected-secrets-7f200929-e27f-4efd-9e56-8378fccb4820": Phase="Pending", Reason="", readiness=false. Elapsed: 18.963456ms Apr 27 13:04:30.379: INFO: Pod "pod-projected-secrets-7f200929-e27f-4efd-9e56-8378fccb4820": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0236454s Apr 27 13:04:32.384: INFO: Pod "pod-projected-secrets-7f200929-e27f-4efd-9e56-8378fccb4820": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02821693s STEP: Saw pod success Apr 27 13:04:32.384: INFO: Pod "pod-projected-secrets-7f200929-e27f-4efd-9e56-8378fccb4820" satisfied condition "success or failure" Apr 27 13:04:32.387: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-7f200929-e27f-4efd-9e56-8378fccb4820 container projected-secret-volume-test: STEP: delete the pod Apr 27 13:04:32.422: INFO: Waiting for pod pod-projected-secrets-7f200929-e27f-4efd-9e56-8378fccb4820 to disappear Apr 27 13:04:32.434: INFO: Pod pod-projected-secrets-7f200929-e27f-4efd-9e56-8378fccb4820 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:04:32.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6091" for this suite. Apr 27 13:04:38.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:04:38.540: INFO: namespace projected-6091 deletion completed in 6.102082814s • [SLOW TEST:10.277 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:04:38.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 27 13:04:46.710: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 27 13:04:46.727: INFO: Pod pod-with-poststart-exec-hook still exists Apr 27 13:04:48.727: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 27 13:04:48.731: INFO: Pod pod-with-poststart-exec-hook still exists Apr 27 13:04:50.728: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 27 13:04:50.731: INFO: Pod pod-with-poststart-exec-hook still exists Apr 27 13:04:52.727: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 27 13:04:52.732: INFO: Pod pod-with-poststart-exec-hook still exists Apr 27 13:04:54.727: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 27 13:04:54.732: INFO: Pod pod-with-poststart-exec-hook still exists Apr 27 13:04:56.727: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 27 13:04:56.731: INFO: Pod pod-with-poststart-exec-hook still exists Apr 27 13:04:58.727: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 27 13:04:58.732: INFO: Pod pod-with-poststart-exec-hook still exists Apr 27 13:05:00.727: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 27 13:05:00.732: INFO: Pod pod-with-poststart-exec-hook still exists Apr 27 13:05:02.727: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 27 13:05:02.731: INFO: 
Pod pod-with-poststart-exec-hook still exists Apr 27 13:05:04.727: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 27 13:05:04.732: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:05:04.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2843" for this suite. Apr 27 13:05:26.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:05:26.832: INFO: namespace container-lifecycle-hook-2843 deletion completed in 22.096073033s • [SLOW TEST:48.292 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:05:26.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 27 13:05:26.908: INFO: Waiting up to 5m0s for pod "downward-api-e73326e0-65e1-4588-bf59-5d584592eee3" in namespace "downward-api-3945" to be "success or failure" Apr 27 13:05:26.912: INFO: Pod "downward-api-e73326e0-65e1-4588-bf59-5d584592eee3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.162954ms Apr 27 13:05:28.916: INFO: Pod "downward-api-e73326e0-65e1-4588-bf59-5d584592eee3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007220647s Apr 27 13:05:30.920: INFO: Pod "downward-api-e73326e0-65e1-4588-bf59-5d584592eee3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011881836s STEP: Saw pod success Apr 27 13:05:30.920: INFO: Pod "downward-api-e73326e0-65e1-4588-bf59-5d584592eee3" satisfied condition "success or failure" Apr 27 13:05:30.924: INFO: Trying to get logs from node iruya-worker2 pod downward-api-e73326e0-65e1-4588-bf59-5d584592eee3 container dapi-container: STEP: delete the pod Apr 27 13:05:30.989: INFO: Waiting for pod downward-api-e73326e0-65e1-4588-bf59-5d584592eee3 to disappear Apr 27 13:05:30.991: INFO: Pod downward-api-e73326e0-65e1-4588-bf59-5d584592eee3 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:05:30.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3945" for this suite. 
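The downward API test above injects the pod's own UID into its container as an environment variable via a `fieldRef`. A hedged sketch of such a pod follows; the pod name, image, and command are illustrative (the real pod name is generated, e.g. downward-api-&lt;uuid&gt;), while the container name `dapi-container` matches the log.

```yaml
# Illustrative manifest; only the env/fieldRef mechanism reflects the test.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container      # container name matches the log
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID           # variable name is an assumption
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # the field this conformance test exercises
```

The framework then reads the container's logs (the "Trying to get logs ... container dapi-container" step above) and checks that the printed UID matches the pod's actual `metadata.uid`.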
Apr 27 13:05:37.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:05:37.104: INFO: namespace downward-api-3945 deletion completed in 6.109145656s • [SLOW TEST:10.272 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:05:37.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 27 13:05:41.205: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:05:41.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4658" for this suite. Apr 27 13:05:47.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:05:47.369: INFO: namespace container-runtime-4658 deletion completed in 6.123306419s • [SLOW TEST:10.264 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:05:47.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory 
request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 27 13:05:47.498: INFO: Waiting up to 5m0s for pod "downwardapi-volume-727e5214-908a-41fe-931c-010e2d4aa8ac" in namespace "projected-5162" to be "success or failure" Apr 27 13:05:47.501: INFO: Pod "downwardapi-volume-727e5214-908a-41fe-931c-010e2d4aa8ac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.014594ms Apr 27 13:05:49.506: INFO: Pod "downwardapi-volume-727e5214-908a-41fe-931c-010e2d4aa8ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007491154s Apr 27 13:05:51.512: INFO: Pod "downwardapi-volume-727e5214-908a-41fe-931c-010e2d4aa8ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013488891s STEP: Saw pod success Apr 27 13:05:51.512: INFO: Pod "downwardapi-volume-727e5214-908a-41fe-931c-010e2d4aa8ac" satisfied condition "success or failure" Apr 27 13:05:51.514: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-727e5214-908a-41fe-931c-010e2d4aa8ac container client-container: STEP: delete the pod Apr 27 13:05:51.555: INFO: Waiting for pod downwardapi-volume-727e5214-908a-41fe-931c-010e2d4aa8ac to disappear Apr 27 13:05:51.559: INFO: Pod downwardapi-volume-727e5214-908a-41fe-931c-010e2d4aa8ac no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:05:51.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5162" for this suite. 
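The projected downwardAPI test above exposes the container's memory request as a file in a projected volume rather than as an env var. Below is a hedged sketch of that wiring; the request value and mount path are illustrative, while the container name `client-container` matches the log. Note that a `resourceFieldRef` used in a volume must name the container explicitly.

```yaml
# Illustrative manifest; only the projected/downwardAPI plumbing reflects
# what the test exercises.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # real name is generated
spec:
  restartPolicy: Never
  containers:
  - name: client-container        # container name matches the log
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi              # illustrative; the projected file reflects this
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
```

As with the other "success or failure" tests, the framework reads the container's logs and asserts the file contents equal the declared request.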
Apr 27 13:05:57.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:05:57.649: INFO: namespace projected-5162 deletion completed in 6.088105307s • [SLOW TEST:10.279 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:05:57.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 27 13:05:57.738: INFO: Waiting up to 5m0s for pod "pod-496ee3c1-1bab-4ee2-8b54-bf2444fb7cf7" in namespace "emptydir-2850" to be "success or failure" Apr 27 13:05:57.745: INFO: Pod "pod-496ee3c1-1bab-4ee2-8b54-bf2444fb7cf7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064779ms Apr 27 13:05:59.749: INFO: Pod "pod-496ee3c1-1bab-4ee2-8b54-bf2444fb7cf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010371912s Apr 27 13:06:01.753: INFO: Pod "pod-496ee3c1-1bab-4ee2-8b54-bf2444fb7cf7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01486152s STEP: Saw pod success Apr 27 13:06:01.753: INFO: Pod "pod-496ee3c1-1bab-4ee2-8b54-bf2444fb7cf7" satisfied condition "success or failure" Apr 27 13:06:01.756: INFO: Trying to get logs from node iruya-worker2 pod pod-496ee3c1-1bab-4ee2-8b54-bf2444fb7cf7 container test-container: STEP: delete the pod Apr 27 13:06:01.776: INFO: Waiting for pod pod-496ee3c1-1bab-4ee2-8b54-bf2444fb7cf7 to disappear Apr 27 13:06:01.802: INFO: Pod pod-496ee3c1-1bab-4ee2-8b54-bf2444fb7cf7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:06:01.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2850" for this suite. Apr 27 13:06:07.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:06:07.938: INFO: namespace emptydir-2850 deletion completed in 6.132811837s • [SLOW TEST:10.289 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:06:07.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:06:08.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3095" for this suite. Apr 27 13:06:14.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:06:14.100: INFO: namespace services-3095 deletion completed in 6.086592057s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.162 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:06:14.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: 
Creating projection with secret that has name secret-emptykey-test-a57ee817-e5be-41b3-addb-e8d3c0994594 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:06:14.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4853" for this suite. Apr 27 13:06:20.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:06:20.269: INFO: namespace secrets-4853 deletion completed in 6.089790274s • [SLOW TEST:6.168 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:06:20.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-40ded0b8-6067-46b5-b42d-efb9bdf05672 in 
namespace container-probe-7098 Apr 27 13:06:24.362: INFO: Started pod busybox-40ded0b8-6067-46b5-b42d-efb9bdf05672 in namespace container-probe-7098 STEP: checking the pod's current state and verifying that restartCount is present Apr 27 13:06:24.365: INFO: Initial restart count of pod busybox-40ded0b8-6067-46b5-b42d-efb9bdf05672 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:10:24.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7098" for this suite. Apr 27 13:10:31.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:10:31.103: INFO: namespace container-probe-7098 deletion completed in 6.119344489s • [SLOW TEST:250.833 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:10:31.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting 
up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-t279 STEP: Creating a pod to test atomic-volume-subpath Apr 27 13:10:31.201: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-t279" in namespace "subpath-9582" to be "success or failure" Apr 27 13:10:31.204: INFO: Pod "pod-subpath-test-projected-t279": Phase="Pending", Reason="", readiness=false. Elapsed: 3.301958ms Apr 27 13:10:33.208: INFO: Pod "pod-subpath-test-projected-t279": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007067939s Apr 27 13:10:35.213: INFO: Pod "pod-subpath-test-projected-t279": Phase="Running", Reason="", readiness=true. Elapsed: 4.0119636s Apr 27 13:10:37.217: INFO: Pod "pod-subpath-test-projected-t279": Phase="Running", Reason="", readiness=true. Elapsed: 6.016411255s Apr 27 13:10:39.222: INFO: Pod "pod-subpath-test-projected-t279": Phase="Running", Reason="", readiness=true. Elapsed: 8.020564228s Apr 27 13:10:41.306: INFO: Pod "pod-subpath-test-projected-t279": Phase="Running", Reason="", readiness=true. Elapsed: 10.104531676s Apr 27 13:10:43.310: INFO: Pod "pod-subpath-test-projected-t279": Phase="Running", Reason="", readiness=true. Elapsed: 12.108960604s Apr 27 13:10:45.314: INFO: Pod "pod-subpath-test-projected-t279": Phase="Running", Reason="", readiness=true. Elapsed: 14.113101152s Apr 27 13:10:47.319: INFO: Pod "pod-subpath-test-projected-t279": Phase="Running", Reason="", readiness=true. Elapsed: 16.117638697s Apr 27 13:10:49.323: INFO: Pod "pod-subpath-test-projected-t279": Phase="Running", Reason="", readiness=true. Elapsed: 18.122197418s Apr 27 13:10:51.328: INFO: Pod "pod-subpath-test-projected-t279": Phase="Running", Reason="", readiness=true. Elapsed: 20.126733013s Apr 27 13:10:53.332: INFO: Pod "pod-subpath-test-projected-t279": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.131194621s Apr 27 13:10:55.337: INFO: Pod "pod-subpath-test-projected-t279": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.135922177s STEP: Saw pod success Apr 27 13:10:55.337: INFO: Pod "pod-subpath-test-projected-t279" satisfied condition "success or failure" Apr 27 13:10:55.340: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-t279 container test-container-subpath-projected-t279: STEP: delete the pod Apr 27 13:10:55.373: INFO: Waiting for pod pod-subpath-test-projected-t279 to disappear Apr 27 13:10:55.398: INFO: Pod pod-subpath-test-projected-t279 no longer exists STEP: Deleting pod pod-subpath-test-projected-t279 Apr 27 13:10:55.398: INFO: Deleting pod "pod-subpath-test-projected-t279" in namespace "subpath-9582" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:10:55.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9582" for this suite. 
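The wait loop recorded above ("Waiting up to 5m0s for pod ... to be \"success or failure\"", polling roughly every 2s and logging the phase and elapsed time until the pod leaves Pending/Running) can be sketched as a generic poll-with-timeout helper. This is an illustrative Python sketch, not the e2e framework's actual Go implementation; `wait_for_pod_phase` and its injected `get_phase`/`now`/`sleep` callables are hypothetical names standing in for a real API query and clock.

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0,
                       now=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase or timeout elapses.

    Mirrors the shape of the log's wait loop: each poll observes the pod
    phase, and the loop ends when the phase is Succeeded or Failed
    ("success or failure"). Returns (phase, elapsed_seconds).
    """
    start = now()
    while True:
        phase = get_phase()          # e.g. "Pending", "Running", "Succeeded"
        elapsed = now() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed    # terminal phase reached
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        sleep(interval)              # back off before the next poll
```

With a fake clock injected, the helper reproduces the Pending → Running → Succeeded progression seen in the log without touching a real cluster.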
Apr 27 13:11:01.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:11:01.506: INFO: namespace subpath-9582 deletion completed in 6.102470273s • [SLOW TEST:30.403 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:11:01.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 27 13:11:01.603: INFO: Waiting up to 5m0s for pod "pod-74f361d1-70df-4673-93ee-4b6d94c1b492" in namespace "emptydir-6431" to be "success or failure" Apr 27 13:11:01.612: INFO: Pod "pod-74f361d1-70df-4673-93ee-4b6d94c1b492": Phase="Pending", Reason="", readiness=false. Elapsed: 9.555572ms Apr 27 13:11:03.686: INFO: Pod "pod-74f361d1-70df-4673-93ee-4b6d94c1b492": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.083335829s Apr 27 13:11:05.691: INFO: Pod "pod-74f361d1-70df-4673-93ee-4b6d94c1b492": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.087877528s STEP: Saw pod success Apr 27 13:11:05.691: INFO: Pod "pod-74f361d1-70df-4673-93ee-4b6d94c1b492" satisfied condition "success or failure" Apr 27 13:11:05.694: INFO: Trying to get logs from node iruya-worker pod pod-74f361d1-70df-4673-93ee-4b6d94c1b492 container test-container: STEP: delete the pod Apr 27 13:11:05.754: INFO: Waiting for pod pod-74f361d1-70df-4673-93ee-4b6d94c1b492 to disappear Apr 27 13:11:05.757: INFO: Pod pod-74f361d1-70df-4673-93ee-4b6d94c1b492 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:11:05.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6431" for this suite. Apr 27 13:11:11.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:11:11.855: INFO: namespace emptydir-6431 deletion completed in 6.091902935s • [SLOW TEST:10.349 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 
13:11:11.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-631ed997-abd6-4682-8162-d4aba9480523 in namespace container-probe-4744 Apr 27 13:11:15.928: INFO: Started pod liveness-631ed997-abd6-4682-8162-d4aba9480523 in namespace container-probe-4744 STEP: checking the pod's current state and verifying that restartCount is present Apr 27 13:11:15.931: INFO: Initial restart count of pod liveness-631ed997-abd6-4682-8162-d4aba9480523 is 0 Apr 27 13:11:27.960: INFO: Restart count of pod container-probe-4744/liveness-631ed997-abd6-4682-8162-d4aba9480523 is now 1 (12.028537633s elapsed) Apr 27 13:11:48.010: INFO: Restart count of pod container-probe-4744/liveness-631ed997-abd6-4682-8162-d4aba9480523 is now 2 (32.078856388s elapsed) Apr 27 13:12:08.052: INFO: Restart count of pod container-probe-4744/liveness-631ed997-abd6-4682-8162-d4aba9480523 is now 3 (52.121064567s elapsed) Apr 27 13:12:28.095: INFO: Restart count of pod container-probe-4744/liveness-631ed997-abd6-4682-8162-d4aba9480523 is now 4 (1m12.164386064s elapsed) Apr 27 13:13:28.581: INFO: Restart count of pod container-probe-4744/liveness-631ed997-abd6-4682-8162-d4aba9480523 is now 5 (2m12.649942757s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:13:28.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4744" for this suite. 
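The restart-count observations above (0 → 1 → 2 → 3 → 4 → 5 over roughly 2m12s) are what the "monotonically increasing restart count" spec asserts on. A minimal sketch of that invariant, assuming the observed counts have already been collected into an ordered list; `restart_counts_monotonic` is a hypothetical helper for illustration, not the framework's code:

```python
def restart_counts_monotonic(counts):
    """Return True if the observed restartCount sequence never decreases.

    counts is the ordered list of restartCount values polled from the pod
    status, e.g. [0, 1, 2, 3, 4, 5] as in the log above. A decrease at any
    point would violate the invariant the test checks.
    """
    return all(a <= b for a, b in zip(counts, counts[1:]))
```

On the sequence from the log this returns True; a sequence that ever dips (say after a stale status read) would return False.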
Apr 27 13:13:34.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:13:34.695: INFO: namespace container-probe-4744 deletion completed in 6.084652767s • [SLOW TEST:142.838 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:13:34.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-6420 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6420 STEP: Waiting until all stateful set 
ss replicas will be running in namespace statefulset-6420 Apr 27 13:13:34.799: INFO: Found 0 stateful pods, waiting for 1 Apr 27 13:13:44.803: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 27 13:13:44.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6420 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 27 13:13:47.759: INFO: stderr: "I0427 13:13:47.643401 39 log.go:172] (0xc000ac8630) (0xc000aa0960) Create stream\nI0427 13:13:47.643448 39 log.go:172] (0xc000ac8630) (0xc000aa0960) Stream added, broadcasting: 1\nI0427 13:13:47.646407 39 log.go:172] (0xc000ac8630) Reply frame received for 1\nI0427 13:13:47.646444 39 log.go:172] (0xc000ac8630) (0xc000aa0a00) Create stream\nI0427 13:13:47.646468 39 log.go:172] (0xc000ac8630) (0xc000aa0a00) Stream added, broadcasting: 3\nI0427 13:13:47.647643 39 log.go:172] (0xc000ac8630) Reply frame received for 3\nI0427 13:13:47.647680 39 log.go:172] (0xc000ac8630) (0xc000aa0aa0) Create stream\nI0427 13:13:47.647704 39 log.go:172] (0xc000ac8630) (0xc000aa0aa0) Stream added, broadcasting: 5\nI0427 13:13:47.649454 39 log.go:172] (0xc000ac8630) Reply frame received for 5\nI0427 13:13:47.724434 39 log.go:172] (0xc000ac8630) Data frame received for 5\nI0427 13:13:47.724467 39 log.go:172] (0xc000aa0aa0) (5) Data frame handling\nI0427 13:13:47.724485 39 log.go:172] (0xc000aa0aa0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0427 13:13:47.749729 39 log.go:172] (0xc000ac8630) Data frame received for 3\nI0427 13:13:47.749781 39 log.go:172] (0xc000aa0a00) (3) Data frame handling\nI0427 13:13:47.749799 39 log.go:172] (0xc000aa0a00) (3) Data frame sent\nI0427 13:13:47.749811 39 log.go:172] (0xc000ac8630) Data frame received for 3\nI0427 13:13:47.749821 39 log.go:172] (0xc000aa0a00) (3) Data frame handling\nI0427 
13:13:47.749875 39 log.go:172] (0xc000ac8630) Data frame received for 5\nI0427 13:13:47.749908 39 log.go:172] (0xc000aa0aa0) (5) Data frame handling\nI0427 13:13:47.751776 39 log.go:172] (0xc000ac8630) Data frame received for 1\nI0427 13:13:47.751801 39 log.go:172] (0xc000aa0960) (1) Data frame handling\nI0427 13:13:47.751831 39 log.go:172] (0xc000aa0960) (1) Data frame sent\nI0427 13:13:47.751853 39 log.go:172] (0xc000ac8630) (0xc000aa0960) Stream removed, broadcasting: 1\nI0427 13:13:47.751936 39 log.go:172] (0xc000ac8630) Go away received\nI0427 13:13:47.752283 39 log.go:172] (0xc000ac8630) (0xc000aa0960) Stream removed, broadcasting: 1\nI0427 13:13:47.752302 39 log.go:172] (0xc000ac8630) (0xc000aa0a00) Stream removed, broadcasting: 3\nI0427 13:13:47.752314 39 log.go:172] (0xc000ac8630) (0xc000aa0aa0) Stream removed, broadcasting: 5\n" Apr 27 13:13:47.759: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 27 13:13:47.759: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 27 13:13:47.763: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 27 13:13:57.767: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 27 13:13:57.767: INFO: Waiting for statefulset status.replicas updated to 0 Apr 27 13:13:57.788: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999618s Apr 27 13:13:58.793: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.988075745s Apr 27 13:13:59.797: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.98327474s Apr 27 13:14:00.802: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.979646175s Apr 27 13:14:01.807: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.974655348s Apr 27 13:14:02.812: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.969398916s Apr 27 
13:14:03.816: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.964794076s Apr 27 13:14:04.821: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.960303207s Apr 27 13:14:05.826: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.955404689s Apr 27 13:14:06.831: INFO: Verifying statefulset ss doesn't scale past 1 for another 949.942888ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6420 Apr 27 13:14:07.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6420 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 27 13:14:08.066: INFO: stderr: "I0427 13:14:07.970970 68 log.go:172] (0xc0008e8370) (0xc0007246e0) Create stream\nI0427 13:14:07.971042 68 log.go:172] (0xc0008e8370) (0xc0007246e0) Stream added, broadcasting: 1\nI0427 13:14:07.975097 68 log.go:172] (0xc0008e8370) Reply frame received for 1\nI0427 13:14:07.975167 68 log.go:172] (0xc0008e8370) (0xc0005dc280) Create stream\nI0427 13:14:07.975194 68 log.go:172] (0xc0008e8370) (0xc0005dc280) Stream added, broadcasting: 3\nI0427 13:14:07.976252 68 log.go:172] (0xc0008e8370) Reply frame received for 3\nI0427 13:14:07.976292 68 log.go:172] (0xc0008e8370) (0xc0005dc320) Create stream\nI0427 13:14:07.976308 68 log.go:172] (0xc0008e8370) (0xc0005dc320) Stream added, broadcasting: 5\nI0427 13:14:07.977399 68 log.go:172] (0xc0008e8370) Reply frame received for 5\nI0427 13:14:08.058565 68 log.go:172] (0xc0008e8370) Data frame received for 5\nI0427 13:14:08.058602 68 log.go:172] (0xc0005dc320) (5) Data frame handling\nI0427 13:14:08.058620 68 log.go:172] (0xc0005dc320) (5) Data frame sent\nI0427 13:14:08.058633 68 log.go:172] (0xc0008e8370) Data frame received for 5\nI0427 13:14:08.058653 68 log.go:172] (0xc0005dc320) (5) Data frame handling\nI0427 13:14:08.058677 68 log.go:172] (0xc0008e8370) Data frame received for 3\nI0427 
13:14:08.058714 68 log.go:172] (0xc0005dc280) (3) Data frame handling\nI0427 13:14:08.058729 68 log.go:172] (0xc0005dc280) (3) Data frame sent\nI0427 13:14:08.058741 68 log.go:172] (0xc0008e8370) Data frame received for 3\nI0427 13:14:08.058753 68 log.go:172] (0xc0005dc280) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0427 13:14:08.059920 68 log.go:172] (0xc0008e8370) Data frame received for 1\nI0427 13:14:08.059954 68 log.go:172] (0xc0007246e0) (1) Data frame handling\nI0427 13:14:08.059970 68 log.go:172] (0xc0007246e0) (1) Data frame sent\nI0427 13:14:08.059994 68 log.go:172] (0xc0008e8370) (0xc0007246e0) Stream removed, broadcasting: 1\nI0427 13:14:08.060023 68 log.go:172] (0xc0008e8370) Go away received\nI0427 13:14:08.060461 68 log.go:172] (0xc0008e8370) (0xc0007246e0) Stream removed, broadcasting: 1\nI0427 13:14:08.060485 68 log.go:172] (0xc0008e8370) (0xc0005dc280) Stream removed, broadcasting: 3\nI0427 13:14:08.060497 68 log.go:172] (0xc0008e8370) (0xc0005dc320) Stream removed, broadcasting: 5\n" Apr 27 13:14:08.066: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 27 13:14:08.066: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 27 13:14:08.069: INFO: Found 1 stateful pods, waiting for 3 Apr 27 13:14:18.074: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 27 13:14:18.074: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 27 13:14:18.074: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 27 13:14:18.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6420 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ 
|| true' Apr 27 13:14:18.279: INFO: stderr: "I0427 13:14:18.198737 88 log.go:172] (0xc00098c2c0) (0xc00081a5a0) Create stream\nI0427 13:14:18.198796 88 log.go:172] (0xc00098c2c0) (0xc00081a5a0) Stream added, broadcasting: 1\nI0427 13:14:18.201438 88 log.go:172] (0xc00098c2c0) Reply frame received for 1\nI0427 13:14:18.201468 88 log.go:172] (0xc00098c2c0) (0xc000916000) Create stream\nI0427 13:14:18.201477 88 log.go:172] (0xc00098c2c0) (0xc000916000) Stream added, broadcasting: 3\nI0427 13:14:18.202511 88 log.go:172] (0xc00098c2c0) Reply frame received for 3\nI0427 13:14:18.202549 88 log.go:172] (0xc00098c2c0) (0xc0002ac280) Create stream\nI0427 13:14:18.202562 88 log.go:172] (0xc00098c2c0) (0xc0002ac280) Stream added, broadcasting: 5\nI0427 13:14:18.203588 88 log.go:172] (0xc00098c2c0) Reply frame received for 5\nI0427 13:14:18.271575 88 log.go:172] (0xc00098c2c0) Data frame received for 5\nI0427 13:14:18.271633 88 log.go:172] (0xc0002ac280) (5) Data frame handling\nI0427 13:14:18.271656 88 log.go:172] (0xc0002ac280) (5) Data frame sent\nI0427 13:14:18.271676 88 log.go:172] (0xc00098c2c0) Data frame received for 5\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0427 13:14:18.271698 88 log.go:172] (0xc0002ac280) (5) Data frame handling\nI0427 13:14:18.271763 88 log.go:172] (0xc00098c2c0) Data frame received for 3\nI0427 13:14:18.271787 88 log.go:172] (0xc000916000) (3) Data frame handling\nI0427 13:14:18.271819 88 log.go:172] (0xc000916000) (3) Data frame sent\nI0427 13:14:18.271841 88 log.go:172] (0xc00098c2c0) Data frame received for 3\nI0427 13:14:18.271856 88 log.go:172] (0xc000916000) (3) Data frame handling\nI0427 13:14:18.273641 88 log.go:172] (0xc00098c2c0) Data frame received for 1\nI0427 13:14:18.273672 88 log.go:172] (0xc00081a5a0) (1) Data frame handling\nI0427 13:14:18.273687 88 log.go:172] (0xc00081a5a0) (1) Data frame sent\nI0427 13:14:18.273719 88 log.go:172] (0xc00098c2c0) (0xc00081a5a0) Stream removed, broadcasting: 1\nI0427 13:14:18.273751 88 
log.go:172] (0xc00098c2c0) Go away received\nI0427 13:14:18.274106 88 log.go:172] (0xc00098c2c0) (0xc00081a5a0) Stream removed, broadcasting: 1\nI0427 13:14:18.274129 88 log.go:172] (0xc00098c2c0) (0xc000916000) Stream removed, broadcasting: 3\nI0427 13:14:18.274137 88 log.go:172] (0xc00098c2c0) (0xc0002ac280) Stream removed, broadcasting: 5\n" Apr 27 13:14:18.279: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 27 13:14:18.279: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 27 13:14:18.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6420 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 27 13:14:18.511: INFO: stderr: "I0427 13:14:18.415911 107 log.go:172] (0xc0006ba630) (0xc000602aa0) Create stream\nI0427 13:14:18.415985 107 log.go:172] (0xc0006ba630) (0xc000602aa0) Stream added, broadcasting: 1\nI0427 13:14:18.418862 107 log.go:172] (0xc0006ba630) Reply frame received for 1\nI0427 13:14:18.418907 107 log.go:172] (0xc0006ba630) (0xc000718000) Create stream\nI0427 13:14:18.418920 107 log.go:172] (0xc0006ba630) (0xc000718000) Stream added, broadcasting: 3\nI0427 13:14:18.420024 107 log.go:172] (0xc0006ba630) Reply frame received for 3\nI0427 13:14:18.420099 107 log.go:172] (0xc0006ba630) (0xc000888000) Create stream\nI0427 13:14:18.420117 107 log.go:172] (0xc0006ba630) (0xc000888000) Stream added, broadcasting: 5\nI0427 13:14:18.421034 107 log.go:172] (0xc0006ba630) Reply frame received for 5\nI0427 13:14:18.475028 107 log.go:172] (0xc0006ba630) Data frame received for 5\nI0427 13:14:18.475077 107 log.go:172] (0xc000888000) (5) Data frame handling\nI0427 13:14:18.475110 107 log.go:172] (0xc000888000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0427 13:14:18.503762 107 log.go:172] (0xc0006ba630) Data frame received for 3\nI0427 
13:14:18.503795 107 log.go:172] (0xc000718000) (3) Data frame handling\nI0427 13:14:18.503827 107 log.go:172] (0xc000718000) (3) Data frame sent\nI0427 13:14:18.503846 107 log.go:172] (0xc0006ba630) Data frame received for 3\nI0427 13:14:18.503862 107 log.go:172] (0xc000718000) (3) Data frame handling\nI0427 13:14:18.503884 107 log.go:172] (0xc0006ba630) Data frame received for 5\nI0427 13:14:18.504007 107 log.go:172] (0xc000888000) (5) Data frame handling\nI0427 13:14:18.506039 107 log.go:172] (0xc0006ba630) Data frame received for 1\nI0427 13:14:18.506071 107 log.go:172] (0xc000602aa0) (1) Data frame handling\nI0427 13:14:18.506095 107 log.go:172] (0xc000602aa0) (1) Data frame sent\nI0427 13:14:18.506109 107 log.go:172] (0xc0006ba630) (0xc000602aa0) Stream removed, broadcasting: 1\nI0427 13:14:18.506131 107 log.go:172] (0xc0006ba630) Go away received\nI0427 13:14:18.507145 107 log.go:172] (0xc0006ba630) (0xc000602aa0) Stream removed, broadcasting: 1\nI0427 13:14:18.507195 107 log.go:172] (0xc0006ba630) (0xc000718000) Stream removed, broadcasting: 3\nI0427 13:14:18.507244 107 log.go:172] (0xc0006ba630) (0xc000888000) Stream removed, broadcasting: 5\n" Apr 27 13:14:18.511: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 27 13:14:18.511: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 27 13:14:18.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6420 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 27 13:14:18.745: INFO: stderr: "I0427 13:14:18.642285 127 log.go:172] (0xc0009a4630) (0xc000534b40) Create stream\nI0427 13:14:18.642341 127 log.go:172] (0xc0009a4630) (0xc000534b40) Stream added, broadcasting: 1\nI0427 13:14:18.645003 127 log.go:172] (0xc0009a4630) Reply frame received for 1\nI0427 13:14:18.645041 127 log.go:172] (0xc0009a4630) (0xc000a46000) Create 
stream\nI0427 13:14:18.645062 127 log.go:172] (0xc0009a4630) (0xc000a46000) Stream added, broadcasting: 3\nI0427 13:14:18.646250 127 log.go:172] (0xc0009a4630) Reply frame received for 3\nI0427 13:14:18.646291 127 log.go:172] (0xc0009a4630) (0xc000a460a0) Create stream\nI0427 13:14:18.646302 127 log.go:172] (0xc0009a4630) (0xc000a460a0) Stream added, broadcasting: 5\nI0427 13:14:18.647365 127 log.go:172] (0xc0009a4630) Reply frame received for 5\nI0427 13:14:18.710409 127 log.go:172] (0xc0009a4630) Data frame received for 5\nI0427 13:14:18.710440 127 log.go:172] (0xc000a460a0) (5) Data frame handling\nI0427 13:14:18.710461 127 log.go:172] (0xc000a460a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0427 13:14:18.737833 127 log.go:172] (0xc0009a4630) Data frame received for 3\nI0427 13:14:18.737877 127 log.go:172] (0xc000a46000) (3) Data frame handling\nI0427 13:14:18.737893 127 log.go:172] (0xc000a46000) (3) Data frame sent\nI0427 13:14:18.737904 127 log.go:172] (0xc0009a4630) Data frame received for 3\nI0427 13:14:18.737915 127 log.go:172] (0xc000a46000) (3) Data frame handling\nI0427 13:14:18.737932 127 log.go:172] (0xc0009a4630) Data frame received for 5\nI0427 13:14:18.737960 127 log.go:172] (0xc000a460a0) (5) Data frame handling\nI0427 13:14:18.740165 127 log.go:172] (0xc0009a4630) Data frame received for 1\nI0427 13:14:18.740191 127 log.go:172] (0xc000534b40) (1) Data frame handling\nI0427 13:14:18.740201 127 log.go:172] (0xc000534b40) (1) Data frame sent\nI0427 13:14:18.740215 127 log.go:172] (0xc0009a4630) (0xc000534b40) Stream removed, broadcasting: 1\nI0427 13:14:18.740230 127 log.go:172] (0xc0009a4630) Go away received\nI0427 13:14:18.740742 127 log.go:172] (0xc0009a4630) (0xc000534b40) Stream removed, broadcasting: 1\nI0427 13:14:18.740766 127 log.go:172] (0xc0009a4630) (0xc000a46000) Stream removed, broadcasting: 3\nI0427 13:14:18.740778 127 log.go:172] (0xc0009a4630) (0xc000a460a0) Stream removed, broadcasting: 5\n" Apr 27 
13:14:18.745: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 27 13:14:18.745: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 27 13:14:18.745: INFO: Waiting for statefulset status.replicas updated to 0 Apr 27 13:14:18.749: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 27 13:14:28.758: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 27 13:14:28.758: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 27 13:14:28.758: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 27 13:14:28.771: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999691s Apr 27 13:14:29.775: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993402814s Apr 27 13:14:30.780: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989618805s Apr 27 13:14:31.786: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.984466368s Apr 27 13:14:32.791: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.979172478s Apr 27 13:14:33.796: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.973374549s Apr 27 13:14:34.801: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.96909155s Apr 27 13:14:35.810: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.963755842s Apr 27 13:14:36.815: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.954997421s Apr 27 13:14:37.820: INFO: Verifying statefulset ss doesn't scale past 3 for another 949.626046ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-6420 Apr 27 13:14:38.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6420 ss-0 -- /bin/sh -x 
-c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 27 13:14:40.267: INFO: stderr: "I0427 13:14:40.187792 146 log.go:172] (0xc0003d6630) (0xc0006b8b40) Create stream\nI0427 13:14:40.187850 146 log.go:172] (0xc0003d6630) (0xc0006b8b40) Stream added, broadcasting: 1\nI0427 13:14:40.191504 146 log.go:172] (0xc0003d6630) Reply frame received for 1\nI0427 13:14:40.191545 146 log.go:172] (0xc0003d6630) (0xc0006b8280) Create stream\nI0427 13:14:40.191558 146 log.go:172] (0xc0003d6630) (0xc0006b8280) Stream added, broadcasting: 3\nI0427 13:14:40.192546 146 log.go:172] (0xc0003d6630) Reply frame received for 3\nI0427 13:14:40.192626 146 log.go:172] (0xc0003d6630) (0xc000186000) Create stream\nI0427 13:14:40.192659 146 log.go:172] (0xc0003d6630) (0xc000186000) Stream added, broadcasting: 5\nI0427 13:14:40.193837 146 log.go:172] (0xc0003d6630) Reply frame received for 5\nI0427 13:14:40.260971 146 log.go:172] (0xc0003d6630) Data frame received for 5\nI0427 13:14:40.260996 146 log.go:172] (0xc000186000) (5) Data frame handling\nI0427 13:14:40.261008 146 log.go:172] (0xc000186000) (5) Data frame sent\nI0427 13:14:40.261016 146 log.go:172] (0xc0003d6630) Data frame received for 5\nI0427 13:14:40.261022 146 log.go:172] (0xc000186000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0427 13:14:40.261046 146 log.go:172] (0xc0003d6630) Data frame received for 3\nI0427 13:14:40.261068 146 log.go:172] (0xc0006b8280) (3) Data frame handling\nI0427 13:14:40.261106 146 log.go:172] (0xc0006b8280) (3) Data frame sent\nI0427 13:14:40.261286 146 log.go:172] (0xc0003d6630) Data frame received for 3\nI0427 13:14:40.261303 146 log.go:172] (0xc0006b8280) (3) Data frame handling\nI0427 13:14:40.262499 146 log.go:172] (0xc0003d6630) Data frame received for 1\nI0427 13:14:40.262536 146 log.go:172] (0xc0006b8b40) (1) Data frame handling\nI0427 13:14:40.262578 146 log.go:172] (0xc0006b8b40) (1) Data frame sent\nI0427 13:14:40.262595 146 log.go:172] (0xc0003d6630) 
(0xc0006b8b40) Stream removed, broadcasting: 1\nI0427 13:14:40.262736 146 log.go:172] (0xc0003d6630) Go away received\nI0427 13:14:40.262962 146 log.go:172] (0xc0003d6630) (0xc0006b8b40) Stream removed, broadcasting: 1\nI0427 13:14:40.262980 146 log.go:172] (0xc0003d6630) (0xc0006b8280) Stream removed, broadcasting: 3\nI0427 13:14:40.262998 146 log.go:172] (0xc0003d6630) (0xc000186000) Stream removed, broadcasting: 5\n" Apr 27 13:14:40.267: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 27 13:14:40.267: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 27 13:14:40.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6420 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 27 13:14:40.467: INFO: stderr: "I0427 13:14:40.391357 166 log.go:172] (0xc000770a50) (0xc000754aa0) Create stream\nI0427 13:14:40.391435 166 log.go:172] (0xc000770a50) (0xc000754aa0) Stream added, broadcasting: 1\nI0427 13:14:40.393659 166 log.go:172] (0xc000770a50) Reply frame received for 1\nI0427 13:14:40.393696 166 log.go:172] (0xc000770a50) (0xc000748000) Create stream\nI0427 13:14:40.393709 166 log.go:172] (0xc000770a50) (0xc000748000) Stream added, broadcasting: 3\nI0427 13:14:40.394587 166 log.go:172] (0xc000770a50) Reply frame received for 3\nI0427 13:14:40.394621 166 log.go:172] (0xc000770a50) (0xc000754b40) Create stream\nI0427 13:14:40.394639 166 log.go:172] (0xc000770a50) (0xc000754b40) Stream added, broadcasting: 5\nI0427 13:14:40.395345 166 log.go:172] (0xc000770a50) Reply frame received for 5\nI0427 13:14:40.461010 166 log.go:172] (0xc000770a50) Data frame received for 5\nI0427 13:14:40.461059 166 log.go:172] (0xc000754b40) (5) Data frame handling\nI0427 13:14:40.461071 166 log.go:172] (0xc000754b40) (5) Data frame sent\nI0427 13:14:40.461079 166 log.go:172] (0xc000770a50) Data frame 
received for 5\nI0427 13:14:40.461091 166 log.go:172] (0xc000754b40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0427 13:14:40.461261 166 log.go:172] (0xc000770a50) Data frame received for 3\nI0427 13:14:40.461290 166 log.go:172] (0xc000748000) (3) Data frame handling\nI0427 13:14:40.461301 166 log.go:172] (0xc000748000) (3) Data frame sent\nI0427 13:14:40.461414 166 log.go:172] (0xc000770a50) Data frame received for 3\nI0427 13:14:40.461424 166 log.go:172] (0xc000748000) (3) Data frame handling\nI0427 13:14:40.462828 166 log.go:172] (0xc000770a50) Data frame received for 1\nI0427 13:14:40.462858 166 log.go:172] (0xc000754aa0) (1) Data frame handling\nI0427 13:14:40.462874 166 log.go:172] (0xc000754aa0) (1) Data frame sent\nI0427 13:14:40.462906 166 log.go:172] (0xc000770a50) (0xc000754aa0) Stream removed, broadcasting: 1\nI0427 13:14:40.462937 166 log.go:172] (0xc000770a50) Go away received\nI0427 13:14:40.463275 166 log.go:172] (0xc000770a50) (0xc000754aa0) Stream removed, broadcasting: 1\nI0427 13:14:40.463291 166 log.go:172] (0xc000770a50) (0xc000748000) Stream removed, broadcasting: 3\nI0427 13:14:40.463302 166 log.go:172] (0xc000770a50) (0xc000754b40) Stream removed, broadcasting: 5\n" Apr 27 13:14:40.467: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 27 13:14:40.467: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 27 13:14:40.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6420 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 27 13:14:40.657: INFO: stderr: "I0427 13:14:40.587963 186 log.go:172] (0xc0009d8420) (0xc0009a6780) Create stream\nI0427 13:14:40.588051 186 log.go:172] (0xc0009d8420) (0xc0009a6780) Stream added, broadcasting: 1\nI0427 13:14:40.590870 186 log.go:172] (0xc0009d8420) Reply frame received for 1\nI0427 
13:14:40.590935 186 log.go:172] (0xc0009d8420) (0xc0005380a0) Create stream\nI0427 13:14:40.590957 186 log.go:172] (0xc0009d8420) (0xc0005380a0) Stream added, broadcasting: 3\nI0427 13:14:40.591813 186 log.go:172] (0xc0009d8420) Reply frame received for 3\nI0427 13:14:40.591855 186 log.go:172] (0xc0009d8420) (0xc00090c000) Create stream\nI0427 13:14:40.591875 186 log.go:172] (0xc0009d8420) (0xc00090c000) Stream added, broadcasting: 5\nI0427 13:14:40.592676 186 log.go:172] (0xc0009d8420) Reply frame received for 5\nI0427 13:14:40.649504 186 log.go:172] (0xc0009d8420) Data frame received for 5\nI0427 13:14:40.649550 186 log.go:172] (0xc00090c000) (5) Data frame handling\nI0427 13:14:40.649565 186 log.go:172] (0xc00090c000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0427 13:14:40.649585 186 log.go:172] (0xc0009d8420) Data frame received for 3\nI0427 13:14:40.649597 186 log.go:172] (0xc0005380a0) (3) Data frame handling\nI0427 13:14:40.649619 186 log.go:172] (0xc0009d8420) Data frame received for 5\nI0427 13:14:40.649649 186 log.go:172] (0xc00090c000) (5) Data frame handling\nI0427 13:14:40.649688 186 log.go:172] (0xc0005380a0) (3) Data frame sent\nI0427 13:14:40.649737 186 log.go:172] (0xc0009d8420) Data frame received for 3\nI0427 13:14:40.649757 186 log.go:172] (0xc0005380a0) (3) Data frame handling\nI0427 13:14:40.651296 186 log.go:172] (0xc0009d8420) Data frame received for 1\nI0427 13:14:40.651319 186 log.go:172] (0xc0009a6780) (1) Data frame handling\nI0427 13:14:40.651342 186 log.go:172] (0xc0009a6780) (1) Data frame sent\nI0427 13:14:40.651357 186 log.go:172] (0xc0009d8420) (0xc0009a6780) Stream removed, broadcasting: 1\nI0427 13:14:40.651546 186 log.go:172] (0xc0009d8420) Go away received\nI0427 13:14:40.651743 186 log.go:172] (0xc0009d8420) (0xc0009a6780) Stream removed, broadcasting: 1\nI0427 13:14:40.651766 186 log.go:172] (0xc0009d8420) (0xc0005380a0) Stream removed, broadcasting: 3\nI0427 13:14:40.651783 186 log.go:172] 
(0xc0009d8420) (0xc00090c000) Stream removed, broadcasting: 5\n" Apr 27 13:14:40.657: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 27 13:14:40.657: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 27 13:14:40.657: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 27 13:15:00.674: INFO: Deleting all statefulset in ns statefulset-6420 Apr 27 13:15:00.677: INFO: Scaling statefulset ss to 0 Apr 27 13:15:00.684: INFO: Waiting for statefulset status.replicas updated to 0 Apr 27 13:15:00.693: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:15:00.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6420" for this suite. 
Apr 27 13:15:06.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:15:06.788: INFO: namespace statefulset-6420 deletion completed in 6.083287918s • [SLOW TEST:92.092 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:15:06.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-1629 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 27 13:15:06.869: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 27 13:15:32.994: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.1.26:8080/dial?request=hostName&protocol=udp&host=10.244.2.67&port=8081&tries=1'] Namespace:pod-network-test-1629 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 27 13:15:32.994: INFO: >>> kubeConfig: /root/.kube/config I0427 13:15:33.033077 6 log.go:172] (0xc0009d6420) (0xc001b16780) Create stream I0427 13:15:33.033102 6 log.go:172] (0xc0009d6420) (0xc001b16780) Stream added, broadcasting: 1 I0427 13:15:33.035269 6 log.go:172] (0xc0009d6420) Reply frame received for 1 I0427 13:15:33.035311 6 log.go:172] (0xc0009d6420) (0xc00147e960) Create stream I0427 13:15:33.035324 6 log.go:172] (0xc0009d6420) (0xc00147e960) Stream added, broadcasting: 3 I0427 13:15:33.036223 6 log.go:172] (0xc0009d6420) Reply frame received for 3 I0427 13:15:33.036272 6 log.go:172] (0xc0009d6420) (0xc001c16320) Create stream I0427 13:15:33.036290 6 log.go:172] (0xc0009d6420) (0xc001c16320) Stream added, broadcasting: 5 I0427 13:15:33.037485 6 log.go:172] (0xc0009d6420) Reply frame received for 5 I0427 13:15:33.142072 6 log.go:172] (0xc0009d6420) Data frame received for 3 I0427 13:15:33.142119 6 log.go:172] (0xc00147e960) (3) Data frame handling I0427 13:15:33.142157 6 log.go:172] (0xc00147e960) (3) Data frame sent I0427 13:15:33.142232 6 log.go:172] (0xc0009d6420) Data frame received for 3 I0427 13:15:33.142272 6 log.go:172] (0xc00147e960) (3) Data frame handling I0427 13:15:33.142884 6 log.go:172] (0xc0009d6420) Data frame received for 5 I0427 13:15:33.142904 6 log.go:172] (0xc001c16320) (5) Data frame handling I0427 13:15:33.144670 6 log.go:172] (0xc0009d6420) Data frame received for 1 I0427 13:15:33.144685 6 log.go:172] (0xc001b16780) (1) Data frame handling I0427 13:15:33.144694 6 log.go:172] (0xc001b16780) (1) Data frame sent I0427 13:15:33.144709 6 log.go:172] (0xc0009d6420) (0xc001b16780) Stream removed, broadcasting: 1 I0427 13:15:33.144719 6 log.go:172] (0xc0009d6420) Go away received 
I0427 13:15:33.145429 6 log.go:172] (0xc0009d6420) (0xc001b16780) Stream removed, broadcasting: 1 I0427 13:15:33.145455 6 log.go:172] (0xc0009d6420) (0xc00147e960) Stream removed, broadcasting: 3 I0427 13:15:33.145473 6 log.go:172] (0xc0009d6420) (0xc001c16320) Stream removed, broadcasting: 5 Apr 27 13:15:33.145: INFO: Waiting for endpoints: map[] Apr 27 13:15:33.149: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.26:8080/dial?request=hostName&protocol=udp&host=10.244.1.25&port=8081&tries=1'] Namespace:pod-network-test-1629 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 27 13:15:33.149: INFO: >>> kubeConfig: /root/.kube/config I0427 13:15:33.183388 6 log.go:172] (0xc00087c9a0) (0xc00147ee60) Create stream I0427 13:15:33.183435 6 log.go:172] (0xc00087c9a0) (0xc00147ee60) Stream added, broadcasting: 1 I0427 13:15:33.185650 6 log.go:172] (0xc00087c9a0) Reply frame received for 1 I0427 13:15:33.185679 6 log.go:172] (0xc00087c9a0) (0xc00147ef00) Create stream I0427 13:15:33.185689 6 log.go:172] (0xc00087c9a0) (0xc00147ef00) Stream added, broadcasting: 3 I0427 13:15:33.186667 6 log.go:172] (0xc00087c9a0) Reply frame received for 3 I0427 13:15:33.186712 6 log.go:172] (0xc00087c9a0) (0xc000518dc0) Create stream I0427 13:15:33.186736 6 log.go:172] (0xc00087c9a0) (0xc000518dc0) Stream added, broadcasting: 5 I0427 13:15:33.188037 6 log.go:172] (0xc00087c9a0) Reply frame received for 5 I0427 13:15:33.253983 6 log.go:172] (0xc00087c9a0) Data frame received for 3 I0427 13:15:33.254010 6 log.go:172] (0xc00147ef00) (3) Data frame handling I0427 13:15:33.254025 6 log.go:172] (0xc00147ef00) (3) Data frame sent I0427 13:15:33.254424 6 log.go:172] (0xc00087c9a0) Data frame received for 3 I0427 13:15:33.254461 6 log.go:172] (0xc00147ef00) (3) Data frame handling I0427 13:15:33.254479 6 log.go:172] (0xc00087c9a0) Data frame received for 5 I0427 13:15:33.254497 6 log.go:172] 
(0xc000518dc0) (5) Data frame handling I0427 13:15:33.255809 6 log.go:172] (0xc00087c9a0) Data frame received for 1 I0427 13:15:33.255830 6 log.go:172] (0xc00147ee60) (1) Data frame handling I0427 13:15:33.255851 6 log.go:172] (0xc00147ee60) (1) Data frame sent I0427 13:15:33.255895 6 log.go:172] (0xc00087c9a0) (0xc00147ee60) Stream removed, broadcasting: 1 I0427 13:15:33.255955 6 log.go:172] (0xc00087c9a0) Go away received I0427 13:15:33.255999 6 log.go:172] (0xc00087c9a0) (0xc00147ee60) Stream removed, broadcasting: 1 I0427 13:15:33.256019 6 log.go:172] (0xc00087c9a0) (0xc00147ef00) Stream removed, broadcasting: 3 I0427 13:15:33.256036 6 log.go:172] (0xc00087c9a0) (0xc000518dc0) Stream removed, broadcasting: 5 Apr 27 13:15:33.256: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:15:33.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1629" for this suite. 
Apr 27 13:15:55.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:15:55.399: INFO: namespace pod-network-test-1629 deletion completed in 22.129919339s • [SLOW TEST:48.610 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:15:55.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-a238e630-d0d0-4083-8771-2657abb970e7 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-a238e630-d0d0-4083-8771-2657abb970e7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:17:20.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "projected-6663" for this suite. Apr 27 13:17:42.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:17:42.572: INFO: namespace projected-6663 deletion completed in 22.100414901s • [SLOW TEST:107.173 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:17:42.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-6909dc2f-b458-4cc2-9d79-22b15b08dba7 STEP: Creating configMap with name cm-test-opt-upd-51eda301-a56b-4949-bd92-5b54ac295353 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-6909dc2f-b458-4cc2-9d79-22b15b08dba7 STEP: Updating configmap cm-test-opt-upd-51eda301-a56b-4949-bd92-5b54ac295353 STEP: Creating configMap with name cm-test-opt-create-d83169b4-d689-4640-a982-9e220a7e3324 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:18:55.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1612" for this suite. Apr 27 13:19:17.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:19:17.134: INFO: namespace configmap-1612 deletion completed in 22.082586247s • [SLOW TEST:94.562 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:19:17.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0427 13:19:47.735162 6 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 27 13:19:47.735: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:19:47.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4829" for this suite. 
Apr 27 13:19:53.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:19:53.820: INFO: namespace gc-4829 deletion completed in 6.081365443s • [SLOW TEST:36.685 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:19:53.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 27 13:20:01.928: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 27 13:20:01.937: INFO: Pod pod-with-prestop-exec-hook still exists Apr 27 13:20:03.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 27 13:20:03.941: INFO: Pod pod-with-prestop-exec-hook still exists Apr 27 13:20:05.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 27 13:20:05.942: INFO: Pod pod-with-prestop-exec-hook still exists Apr 27 13:20:07.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 27 13:20:07.942: INFO: Pod pod-with-prestop-exec-hook still exists Apr 27 13:20:09.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 27 13:20:09.943: INFO: Pod pod-with-prestop-exec-hook still exists Apr 27 13:20:11.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 27 13:20:11.942: INFO: Pod pod-with-prestop-exec-hook still exists Apr 27 13:20:13.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 27 13:20:13.942: INFO: Pod pod-with-prestop-exec-hook still exists Apr 27 13:20:15.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 27 13:20:15.942: INFO: Pod pod-with-prestop-exec-hook still exists Apr 27 13:20:17.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 27 13:20:17.943: INFO: Pod pod-with-prestop-exec-hook still exists Apr 27 13:20:19.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 27 13:20:19.942: INFO: Pod pod-with-prestop-exec-hook still exists Apr 27 13:20:21.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 27 13:20:21.943: INFO: Pod pod-with-prestop-exec-hook still exists Apr 27 13:20:23.938: INFO: Waiting for pod 
pod-with-prestop-exec-hook to disappear Apr 27 13:20:23.942: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:20:23.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9253" for this suite. Apr 27 13:20:45.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:20:46.042: INFO: namespace container-lifecycle-hook-9253 deletion completed in 22.089529304s • [SLOW TEST:52.222 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:20:46.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Apr 27 13:20:46.118: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9709" to be "success or failure" Apr 27 13:20:46.122: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.4332ms Apr 27 13:20:48.126: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007894617s Apr 27 13:20:50.130: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012467416s Apr 27 13:20:52.134: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016495039s STEP: Saw pod success Apr 27 13:20:52.134: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Apr 27 13:20:52.137: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Apr 27 13:20:52.153: INFO: Waiting for pod pod-host-path-test to disappear Apr 27 13:20:52.158: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:20:52.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-9709" for this suite. 
Apr 27 13:20:58.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:20:58.253: INFO: namespace hostpath-9709 deletion completed in 6.093121651s • [SLOW TEST:12.211 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:20:58.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 27 13:20:58.337: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 6.105074ms) Apr 27 13:20:58.341: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.558187ms) Apr 27 13:20:58.344: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.816996ms) Apr 27 13:20:58.348: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.451259ms) Apr 27 13:20:58.351: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.587022ms) Apr 27 13:20:58.355: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.808896ms) Apr 27 13:20:58.358: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.116248ms) Apr 27 13:20:58.362: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.064488ms) Apr 27 13:20:58.364: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.886237ms) Apr 27 13:20:58.368: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.292344ms) Apr 27 13:20:58.371: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.002263ms) Apr 27 13:20:58.374: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.988938ms) Apr 27 13:20:58.377: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.120772ms) Apr 27 13:20:58.380: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.238913ms) Apr 27 13:20:58.383: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.099114ms) Apr 27 13:20:58.387: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.402126ms) Apr 27 13:20:58.391: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.932325ms) Apr 27 13:20:58.394: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.022909ms) Apr 27 13:20:58.397: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.563952ms) Apr 27 13:20:58.399: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/
(200; 2.824164ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:20:58.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7167" for this suite. Apr 27 13:21:04.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:21:04.487: INFO: namespace proxy-7167 deletion completed in 6.084847062s • [SLOW TEST:6.234 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:21:04.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Apr 27 13:21:04.562: INFO: Waiting up to 5m0s for pod "pod-80e301f6-630d-4f17-a4e2-589c1fdfc2ac" in namespace "emptydir-3671" to be "success or failure" Apr 27 13:21:04.567: 
INFO: Pod "pod-80e301f6-630d-4f17-a4e2-589c1fdfc2ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.602954ms Apr 27 13:21:06.572: INFO: Pod "pod-80e301f6-630d-4f17-a4e2-589c1fdfc2ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009312639s Apr 27 13:21:08.576: INFO: Pod "pod-80e301f6-630d-4f17-a4e2-589c1fdfc2ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013705979s STEP: Saw pod success Apr 27 13:21:08.576: INFO: Pod "pod-80e301f6-630d-4f17-a4e2-589c1fdfc2ac" satisfied condition "success or failure" Apr 27 13:21:08.580: INFO: Trying to get logs from node iruya-worker pod pod-80e301f6-630d-4f17-a4e2-589c1fdfc2ac container test-container: STEP: delete the pod Apr 27 13:21:08.599: INFO: Waiting for pod pod-80e301f6-630d-4f17-a4e2-589c1fdfc2ac to disappear Apr 27 13:21:08.603: INFO: Pod pod-80e301f6-630d-4f17-a4e2-589c1fdfc2ac no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:21:08.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3671" for this suite. 
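The EmptyDir case above follows the same run-to-completion pattern, with the volume backed by the node's default storage medium. A hedged sketch of such a pod (image, command, and mount path are assumptions; only the test's intent is taken from the log):

```yaml
# Sketch of an emptyDir default-medium mode-check pod (illustrative values).
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-mode-test
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                # default medium = node disk; no "medium: Memory"
```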
Apr 27 13:21:14.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:21:14.720: INFO: namespace emptydir-3671 deletion completed in 6.112975581s • [SLOW TEST:10.232 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:21:14.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 27 13:21:24.831: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4129 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 27 13:21:24.831: INFO: >>> kubeConfig: /root/.kube/config I0427 13:21:24.863335 6 log.go:172] (0xc0011bcc60) (0xc002428960) Create 
stream I0427 13:21:24.863361 6 log.go:172] (0xc0011bcc60) (0xc002428960) Stream added, broadcasting: 1 I0427 13:21:24.864867 6 log.go:172] (0xc0011bcc60) Reply frame received for 1 I0427 13:21:24.864914 6 log.go:172] (0xc0011bcc60) (0xc002428a00) Create stream I0427 13:21:24.864936 6 log.go:172] (0xc0011bcc60) (0xc002428a00) Stream added, broadcasting: 3 I0427 13:21:24.866089 6 log.go:172] (0xc0011bcc60) Reply frame received for 3 I0427 13:21:24.866143 6 log.go:172] (0xc0011bcc60) (0xc002428aa0) Create stream I0427 13:21:24.866161 6 log.go:172] (0xc0011bcc60) (0xc002428aa0) Stream added, broadcasting: 5 I0427 13:21:24.866955 6 log.go:172] (0xc0011bcc60) Reply frame received for 5 I0427 13:21:24.937288 6 log.go:172] (0xc0011bcc60) Data frame received for 5 I0427 13:21:24.937317 6 log.go:172] (0xc002428aa0) (5) Data frame handling I0427 13:21:24.937376 6 log.go:172] (0xc0011bcc60) Data frame received for 3 I0427 13:21:24.937413 6 log.go:172] (0xc002428a00) (3) Data frame handling I0427 13:21:24.937443 6 log.go:172] (0xc002428a00) (3) Data frame sent I0427 13:21:24.937460 6 log.go:172] (0xc0011bcc60) Data frame received for 3 I0427 13:21:24.937474 6 log.go:172] (0xc002428a00) (3) Data frame handling I0427 13:21:24.938258 6 log.go:172] (0xc0011bcc60) Data frame received for 1 I0427 13:21:24.938270 6 log.go:172] (0xc002428960) (1) Data frame handling I0427 13:21:24.938279 6 log.go:172] (0xc002428960) (1) Data frame sent I0427 13:21:24.938300 6 log.go:172] (0xc0011bcc60) (0xc002428960) Stream removed, broadcasting: 1 I0427 13:21:24.938343 6 log.go:172] (0xc0011bcc60) Go away received I0427 13:21:24.938478 6 log.go:172] (0xc0011bcc60) (0xc002428960) Stream removed, broadcasting: 1 I0427 13:21:24.938505 6 log.go:172] (0xc0011bcc60) (0xc002428a00) Stream removed, broadcasting: 3 I0427 13:21:24.938516 6 log.go:172] (0xc0011bcc60) (0xc002428aa0) Stream removed, broadcasting: 5 Apr 27 13:21:24.938: INFO: Exec stderr: "" Apr 27 13:21:24.938: INFO: ExecWithOptions {Command:[cat 
/etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4129 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 27 13:21:24.938: INFO: >>> kubeConfig: /root/.kube/config I0427 13:21:24.969879 6 log.go:172] (0xc00273a6e0) (0xc002dd5400) Create stream I0427 13:21:24.969912 6 log.go:172] (0xc00273a6e0) (0xc002dd5400) Stream added, broadcasting: 1 I0427 13:21:24.972597 6 log.go:172] (0xc00273a6e0) Reply frame received for 1 I0427 13:21:24.972656 6 log.go:172] (0xc00273a6e0) (0xc002dd54a0) Create stream I0427 13:21:24.972691 6 log.go:172] (0xc00273a6e0) (0xc002dd54a0) Stream added, broadcasting: 3 I0427 13:21:24.973770 6 log.go:172] (0xc00273a6e0) Reply frame received for 3 I0427 13:21:24.973810 6 log.go:172] (0xc00273a6e0) (0xc002dd0640) Create stream I0427 13:21:24.973824 6 log.go:172] (0xc00273a6e0) (0xc002dd0640) Stream added, broadcasting: 5 I0427 13:21:24.974852 6 log.go:172] (0xc00273a6e0) Reply frame received for 5 I0427 13:21:25.042714 6 log.go:172] (0xc00273a6e0) Data frame received for 5 I0427 13:21:25.042772 6 log.go:172] (0xc002dd0640) (5) Data frame handling I0427 13:21:25.042810 6 log.go:172] (0xc00273a6e0) Data frame received for 3 I0427 13:21:25.042830 6 log.go:172] (0xc002dd54a0) (3) Data frame handling I0427 13:21:25.042859 6 log.go:172] (0xc002dd54a0) (3) Data frame sent I0427 13:21:25.042893 6 log.go:172] (0xc00273a6e0) Data frame received for 3 I0427 13:21:25.042912 6 log.go:172] (0xc002dd54a0) (3) Data frame handling I0427 13:21:25.044087 6 log.go:172] (0xc00273a6e0) Data frame received for 1 I0427 13:21:25.044122 6 log.go:172] (0xc002dd5400) (1) Data frame handling I0427 13:21:25.044133 6 log.go:172] (0xc002dd5400) (1) Data frame sent I0427 13:21:25.044145 6 log.go:172] (0xc00273a6e0) (0xc002dd5400) Stream removed, broadcasting: 1 I0427 13:21:25.044175 6 log.go:172] (0xc00273a6e0) Go away received I0427 13:21:25.044382 6 log.go:172] (0xc00273a6e0) (0xc002dd5400) Stream removed, 
broadcasting: 1 I0427 13:21:25.044405 6 log.go:172] (0xc00273a6e0) (0xc002dd54a0) Stream removed, broadcasting: 3 I0427 13:21:25.044414 6 log.go:172] (0xc00273a6e0) (0xc002dd0640) Stream removed, broadcasting: 5 Apr 27 13:21:25.044: INFO: Exec stderr: "" Apr 27 13:21:25.044: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4129 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 27 13:21:25.044: INFO: >>> kubeConfig: /root/.kube/config I0427 13:21:25.078584 6 log.go:172] (0xc001efa630) (0xc002e7a280) Create stream I0427 13:21:25.078613 6 log.go:172] (0xc001efa630) (0xc002e7a280) Stream added, broadcasting: 1 I0427 13:21:25.082608 6 log.go:172] (0xc001efa630) Reply frame received for 1 I0427 13:21:25.082641 6 log.go:172] (0xc001efa630) (0xc002428b40) Create stream I0427 13:21:25.082658 6 log.go:172] (0xc001efa630) (0xc002428b40) Stream added, broadcasting: 3 I0427 13:21:25.083692 6 log.go:172] (0xc001efa630) Reply frame received for 3 I0427 13:21:25.083729 6 log.go:172] (0xc001efa630) (0xc001b161e0) Create stream I0427 13:21:25.083742 6 log.go:172] (0xc001efa630) (0xc001b161e0) Stream added, broadcasting: 5 I0427 13:21:25.084895 6 log.go:172] (0xc001efa630) Reply frame received for 5 I0427 13:21:25.155826 6 log.go:172] (0xc001efa630) Data frame received for 5 I0427 13:21:25.155872 6 log.go:172] (0xc001b161e0) (5) Data frame handling I0427 13:21:25.155903 6 log.go:172] (0xc001efa630) Data frame received for 3 I0427 13:21:25.155923 6 log.go:172] (0xc002428b40) (3) Data frame handling I0427 13:21:25.155935 6 log.go:172] (0xc002428b40) (3) Data frame sent I0427 13:21:25.155946 6 log.go:172] (0xc001efa630) Data frame received for 3 I0427 13:21:25.155958 6 log.go:172] (0xc002428b40) (3) Data frame handling I0427 13:21:25.157520 6 log.go:172] (0xc001efa630) Data frame received for 1 I0427 13:21:25.157559 6 log.go:172] (0xc002e7a280) (1) Data frame handling I0427 13:21:25.157595 6 
log.go:172] (0xc002e7a280) (1) Data frame sent I0427 13:21:25.157613 6 log.go:172] (0xc001efa630) (0xc002e7a280) Stream removed, broadcasting: 1 I0427 13:21:25.157638 6 log.go:172] (0xc001efa630) Go away received I0427 13:21:25.157770 6 log.go:172] (0xc001efa630) (0xc002e7a280) Stream removed, broadcasting: 1 I0427 13:21:25.157796 6 log.go:172] (0xc001efa630) (0xc002428b40) Stream removed, broadcasting: 3 I0427 13:21:25.157812 6 log.go:172] (0xc001efa630) (0xc001b161e0) Stream removed, broadcasting: 5 Apr 27 13:21:25.157: INFO: Exec stderr: "" Apr 27 13:21:25.157: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4129 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 27 13:21:25.157: INFO: >>> kubeConfig: /root/.kube/config I0427 13:21:25.192105 6 log.go:172] (0xc00273b970) (0xc002dd57c0) Create stream I0427 13:21:25.192145 6 log.go:172] (0xc00273b970) (0xc002dd57c0) Stream added, broadcasting: 1 I0427 13:21:25.196661 6 log.go:172] (0xc00273b970) Reply frame received for 1 I0427 13:21:25.196712 6 log.go:172] (0xc00273b970) (0xc001b16280) Create stream I0427 13:21:25.196734 6 log.go:172] (0xc00273b970) (0xc001b16280) Stream added, broadcasting: 3 I0427 13:21:25.198240 6 log.go:172] (0xc00273b970) Reply frame received for 3 I0427 13:21:25.198281 6 log.go:172] (0xc00273b970) (0xc001b163c0) Create stream I0427 13:21:25.198296 6 log.go:172] (0xc00273b970) (0xc001b163c0) Stream added, broadcasting: 5 I0427 13:21:25.199289 6 log.go:172] (0xc00273b970) Reply frame received for 5 I0427 13:21:25.259321 6 log.go:172] (0xc00273b970) Data frame received for 5 I0427 13:21:25.259352 6 log.go:172] (0xc001b163c0) (5) Data frame handling I0427 13:21:25.259367 6 log.go:172] (0xc00273b970) Data frame received for 3 I0427 13:21:25.259387 6 log.go:172] (0xc001b16280) (3) Data frame handling I0427 13:21:25.259449 6 log.go:172] (0xc001b16280) (3) Data frame sent I0427 13:21:25.259464 6 
log.go:172] (0xc00273b970) Data frame received for 3 I0427 13:21:25.259478 6 log.go:172] (0xc001b16280) (3) Data frame handling I0427 13:21:25.260418 6 log.go:172] (0xc00273b970) Data frame received for 1 I0427 13:21:25.260442 6 log.go:172] (0xc002dd57c0) (1) Data frame handling I0427 13:21:25.260464 6 log.go:172] (0xc002dd57c0) (1) Data frame sent I0427 13:21:25.260483 6 log.go:172] (0xc00273b970) (0xc002dd57c0) Stream removed, broadcasting: 1 I0427 13:21:25.260513 6 log.go:172] (0xc00273b970) Go away received I0427 13:21:25.260582 6 log.go:172] (0xc00273b970) (0xc002dd57c0) Stream removed, broadcasting: 1 I0427 13:21:25.260598 6 log.go:172] (0xc00273b970) (0xc001b16280) Stream removed, broadcasting: 3 I0427 13:21:25.260608 6 log.go:172] (0xc00273b970) (0xc001b163c0) Stream removed, broadcasting: 5 Apr 27 13:21:25.260: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 27 13:21:25.260: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4129 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 27 13:21:25.260: INFO: >>> kubeConfig: /root/.kube/config I0427 13:21:25.289967 6 log.go:172] (0xc001217ad0) (0xc001b168c0) Create stream I0427 13:21:25.290000 6 log.go:172] (0xc001217ad0) (0xc001b168c0) Stream added, broadcasting: 1 I0427 13:21:25.293305 6 log.go:172] (0xc001217ad0) Reply frame received for 1 I0427 13:21:25.293366 6 log.go:172] (0xc001217ad0) (0xc001b16aa0) Create stream I0427 13:21:25.293382 6 log.go:172] (0xc001217ad0) (0xc001b16aa0) Stream added, broadcasting: 3 I0427 13:21:25.294495 6 log.go:172] (0xc001217ad0) Reply frame received for 3 I0427 13:21:25.294545 6 log.go:172] (0xc001217ad0) (0xc002dd06e0) Create stream I0427 13:21:25.294570 6 log.go:172] (0xc001217ad0) (0xc002dd06e0) Stream added, broadcasting: 5 I0427 13:21:25.295628 6 log.go:172] (0xc001217ad0) Reply frame received for 
5 I0427 13:21:25.362729 6 log.go:172] (0xc001217ad0) Data frame received for 3 I0427 13:21:25.362776 6 log.go:172] (0xc001b16aa0) (3) Data frame handling I0427 13:21:25.362808 6 log.go:172] (0xc001b16aa0) (3) Data frame sent I0427 13:21:25.362829 6 log.go:172] (0xc001217ad0) Data frame received for 3 I0427 13:21:25.362859 6 log.go:172] (0xc001b16aa0) (3) Data frame handling I0427 13:21:25.363126 6 log.go:172] (0xc001217ad0) Data frame received for 5 I0427 13:21:25.363142 6 log.go:172] (0xc002dd06e0) (5) Data frame handling I0427 13:21:25.364547 6 log.go:172] (0xc001217ad0) Data frame received for 1 I0427 13:21:25.364574 6 log.go:172] (0xc001b168c0) (1) Data frame handling I0427 13:21:25.364614 6 log.go:172] (0xc001b168c0) (1) Data frame sent I0427 13:21:25.364631 6 log.go:172] (0xc001217ad0) (0xc001b168c0) Stream removed, broadcasting: 1 I0427 13:21:25.364701 6 log.go:172] (0xc001217ad0) Go away received I0427 13:21:25.364803 6 log.go:172] (0xc001217ad0) (0xc001b168c0) Stream removed, broadcasting: 1 I0427 13:21:25.364878 6 log.go:172] (0xc001217ad0) (0xc001b16aa0) Stream removed, broadcasting: 3 I0427 13:21:25.364898 6 log.go:172] (0xc001217ad0) (0xc002dd06e0) Stream removed, broadcasting: 5 Apr 27 13:21:25.364: INFO: Exec stderr: "" Apr 27 13:21:25.365: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4129 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 27 13:21:25.365: INFO: >>> kubeConfig: /root/.kube/config I0427 13:21:25.398933 6 log.go:172] (0xc0013d7550) (0xc002dd0a00) Create stream I0427 13:21:25.398958 6 log.go:172] (0xc0013d7550) (0xc002dd0a00) Stream added, broadcasting: 1 I0427 13:21:25.401812 6 log.go:172] (0xc0013d7550) Reply frame received for 1 I0427 13:21:25.401848 6 log.go:172] (0xc0013d7550) (0xc002dd0aa0) Create stream I0427 13:21:25.401860 6 log.go:172] (0xc0013d7550) (0xc002dd0aa0) Stream added, broadcasting: 3 I0427 13:21:25.402649 6 
log.go:172] (0xc0013d7550) Reply frame received for 3 I0427 13:21:25.402692 6 log.go:172] (0xc0013d7550) (0xc002dd0b40) Create stream I0427 13:21:25.402710 6 log.go:172] (0xc0013d7550) (0xc002dd0b40) Stream added, broadcasting: 5 I0427 13:21:25.403635 6 log.go:172] (0xc0013d7550) Reply frame received for 5 I0427 13:21:25.474295 6 log.go:172] (0xc0013d7550) Data frame received for 3 I0427 13:21:25.474326 6 log.go:172] (0xc002dd0aa0) (3) Data frame handling I0427 13:21:25.474333 6 log.go:172] (0xc002dd0aa0) (3) Data frame sent I0427 13:21:25.474338 6 log.go:172] (0xc0013d7550) Data frame received for 3 I0427 13:21:25.474342 6 log.go:172] (0xc002dd0aa0) (3) Data frame handling I0427 13:21:25.474357 6 log.go:172] (0xc0013d7550) Data frame received for 5 I0427 13:21:25.474366 6 log.go:172] (0xc002dd0b40) (5) Data frame handling I0427 13:21:25.475635 6 log.go:172] (0xc0013d7550) Data frame received for 1 I0427 13:21:25.475651 6 log.go:172] (0xc002dd0a00) (1) Data frame handling I0427 13:21:25.475663 6 log.go:172] (0xc002dd0a00) (1) Data frame sent I0427 13:21:25.475672 6 log.go:172] (0xc0013d7550) (0xc002dd0a00) Stream removed, broadcasting: 1 I0427 13:21:25.475686 6 log.go:172] (0xc0013d7550) Go away received I0427 13:21:25.475878 6 log.go:172] (0xc0013d7550) (0xc002dd0a00) Stream removed, broadcasting: 1 I0427 13:21:25.475901 6 log.go:172] (0xc0013d7550) (0xc002dd0aa0) Stream removed, broadcasting: 3 I0427 13:21:25.475917 6 log.go:172] (0xc0013d7550) (0xc002dd0b40) Stream removed, broadcasting: 5 Apr 27 13:21:25.475: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 27 13:21:25.475: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4129 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 27 13:21:25.476: INFO: >>> kubeConfig: /root/.kube/config I0427 13:21:25.511189 6 log.go:172] 
(0xc0009d6160) (0xc0020fc0a0) Create stream I0427 13:21:25.511224 6 log.go:172] (0xc0009d6160) (0xc0020fc0a0) Stream added, broadcasting: 1 I0427 13:21:25.519247 6 log.go:172] (0xc0009d6160) Reply frame received for 1 I0427 13:21:25.519301 6 log.go:172] (0xc0009d6160) (0xc0017c4000) Create stream I0427 13:21:25.519318 6 log.go:172] (0xc0009d6160) (0xc0017c4000) Stream added, broadcasting: 3 I0427 13:21:25.521049 6 log.go:172] (0xc0009d6160) Reply frame received for 3 I0427 13:21:25.521077 6 log.go:172] (0xc0009d6160) (0xc0020fc140) Create stream I0427 13:21:25.521090 6 log.go:172] (0xc0009d6160) (0xc0020fc140) Stream added, broadcasting: 5 I0427 13:21:25.523056 6 log.go:172] (0xc0009d6160) Reply frame received for 5 I0427 13:21:25.584773 6 log.go:172] (0xc0009d6160) Data frame received for 5 I0427 13:21:25.584833 6 log.go:172] (0xc0020fc140) (5) Data frame handling I0427 13:21:25.584882 6 log.go:172] (0xc0009d6160) Data frame received for 3 I0427 13:21:25.584918 6 log.go:172] (0xc0017c4000) (3) Data frame handling I0427 13:21:25.584973 6 log.go:172] (0xc0017c4000) (3) Data frame sent I0427 13:21:25.584995 6 log.go:172] (0xc0009d6160) Data frame received for 3 I0427 13:21:25.585010 6 log.go:172] (0xc0017c4000) (3) Data frame handling I0427 13:21:25.586879 6 log.go:172] (0xc0009d6160) Data frame received for 1 I0427 13:21:25.586903 6 log.go:172] (0xc0020fc0a0) (1) Data frame handling I0427 13:21:25.586911 6 log.go:172] (0xc0020fc0a0) (1) Data frame sent I0427 13:21:25.586917 6 log.go:172] (0xc0009d6160) (0xc0020fc0a0) Stream removed, broadcasting: 1 I0427 13:21:25.586987 6 log.go:172] (0xc0009d6160) (0xc0020fc0a0) Stream removed, broadcasting: 1 I0427 13:21:25.586994 6 log.go:172] (0xc0009d6160) (0xc0017c4000) Stream removed, broadcasting: 3 I0427 13:21:25.586999 6 log.go:172] (0xc0009d6160) (0xc0020fc140) Stream removed, broadcasting: 5 Apr 27 13:21:25.587: INFO: Exec stderr: "" Apr 27 13:21:25.587: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] 
Namespace:e2e-kubelet-etc-hosts-4129 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 27 13:21:25.587: INFO: >>> kubeConfig: /root/.kube/config I0427 13:21:25.587163 6 log.go:172] (0xc0009d6160) Go away received I0427 13:21:25.617825 6 log.go:172] (0xc00087cb00) (0xc0020fc500) Create stream I0427 13:21:25.617856 6 log.go:172] (0xc00087cb00) (0xc0020fc500) Stream added, broadcasting: 1 I0427 13:21:25.619951 6 log.go:172] (0xc00087cb00) Reply frame received for 1 I0427 13:21:25.619994 6 log.go:172] (0xc00087cb00) (0xc00222e000) Create stream I0427 13:21:25.620013 6 log.go:172] (0xc00087cb00) (0xc00222e000) Stream added, broadcasting: 3 I0427 13:21:25.620905 6 log.go:172] (0xc00087cb00) Reply frame received for 3 I0427 13:21:25.620943 6 log.go:172] (0xc00087cb00) (0xc00222e0a0) Create stream I0427 13:21:25.620954 6 log.go:172] (0xc00087cb00) (0xc00222e0a0) Stream added, broadcasting: 5 I0427 13:21:25.621981 6 log.go:172] (0xc00087cb00) Reply frame received for 5 I0427 13:21:25.697432 6 log.go:172] (0xc00087cb00) Data frame received for 5 I0427 13:21:25.697510 6 log.go:172] (0xc00222e0a0) (5) Data frame handling I0427 13:21:25.697566 6 log.go:172] (0xc00087cb00) Data frame received for 3 I0427 13:21:25.697593 6 log.go:172] (0xc00222e000) (3) Data frame handling I0427 13:21:25.697614 6 log.go:172] (0xc00222e000) (3) Data frame sent I0427 13:21:25.697637 6 log.go:172] (0xc00087cb00) Data frame received for 3 I0427 13:21:25.697656 6 log.go:172] (0xc00222e000) (3) Data frame handling I0427 13:21:25.699432 6 log.go:172] (0xc00087cb00) Data frame received for 1 I0427 13:21:25.699466 6 log.go:172] (0xc0020fc500) (1) Data frame handling I0427 13:21:25.699494 6 log.go:172] (0xc0020fc500) (1) Data frame sent I0427 13:21:25.699518 6 log.go:172] (0xc00087cb00) (0xc0020fc500) Stream removed, broadcasting: 1 I0427 13:21:25.699577 6 log.go:172] (0xc00087cb00) Go away received I0427 13:21:25.699635 6 
log.go:172] (0xc00087cb00) (0xc0020fc500) Stream removed, broadcasting: 1 I0427 13:21:25.699665 6 log.go:172] (0xc00087cb00) (0xc00222e000) Stream removed, broadcasting: 3 I0427 13:21:25.699687 6 log.go:172] (0xc00087cb00) (0xc00222e0a0) Stream removed, broadcasting: 5 Apr 27 13:21:25.699: INFO: Exec stderr: "" Apr 27 13:21:25.699: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4129 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 27 13:21:25.699: INFO: >>> kubeConfig: /root/.kube/config I0427 13:21:25.729289 6 log.go:172] (0xc000fde2c0) (0xc0017c4b40) Create stream I0427 13:21:25.729341 6 log.go:172] (0xc000fde2c0) (0xc0017c4b40) Stream added, broadcasting: 1 I0427 13:21:25.731525 6 log.go:172] (0xc000fde2c0) Reply frame received for 1 I0427 13:21:25.731559 6 log.go:172] (0xc000fde2c0) (0xc001222140) Create stream I0427 13:21:25.731570 6 log.go:172] (0xc000fde2c0) (0xc001222140) Stream added, broadcasting: 3 I0427 13:21:25.732513 6 log.go:172] (0xc000fde2c0) Reply frame received for 3 I0427 13:21:25.732568 6 log.go:172] (0xc000fde2c0) (0xc000110140) Create stream I0427 13:21:25.732586 6 log.go:172] (0xc000fde2c0) (0xc000110140) Stream added, broadcasting: 5 I0427 13:21:25.733598 6 log.go:172] (0xc000fde2c0) Reply frame received for 5 I0427 13:21:25.782372 6 log.go:172] (0xc000fde2c0) Data frame received for 3 I0427 13:21:25.782419 6 log.go:172] (0xc001222140) (3) Data frame handling I0427 13:21:25.782459 6 log.go:172] (0xc000fde2c0) Data frame received for 5 I0427 13:21:25.782535 6 log.go:172] (0xc000110140) (5) Data frame handling I0427 13:21:25.782580 6 log.go:172] (0xc001222140) (3) Data frame sent I0427 13:21:25.782601 6 log.go:172] (0xc000fde2c0) Data frame received for 3 I0427 13:21:25.782612 6 log.go:172] (0xc001222140) (3) Data frame handling I0427 13:21:25.783989 6 log.go:172] (0xc000fde2c0) Data frame received for 1 I0427 13:21:25.784015 6 
log.go:172] (0xc0017c4b40) (1) Data frame handling I0427 13:21:25.784043 6 log.go:172] (0xc0017c4b40) (1) Data frame sent I0427 13:21:25.784070 6 log.go:172] (0xc000fde2c0) (0xc0017c4b40) Stream removed, broadcasting: 1 I0427 13:21:25.784115 6 log.go:172] (0xc000fde2c0) Go away received I0427 13:21:25.784276 6 log.go:172] (0xc000fde2c0) (0xc0017c4b40) Stream removed, broadcasting: 1 I0427 13:21:25.784354 6 log.go:172] (0xc000fde2c0) (0xc001222140) Stream removed, broadcasting: 3 I0427 13:21:25.784393 6 log.go:172] (0xc000fde2c0) (0xc000110140) Stream removed, broadcasting: 5 Apr 27 13:21:25.784: INFO: Exec stderr: "" Apr 27 13:21:25.784: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4129 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 27 13:21:25.784: INFO: >>> kubeConfig: /root/.kube/config I0427 13:21:25.817928 6 log.go:172] (0xc000f16b00) (0xc0012228c0) Create stream I0427 13:21:25.817949 6 log.go:172] (0xc000f16b00) (0xc0012228c0) Stream added, broadcasting: 1 I0427 13:21:25.819672 6 log.go:172] (0xc000f16b00) Reply frame received for 1 I0427 13:21:25.819730 6 log.go:172] (0xc000f16b00) (0xc001222a00) Create stream I0427 13:21:25.819752 6 log.go:172] (0xc000f16b00) (0xc001222a00) Stream added, broadcasting: 3 I0427 13:21:25.820605 6 log.go:172] (0xc000f16b00) Reply frame received for 3 I0427 13:21:25.820624 6 log.go:172] (0xc000f16b00) (0xc001222aa0) Create stream I0427 13:21:25.820631 6 log.go:172] (0xc000f16b00) (0xc001222aa0) Stream added, broadcasting: 5 I0427 13:21:25.821763 6 log.go:172] (0xc000f16b00) Reply frame received for 5 I0427 13:21:25.892926 6 log.go:172] (0xc000f16b00) Data frame received for 5 I0427 13:21:25.892974 6 log.go:172] (0xc001222aa0) (5) Data frame handling I0427 13:21:25.893022 6 log.go:172] (0xc000f16b00) Data frame received for 3 I0427 13:21:25.893038 6 log.go:172] (0xc001222a00) (3) Data frame handling I0427 
13:21:25.893055 6 log.go:172] (0xc001222a00) (3) Data frame sent I0427 13:21:25.893066 6 log.go:172] (0xc000f16b00) Data frame received for 3 I0427 13:21:25.893078 6 log.go:172] (0xc001222a00) (3) Data frame handling I0427 13:21:25.894446 6 log.go:172] (0xc000f16b00) Data frame received for 1 I0427 13:21:25.894466 6 log.go:172] (0xc0012228c0) (1) Data frame handling I0427 13:21:25.894494 6 log.go:172] (0xc0012228c0) (1) Data frame sent I0427 13:21:25.894507 6 log.go:172] (0xc000f16b00) (0xc0012228c0) Stream removed, broadcasting: 1 I0427 13:21:25.894522 6 log.go:172] (0xc000f16b00) Go away received I0427 13:21:25.894709 6 log.go:172] (0xc000f16b00) (0xc0012228c0) Stream removed, broadcasting: 1 I0427 13:21:25.894743 6 log.go:172] (0xc000f16b00) (0xc001222a00) Stream removed, broadcasting: 3 I0427 13:21:25.894754 6 log.go:172] (0xc000f16b00) (0xc001222aa0) Stream removed, broadcasting: 5 Apr 27 13:21:25.894: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:21:25.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-4129" for this suite. 
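The exec output above verifies that the kubelet manages /etc/hosts for ordinary containers (busybox-1, busybox-2), but not for a container that mounts its own volume at /etc/hosts (busybox-3), nor for a hostNetwork=true pod. A hedged sketch of the opt-out container — the volume and image details are assumptions, only the container name and mount semantics come from the log:

```yaml
# Sketch: a container mounting its own /etc/hosts opts out of kubelet management.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: busybox-3
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: hosts-volume
      mountPath: /etc/hosts     # explicit mount here means kubelet leaves it alone
  volumes:
  - name: hosts-volume
    emptyDir: {}
```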
Apr 27 13:22:15.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:22:16.011: INFO: namespace e2e-kubelet-etc-hosts-4129 deletion completed in 50.112280157s • [SLOW TEST:61.290 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:22:16.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 27 13:22:16.055: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d032745d-217f-4332-97ad-3edb91a4907e" in namespace "projected-4488" to be "success or failure" Apr 27 13:22:16.110: INFO: Pod "downwardapi-volume-d032745d-217f-4332-97ad-3edb91a4907e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 54.57019ms Apr 27 13:22:18.114: INFO: Pod "downwardapi-volume-d032745d-217f-4332-97ad-3edb91a4907e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059152427s Apr 27 13:22:20.119: INFO: Pod "downwardapi-volume-d032745d-217f-4332-97ad-3edb91a4907e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063742818s STEP: Saw pod success Apr 27 13:22:20.119: INFO: Pod "downwardapi-volume-d032745d-217f-4332-97ad-3edb91a4907e" satisfied condition "success or failure" Apr 27 13:22:20.122: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-d032745d-217f-4332-97ad-3edb91a4907e container client-container: STEP: delete the pod Apr 27 13:22:20.143: INFO: Waiting for pod downwardapi-volume-d032745d-217f-4332-97ad-3edb91a4907e to disappear Apr 27 13:22:20.181: INFO: Pod downwardapi-volume-d032745d-217f-4332-97ad-3edb91a4907e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:22:20.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4488" for this suite. 
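[editor's note] The pod spec the framework generates for this test is not shown in the log. A minimal manifest of the same shape (names hypothetical, not the framework's generated spec): a projected downwardAPI volume exposing `limits.memory` as a file, with no memory limit set on the container, so the kubelet resolves the value to the node's allocatable memory.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # No resources.limits.memory: the downward API falls back to node allocatable.
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```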
Apr 27 13:22:26.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:22:26.279: INFO: namespace projected-4488 deletion completed in 6.094396123s • [SLOW TEST:10.269 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:22:26.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-3161bac5-a6a3-4e87-82cf-22e969da9ec9 STEP: Creating a pod to test consume configMaps Apr 27 13:22:26.354: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-de2a4f42-53ff-4b5e-975a-9b82e75486b4" in namespace "projected-5979" to be "success or failure" Apr 27 13:22:26.370: INFO: Pod "pod-projected-configmaps-de2a4f42-53ff-4b5e-975a-9b82e75486b4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.169201ms Apr 27 13:22:28.374: INFO: Pod "pod-projected-configmaps-de2a4f42-53ff-4b5e-975a-9b82e75486b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020574175s Apr 27 13:22:30.378: INFO: Pod "pod-projected-configmaps-de2a4f42-53ff-4b5e-975a-9b82e75486b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024561054s STEP: Saw pod success Apr 27 13:22:30.378: INFO: Pod "pod-projected-configmaps-de2a4f42-53ff-4b5e-975a-9b82e75486b4" satisfied condition "success or failure" Apr 27 13:22:30.380: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-de2a4f42-53ff-4b5e-975a-9b82e75486b4 container projected-configmap-volume-test: STEP: delete the pod Apr 27 13:22:30.416: INFO: Waiting for pod pod-projected-configmaps-de2a4f42-53ff-4b5e-975a-9b82e75486b4 to disappear Apr 27 13:22:30.429: INFO: Pod pod-projected-configmaps-de2a4f42-53ff-4b5e-975a-9b82e75486b4 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:22:30.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5979" for this suite. 
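[editor's note] A sketch of what "consumable in multiple volumes in the same pod" exercises (names hypothetical): the same ConfigMap projected into two separate volumes, each mounted at its own path, so the container can read identical data from both.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm-one/data-1 /etc/cm-two/data-1"]
    volumeMounts:
    - name: cm-one
      mountPath: /etc/cm-one
    - name: cm-two
      mountPath: /etc/cm-two
  volumes:
  - name: cm-one
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # created before the pod
  - name: cm-two
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # same ConfigMap, second volume
```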
Apr 27 13:22:36.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:22:36.524: INFO: namespace projected-5979 deletion completed in 6.092369959s • [SLOW TEST:10.245 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:22:36.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Apr 27 13:22:36.600: INFO: Waiting up to 5m0s for pod "var-expansion-37ad4d7a-75d9-43f3-9770-d880a35e52fa" in namespace "var-expansion-8463" to be "success or failure" Apr 27 13:22:36.634: INFO: Pod "var-expansion-37ad4d7a-75d9-43f3-9770-d880a35e52fa": Phase="Pending", Reason="", readiness=false. Elapsed: 34.181172ms Apr 27 13:22:38.639: INFO: Pod "var-expansion-37ad4d7a-75d9-43f3-9770-d880a35e52fa": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.038810687s Apr 27 13:22:40.644: INFO: Pod "var-expansion-37ad4d7a-75d9-43f3-9770-d880a35e52fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043349249s STEP: Saw pod success Apr 27 13:22:40.644: INFO: Pod "var-expansion-37ad4d7a-75d9-43f3-9770-d880a35e52fa" satisfied condition "success or failure" Apr 27 13:22:40.647: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-37ad4d7a-75d9-43f3-9770-d880a35e52fa container dapi-container: STEP: delete the pod Apr 27 13:22:40.672: INFO: Waiting for pod var-expansion-37ad4d7a-75d9-43f3-9770-d880a35e52fa to disappear Apr 27 13:22:40.676: INFO: Pod var-expansion-37ad4d7a-75d9-43f3-9770-d880a35e52fa no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:22:40.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8463" for this suite. Apr 27 13:22:46.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:22:46.767: INFO: namespace var-expansion-8463 deletion completed in 6.088506098s • [SLOW TEST:10.242 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 
13:22:46.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Apr 27 13:22:46.854: INFO: Waiting up to 5m0s for pod "var-expansion-8c3117f9-1706-4434-80f6-6b5b9b02f8bb" in namespace "var-expansion-9704" to be "success or failure" Apr 27 13:22:46.862: INFO: Pod "var-expansion-8c3117f9-1706-4434-80f6-6b5b9b02f8bb": Phase="Pending", Reason="", readiness=false. Elapsed: 7.246434ms Apr 27 13:22:48.876: INFO: Pod "var-expansion-8c3117f9-1706-4434-80f6-6b5b9b02f8bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021488576s Apr 27 13:22:50.882: INFO: Pod "var-expansion-8c3117f9-1706-4434-80f6-6b5b9b02f8bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028015298s STEP: Saw pod success Apr 27 13:22:50.883: INFO: Pod "var-expansion-8c3117f9-1706-4434-80f6-6b5b9b02f8bb" satisfied condition "success or failure" Apr 27 13:22:50.886: INFO: Trying to get logs from node iruya-worker pod var-expansion-8c3117f9-1706-4434-80f6-6b5b9b02f8bb container dapi-container: STEP: delete the pod Apr 27 13:22:50.917: INFO: Waiting for pod var-expansion-8c3117f9-1706-4434-80f6-6b5b9b02f8bb to disappear Apr 27 13:22:50.928: INFO: Pod var-expansion-8c3117f9-1706-4434-80f6-6b5b9b02f8bb no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:22:50.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9704" for this suite. 
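[editor's note] The two Variable Expansion tests above rely on the kubelet expanding `$(VAR)` references in `env` and `command` fields. A minimal sketch (hypothetical names) of a pod that both composes env vars and substitutes one into the container's command:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo             # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: FOO
      value: "foo-value"
    - name: COMPOSED
      value: "prefix-$(FOO)"           # composed from a previously defined var
    # $(COMPOSED) is expanded by the kubelet, not the shell
    command: ["sh", "-c", "echo $(COMPOSED)"]
```

Note that expansion only sees variables defined earlier in the same `env` list, and an unresolvable `$(VAR)` is left as literal text.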
Apr 27 13:22:56.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:22:57.022: INFO: namespace var-expansion-9704 deletion completed in 6.09088191s • [SLOW TEST:10.255 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:22:57.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 27 13:22:57.097: INFO: Waiting up to 5m0s for pod "pod-ba6647fb-778f-4053-b383-2de9db3c131e" in namespace "emptydir-4338" to be "success or failure" Apr 27 13:22:57.101: INFO: Pod "pod-ba6647fb-778f-4053-b383-2de9db3c131e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120207ms Apr 27 13:22:59.105: INFO: Pod "pod-ba6647fb-778f-4053-b383-2de9db3c131e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008172174s Apr 27 13:23:01.152: INFO: Pod "pod-ba6647fb-778f-4053-b383-2de9db3c131e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055352481s STEP: Saw pod success Apr 27 13:23:01.152: INFO: Pod "pod-ba6647fb-778f-4053-b383-2de9db3c131e" satisfied condition "success or failure" Apr 27 13:23:01.155: INFO: Trying to get logs from node iruya-worker2 pod pod-ba6647fb-778f-4053-b383-2de9db3c131e container test-container: STEP: delete the pod Apr 27 13:23:01.174: INFO: Waiting for pod pod-ba6647fb-778f-4053-b383-2de9db3c131e to disappear Apr 27 13:23:01.179: INFO: Pod pod-ba6647fb-778f-4053-b383-2de9db3c131e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:23:01.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4338" for this suite. Apr 27 13:23:07.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:23:07.270: INFO: namespace emptydir-4338 deletion completed in 6.088378658s • [SLOW TEST:10.248 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:23:07.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-8b2fd9c5-8743-48a2-a736-3f546b94d9ed STEP: Creating a pod to test consume secrets Apr 27 13:23:07.337: INFO: Waiting up to 5m0s for pod "pod-secrets-95d3d273-4e20-40a9-9755-30c44f7dff46" in namespace "secrets-6225" to be "success or failure" Apr 27 13:23:07.356: INFO: Pod "pod-secrets-95d3d273-4e20-40a9-9755-30c44f7dff46": Phase="Pending", Reason="", readiness=false. Elapsed: 18.935491ms Apr 27 13:23:09.360: INFO: Pod "pod-secrets-95d3d273-4e20-40a9-9755-30c44f7dff46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023204042s Apr 27 13:23:11.364: INFO: Pod "pod-secrets-95d3d273-4e20-40a9-9755-30c44f7dff46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027285479s STEP: Saw pod success Apr 27 13:23:11.364: INFO: Pod "pod-secrets-95d3d273-4e20-40a9-9755-30c44f7dff46" satisfied condition "success or failure" Apr 27 13:23:11.367: INFO: Trying to get logs from node iruya-worker pod pod-secrets-95d3d273-4e20-40a9-9755-30c44f7dff46 container secret-volume-test: STEP: delete the pod Apr 27 13:23:11.384: INFO: Waiting for pod pod-secrets-95d3d273-4e20-40a9-9755-30c44f7dff46 to disappear Apr 27 13:23:11.404: INFO: Pod pod-secrets-95d3d273-4e20-40a9-9755-30c44f7dff46 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:23:11.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6225" for this suite. 
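[editor's note] "Consumable from pods in volume with mappings" refers to the `items` field of a secret volume, which remaps a secret key to a chosen file path instead of the default key-named file. A minimal sketch (names hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo               # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map      # created beforehand with key "data-1"
      items:
      - key: data-1
        path: new-path-data-1          # mapping: key is exposed under this file name
```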
Apr 27 13:23:17.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:23:17.491: INFO: namespace secrets-6225 deletion completed in 6.081920623s • [SLOW TEST:10.220 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:23:17.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 27 13:23:17.561: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 27 13:23:19.604: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:23:19.649: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9968" for this suite. Apr 27 13:23:25.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:23:25.742: INFO: namespace replication-controller-9968 deletion completed in 6.088911881s • [SLOW TEST:8.251 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:23:25.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-04bb02a8-e074-4d8a-9366-67db8a8587d7 STEP: Creating a pod to test consume configMaps Apr 27 13:23:25.954: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e818a98f-8611-413d-8e43-09e2f924af21" in namespace "projected-6937" to be "success or failure" Apr 27 13:23:25.959: INFO: Pod "pod-projected-configmaps-e818a98f-8611-413d-8e43-09e2f924af21": 
Phase="Pending", Reason="", readiness=false. Elapsed: 4.310678ms Apr 27 13:23:27.963: INFO: Pod "pod-projected-configmaps-e818a98f-8611-413d-8e43-09e2f924af21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008609005s Apr 27 13:23:29.967: INFO: Pod "pod-projected-configmaps-e818a98f-8611-413d-8e43-09e2f924af21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012528831s STEP: Saw pod success Apr 27 13:23:29.967: INFO: Pod "pod-projected-configmaps-e818a98f-8611-413d-8e43-09e2f924af21" satisfied condition "success or failure" Apr 27 13:23:29.970: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-e818a98f-8611-413d-8e43-09e2f924af21 container projected-configmap-volume-test: STEP: delete the pod Apr 27 13:23:29.995: INFO: Waiting for pod pod-projected-configmaps-e818a98f-8611-413d-8e43-09e2f924af21 to disappear Apr 27 13:23:30.011: INFO: Pod pod-projected-configmaps-e818a98f-8611-413d-8e43-09e2f924af21 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:23:30.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6937" for this suite. 
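[editor's note] For the "as non-root" variant, the pod runs with a non-zero UID and must still be able to read the projected ConfigMap files. A minimal sketch (names and UID hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-nonroot-demo  # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                    # non-root UID
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # created before the pod
```

The projected volume's default file mode (0644) is what makes the files world-readable for the non-root user here.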
Apr 27 13:23:36.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:23:36.101: INFO: namespace projected-6937 deletion completed in 6.086665188s • [SLOW TEST:10.359 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:23:36.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2269 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-2269 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2269 Apr 27 
13:23:36.186: INFO: Found 0 stateful pods, waiting for 1 Apr 27 13:23:46.207: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 27 13:23:46.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2269 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 27 13:23:48.447: INFO: stderr: "I0427 13:23:48.291558 206 log.go:172] (0xc0009c0370) (0xc000a36dc0) Create stream\nI0427 13:23:48.291613 206 log.go:172] (0xc0009c0370) (0xc000a36dc0) Stream added, broadcasting: 1\nI0427 13:23:48.295994 206 log.go:172] (0xc0009c0370) Reply frame received for 1\nI0427 13:23:48.296058 206 log.go:172] (0xc0009c0370) (0xc0004d41e0) Create stream\nI0427 13:23:48.296086 206 log.go:172] (0xc0009c0370) (0xc0004d41e0) Stream added, broadcasting: 3\nI0427 13:23:48.297647 206 log.go:172] (0xc0009c0370) Reply frame received for 3\nI0427 13:23:48.297691 206 log.go:172] (0xc0009c0370) (0xc000a36000) Create stream\nI0427 13:23:48.297716 206 log.go:172] (0xc0009c0370) (0xc000a36000) Stream added, broadcasting: 5\nI0427 13:23:48.298702 206 log.go:172] (0xc0009c0370) Reply frame received for 5\nI0427 13:23:48.358640 206 log.go:172] (0xc0009c0370) Data frame received for 5\nI0427 13:23:48.358682 206 log.go:172] (0xc000a36000) (5) Data frame handling\nI0427 13:23:48.358702 206 log.go:172] (0xc000a36000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0427 13:23:48.438656 206 log.go:172] (0xc0009c0370) Data frame received for 3\nI0427 13:23:48.438731 206 log.go:172] (0xc0004d41e0) (3) Data frame handling\nI0427 13:23:48.438760 206 log.go:172] (0xc0004d41e0) (3) Data frame sent\nI0427 13:23:48.438785 206 log.go:172] (0xc0009c0370) Data frame received for 5\nI0427 13:23:48.438839 206 log.go:172] (0xc000a36000) (5) Data frame handling\nI0427 13:23:48.438877 206 log.go:172] 
(0xc0009c0370) Data frame received for 3\nI0427 13:23:48.438903 206 log.go:172] (0xc0004d41e0) (3) Data frame handling\nI0427 13:23:48.441704 206 log.go:172] (0xc0009c0370) Data frame received for 1\nI0427 13:23:48.441743 206 log.go:172] (0xc000a36dc0) (1) Data frame handling\nI0427 13:23:48.441760 206 log.go:172] (0xc000a36dc0) (1) Data frame sent\nI0427 13:23:48.441775 206 log.go:172] (0xc0009c0370) (0xc000a36dc0) Stream removed, broadcasting: 1\nI0427 13:23:48.442118 206 log.go:172] (0xc0009c0370) (0xc000a36dc0) Stream removed, broadcasting: 1\nI0427 13:23:48.442137 206 log.go:172] (0xc0009c0370) (0xc0004d41e0) Stream removed, broadcasting: 3\nI0427 13:23:48.442148 206 log.go:172] (0xc0009c0370) (0xc000a36000) Stream removed, broadcasting: 5\n" Apr 27 13:23:48.447: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 27 13:23:48.447: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 27 13:23:48.450: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 27 13:23:58.454: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 27 13:23:58.454: INFO: Waiting for statefulset status.replicas updated to 0 Apr 27 13:23:58.481: INFO: POD NODE PHASE GRACE CONDITIONS Apr 27 13:23:58.481: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:36 +0000 UTC }] Apr 27 13:23:58.481: INFO: ss-1 Pending [] Apr 27 13:23:58.481: INFO: Apr 27 13:23:58.481: INFO: StatefulSet ss has not reached scale 3, 
at 2 Apr 27 13:23:59.486: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.981895972s Apr 27 13:24:00.490: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.976804186s Apr 27 13:24:01.495: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.972725097s Apr 27 13:24:02.500: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.967634131s Apr 27 13:24:03.506: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.963211324s Apr 27 13:24:04.512: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.957171836s Apr 27 13:24:05.518: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.950885005s Apr 27 13:24:06.524: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.945473404s Apr 27 13:24:08.237: INFO: Verifying statefulset ss doesn't scale past 3 for another 939.370367ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2269 Apr 27 13:24:09.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2269 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 27 13:24:09.774: INFO: stderr: "I0427 13:24:09.685377 235 log.go:172] (0xc0005e04d0) (0xc000566aa0) Create stream\nI0427 13:24:09.685417 235 log.go:172] (0xc0005e04d0) (0xc000566aa0) Stream added, broadcasting: 1\nI0427 13:24:09.687474 235 log.go:172] (0xc0005e04d0) Reply frame received for 1\nI0427 13:24:09.687519 235 log.go:172] (0xc0005e04d0) (0xc0004be000) Create stream\nI0427 13:24:09.687541 235 log.go:172] (0xc0005e04d0) (0xc0004be000) Stream added, broadcasting: 3\nI0427 13:24:09.688423 235 log.go:172] (0xc0005e04d0) Reply frame received for 3\nI0427 13:24:09.688498 235 log.go:172] (0xc0005e04d0) (0xc000566b40) Create stream\nI0427 13:24:09.688518 235 log.go:172] (0xc0005e04d0) (0xc000566b40) Stream added, broadcasting: 5\nI0427 13:24:09.689730 235 log.go:172] 
(0xc0005e04d0) Reply frame received for 5\nI0427 13:24:09.768993 235 log.go:172] (0xc0005e04d0) Data frame received for 5\nI0427 13:24:09.769041 235 log.go:172] (0xc000566b40) (5) Data frame handling\nI0427 13:24:09.769060 235 log.go:172] (0xc000566b40) (5) Data frame sent\nI0427 13:24:09.769072 235 log.go:172] (0xc0005e04d0) Data frame received for 5\nI0427 13:24:09.769082 235 log.go:172] (0xc000566b40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0427 13:24:09.769259 235 log.go:172] (0xc0005e04d0) Data frame received for 3\nI0427 13:24:09.769288 235 log.go:172] (0xc0004be000) (3) Data frame handling\nI0427 13:24:09.769319 235 log.go:172] (0xc0004be000) (3) Data frame sent\nI0427 13:24:09.769338 235 log.go:172] (0xc0005e04d0) Data frame received for 3\nI0427 13:24:09.769349 235 log.go:172] (0xc0004be000) (3) Data frame handling\nI0427 13:24:09.770397 235 log.go:172] (0xc0005e04d0) Data frame received for 1\nI0427 13:24:09.770423 235 log.go:172] (0xc000566aa0) (1) Data frame handling\nI0427 13:24:09.770443 235 log.go:172] (0xc000566aa0) (1) Data frame sent\nI0427 13:24:09.770463 235 log.go:172] (0xc0005e04d0) (0xc000566aa0) Stream removed, broadcasting: 1\nI0427 13:24:09.770482 235 log.go:172] (0xc0005e04d0) Go away received\nI0427 13:24:09.770932 235 log.go:172] (0xc0005e04d0) (0xc000566aa0) Stream removed, broadcasting: 1\nI0427 13:24:09.770949 235 log.go:172] (0xc0005e04d0) (0xc0004be000) Stream removed, broadcasting: 3\nI0427 13:24:09.770957 235 log.go:172] (0xc0005e04d0) (0xc000566b40) Stream removed, broadcasting: 5\n" Apr 27 13:24:09.774: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 27 13:24:09.775: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 27 13:24:09.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2269 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true' Apr 27 13:24:10.003: INFO: stderr: "I0427 13:24:09.908983 262 log.go:172] (0xc00099a0b0) (0xc00097e6e0) Create stream\nI0427 13:24:09.909059 262 log.go:172] (0xc00099a0b0) (0xc00097e6e0) Stream added, broadcasting: 1\nI0427 13:24:09.911466 262 log.go:172] (0xc00099a0b0) Reply frame received for 1\nI0427 13:24:09.911526 262 log.go:172] (0xc00099a0b0) (0xc0005f2280) Create stream\nI0427 13:24:09.911554 262 log.go:172] (0xc00099a0b0) (0xc0005f2280) Stream added, broadcasting: 3\nI0427 13:24:09.912623 262 log.go:172] (0xc00099a0b0) Reply frame received for 3\nI0427 13:24:09.912652 262 log.go:172] (0xc00099a0b0) (0xc00097e780) Create stream\nI0427 13:24:09.912659 262 log.go:172] (0xc00099a0b0) (0xc00097e780) Stream added, broadcasting: 5\nI0427 13:24:09.914143 262 log.go:172] (0xc00099a0b0) Reply frame received for 5\nI0427 13:24:09.995517 262 log.go:172] (0xc00099a0b0) Data frame received for 5\nI0427 13:24:09.995566 262 log.go:172] (0xc00097e780) (5) Data frame handling\nI0427 13:24:09.995585 262 log.go:172] (0xc00097e780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0427 13:24:09.995611 262 log.go:172] (0xc00099a0b0) Data frame received for 3\nI0427 13:24:09.995622 262 log.go:172] (0xc0005f2280) (3) Data frame handling\nI0427 13:24:09.995635 262 log.go:172] (0xc0005f2280) (3) Data frame sent\nI0427 13:24:09.995666 262 log.go:172] (0xc00099a0b0) Data frame received for 3\nI0427 13:24:09.995685 262 log.go:172] (0xc0005f2280) (3) Data frame handling\nI0427 13:24:09.995708 262 log.go:172] (0xc00099a0b0) Data frame received for 5\nI0427 13:24:09.995731 262 log.go:172] (0xc00097e780) (5) Data frame handling\nI0427 13:24:09.997739 262 log.go:172] (0xc00099a0b0) Data frame received for 1\nI0427 13:24:09.997772 262 log.go:172] (0xc00097e6e0) (1) Data frame handling\nI0427 13:24:09.997822 262 log.go:172] (0xc00097e6e0) (1) Data frame sent\nI0427 
13:24:09.997848 262 log.go:172] (0xc00099a0b0) (0xc00097e6e0) Stream removed, broadcasting: 1\nI0427 13:24:09.997935 262 log.go:172] (0xc00099a0b0) Go away received\nI0427 13:24:09.998252 262 log.go:172] (0xc00099a0b0) (0xc00097e6e0) Stream removed, broadcasting: 1\nI0427 13:24:09.998281 262 log.go:172] (0xc00099a0b0) (0xc0005f2280) Stream removed, broadcasting: 3\nI0427 13:24:09.998301 262 log.go:172] (0xc00099a0b0) (0xc00097e780) Stream removed, broadcasting: 5\n" Apr 27 13:24:10.003: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 27 13:24:10.003: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 27 13:24:10.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2269 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 27 13:24:10.205: INFO: stderr: "I0427 13:24:10.133736 282 log.go:172] (0xc0009446e0) (0xc0008ec960) Create stream\nI0427 13:24:10.133791 282 log.go:172] (0xc0009446e0) (0xc0008ec960) Stream added, broadcasting: 1\nI0427 13:24:10.138608 282 log.go:172] (0xc0009446e0) Reply frame received for 1\nI0427 13:24:10.138663 282 log.go:172] (0xc0009446e0) (0xc0008ec000) Create stream\nI0427 13:24:10.138678 282 log.go:172] (0xc0009446e0) (0xc0008ec000) Stream added, broadcasting: 3\nI0427 13:24:10.139672 282 log.go:172] (0xc0009446e0) Reply frame received for 3\nI0427 13:24:10.139702 282 log.go:172] (0xc0009446e0) (0xc00060e140) Create stream\nI0427 13:24:10.139710 282 log.go:172] (0xc0009446e0) (0xc00060e140) Stream added, broadcasting: 5\nI0427 13:24:10.140884 282 log.go:172] (0xc0009446e0) Reply frame received for 5\nI0427 13:24:10.198402 282 log.go:172] (0xc0009446e0) Data frame received for 5\nI0427 13:24:10.198433 282 log.go:172] (0xc00060e140) (5) Data frame handling\nI0427 13:24:10.198441 282 log.go:172] (0xc00060e140) (5) Data frame sent\nI0427 
13:24:10.198447 282 log.go:172] (0xc0009446e0) Data frame received for 5\nI0427 13:24:10.198451 282 log.go:172] (0xc00060e140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0427 13:24:10.198467 282 log.go:172] (0xc0009446e0) Data frame received for 3\nI0427 13:24:10.198472 282 log.go:172] (0xc0008ec000) (3) Data frame handling\nI0427 13:24:10.198478 282 log.go:172] (0xc0008ec000) (3) Data frame sent\nI0427 13:24:10.198482 282 log.go:172] (0xc0009446e0) Data frame received for 3\nI0427 13:24:10.198486 282 log.go:172] (0xc0008ec000) (3) Data frame handling\nI0427 13:24:10.200285 282 log.go:172] (0xc0009446e0) Data frame received for 1\nI0427 13:24:10.200302 282 log.go:172] (0xc0008ec960) (1) Data frame handling\nI0427 13:24:10.200309 282 log.go:172] (0xc0008ec960) (1) Data frame sent\nI0427 13:24:10.200324 282 log.go:172] (0xc0009446e0) (0xc0008ec960) Stream removed, broadcasting: 1\nI0427 13:24:10.200600 282 log.go:172] (0xc0009446e0) (0xc0008ec960) Stream removed, broadcasting: 1\nI0427 13:24:10.200615 282 log.go:172] (0xc0009446e0) (0xc0008ec000) Stream removed, broadcasting: 3\nI0427 13:24:10.200622 282 log.go:172] (0xc0009446e0) (0xc00060e140) Stream removed, broadcasting: 5\n" Apr 27 13:24:10.205: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 27 13:24:10.205: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 27 13:24:10.210: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 27 13:24:10.210: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 27 13:24:10.210: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 27 13:24:10.212: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-2269 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 27 13:24:10.410: INFO: stderr: "I0427 13:24:10.333723 302 log.go:172] (0xc00093c420) (0xc000740640) Create stream\nI0427 13:24:10.333788 302 log.go:172] (0xc00093c420) (0xc000740640) Stream added, broadcasting: 1\nI0427 13:24:10.336698 302 log.go:172] (0xc00093c420) Reply frame received for 1\nI0427 13:24:10.336745 302 log.go:172] (0xc00093c420) (0xc000610280) Create stream\nI0427 13:24:10.336771 302 log.go:172] (0xc00093c420) (0xc000610280) Stream added, broadcasting: 3\nI0427 13:24:10.337954 302 log.go:172] (0xc00093c420) Reply frame received for 3\nI0427 13:24:10.338006 302 log.go:172] (0xc00093c420) (0xc0007406e0) Create stream\nI0427 13:24:10.338021 302 log.go:172] (0xc00093c420) (0xc0007406e0) Stream added, broadcasting: 5\nI0427 13:24:10.338996 302 log.go:172] (0xc00093c420) Reply frame received for 5\nI0427 13:24:10.401558 302 log.go:172] (0xc00093c420) Data frame received for 5\nI0427 13:24:10.401592 302 log.go:172] (0xc0007406e0) (5) Data frame handling\nI0427 13:24:10.401606 302 log.go:172] (0xc0007406e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0427 13:24:10.401629 302 log.go:172] (0xc00093c420) Data frame received for 3\nI0427 13:24:10.401645 302 log.go:172] (0xc000610280) (3) Data frame handling\nI0427 13:24:10.401656 302 log.go:172] (0xc000610280) (3) Data frame sent\nI0427 13:24:10.401673 302 log.go:172] (0xc00093c420) Data frame received for 3\nI0427 13:24:10.401680 302 log.go:172] (0xc000610280) (3) Data frame handling\nI0427 13:24:10.401689 302 log.go:172] (0xc00093c420) Data frame received for 5\nI0427 13:24:10.401695 302 log.go:172] (0xc0007406e0) (5) Data frame handling\nI0427 13:24:10.403879 302 log.go:172] (0xc00093c420) Data frame received for 1\nI0427 13:24:10.404001 302 log.go:172] (0xc000740640) (1) Data frame handling\nI0427 13:24:10.404035 302 log.go:172] 
(0xc000740640) (1) Data frame sent\nI0427 13:24:10.404176 302 log.go:172] (0xc00093c420) (0xc000740640) Stream removed, broadcasting: 1\nI0427 13:24:10.404210 302 log.go:172] (0xc00093c420) Go away received\nI0427 13:24:10.404584 302 log.go:172] (0xc00093c420) (0xc000740640) Stream removed, broadcasting: 1\nI0427 13:24:10.404606 302 log.go:172] (0xc00093c420) (0xc000610280) Stream removed, broadcasting: 3\nI0427 13:24:10.404616 302 log.go:172] (0xc00093c420) (0xc0007406e0) Stream removed, broadcasting: 5\n" Apr 27 13:24:10.410: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 27 13:24:10.410: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 27 13:24:10.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2269 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 27 13:24:10.648: INFO: stderr: "I0427 13:24:10.530923 323 log.go:172] (0xc00085e630) (0xc0006b4aa0) Create stream\nI0427 13:24:10.531015 323 log.go:172] (0xc00085e630) (0xc0006b4aa0) Stream added, broadcasting: 1\nI0427 13:24:10.534182 323 log.go:172] (0xc00085e630) Reply frame received for 1\nI0427 13:24:10.534265 323 log.go:172] (0xc00085e630) (0xc0009d4000) Create stream\nI0427 13:24:10.534287 323 log.go:172] (0xc00085e630) (0xc0009d4000) Stream added, broadcasting: 3\nI0427 13:24:10.535377 323 log.go:172] (0xc00085e630) Reply frame received for 3\nI0427 13:24:10.535421 323 log.go:172] (0xc00085e630) (0xc0006b4b40) Create stream\nI0427 13:24:10.535435 323 log.go:172] (0xc00085e630) (0xc0006b4b40) Stream added, broadcasting: 5\nI0427 13:24:10.536421 323 log.go:172] (0xc00085e630) Reply frame received for 5\nI0427 13:24:10.613552 323 log.go:172] (0xc00085e630) Data frame received for 5\nI0427 13:24:10.613582 323 log.go:172] (0xc0006b4b40) (5) Data frame handling\nI0427 13:24:10.613602 323 log.go:172] (0xc0006b4b40) 
(5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0427 13:24:10.639381 323 log.go:172] (0xc00085e630) Data frame received for 5\nI0427 13:24:10.639428 323 log.go:172] (0xc0006b4b40) (5) Data frame handling\nI0427 13:24:10.639460 323 log.go:172] (0xc00085e630) Data frame received for 3\nI0427 13:24:10.639512 323 log.go:172] (0xc0009d4000) (3) Data frame handling\nI0427 13:24:10.639534 323 log.go:172] (0xc0009d4000) (3) Data frame sent\nI0427 13:24:10.639551 323 log.go:172] (0xc00085e630) Data frame received for 3\nI0427 13:24:10.639568 323 log.go:172] (0xc0009d4000) (3) Data frame handling\nI0427 13:24:10.642155 323 log.go:172] (0xc00085e630) Data frame received for 1\nI0427 13:24:10.642184 323 log.go:172] (0xc0006b4aa0) (1) Data frame handling\nI0427 13:24:10.642212 323 log.go:172] (0xc0006b4aa0) (1) Data frame sent\nI0427 13:24:10.642239 323 log.go:172] (0xc00085e630) (0xc0006b4aa0) Stream removed, broadcasting: 1\nI0427 13:24:10.642257 323 log.go:172] (0xc00085e630) Go away received\nI0427 13:24:10.642731 323 log.go:172] (0xc00085e630) (0xc0006b4aa0) Stream removed, broadcasting: 1\nI0427 13:24:10.642754 323 log.go:172] (0xc00085e630) (0xc0009d4000) Stream removed, broadcasting: 3\nI0427 13:24:10.642765 323 log.go:172] (0xc00085e630) (0xc0006b4b40) Stream removed, broadcasting: 5\n" Apr 27 13:24:10.648: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 27 13:24:10.648: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 27 13:24:10.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2269 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 27 13:24:10.875: INFO: stderr: "I0427 13:24:10.773346 342 log.go:172] (0xc000117080) (0xc00062eaa0) Create stream\nI0427 13:24:10.773408 342 log.go:172] (0xc000117080) (0xc00062eaa0) Stream added, broadcasting: 
1\nI0427 13:24:10.776652 342 log.go:172] (0xc000117080) Reply frame received for 1\nI0427 13:24:10.776693 342 log.go:172] (0xc000117080) (0xc00062e1e0) Create stream\nI0427 13:24:10.776707 342 log.go:172] (0xc000117080) (0xc00062e1e0) Stream added, broadcasting: 3\nI0427 13:24:10.777648 342 log.go:172] (0xc000117080) Reply frame received for 3\nI0427 13:24:10.777679 342 log.go:172] (0xc000117080) (0xc00026c000) Create stream\nI0427 13:24:10.777689 342 log.go:172] (0xc000117080) (0xc00026c000) Stream added, broadcasting: 5\nI0427 13:24:10.778739 342 log.go:172] (0xc000117080) Reply frame received for 5\nI0427 13:24:10.840812 342 log.go:172] (0xc000117080) Data frame received for 5\nI0427 13:24:10.840849 342 log.go:172] (0xc00026c000) (5) Data frame handling\nI0427 13:24:10.840870 342 log.go:172] (0xc00026c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0427 13:24:10.867153 342 log.go:172] (0xc000117080) Data frame received for 3\nI0427 13:24:10.867178 342 log.go:172] (0xc00062e1e0) (3) Data frame handling\nI0427 13:24:10.867191 342 log.go:172] (0xc00062e1e0) (3) Data frame sent\nI0427 13:24:10.867201 342 log.go:172] (0xc000117080) Data frame received for 3\nI0427 13:24:10.867209 342 log.go:172] (0xc00062e1e0) (3) Data frame handling\nI0427 13:24:10.867734 342 log.go:172] (0xc000117080) Data frame received for 5\nI0427 13:24:10.867760 342 log.go:172] (0xc00026c000) (5) Data frame handling\nI0427 13:24:10.869599 342 log.go:172] (0xc000117080) Data frame received for 1\nI0427 13:24:10.869613 342 log.go:172] (0xc00062eaa0) (1) Data frame handling\nI0427 13:24:10.869626 342 log.go:172] (0xc00062eaa0) (1) Data frame sent\nI0427 13:24:10.869883 342 log.go:172] (0xc000117080) (0xc00062eaa0) Stream removed, broadcasting: 1\nI0427 13:24:10.870047 342 log.go:172] (0xc000117080) Go away received\nI0427 13:24:10.870183 342 log.go:172] (0xc000117080) (0xc00062eaa0) Stream removed, broadcasting: 1\nI0427 13:24:10.870204 342 log.go:172] (0xc000117080) 
(0xc00062e1e0) Stream removed, broadcasting: 3\nI0427 13:24:10.870212 342 log.go:172] (0xc000117080) (0xc00026c000) Stream removed, broadcasting: 5\n" Apr 27 13:24:10.875: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 27 13:24:10.875: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 27 13:24:10.875: INFO: Waiting for statefulset status.replicas updated to 0 Apr 27 13:24:10.926: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 27 13:24:20.935: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 27 13:24:20.935: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 27 13:24:20.935: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 27 13:24:20.948: INFO: POD NODE PHASE GRACE CONDITIONS Apr 27 13:24:20.948: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:24:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:24:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:36 +0000 UTC }] Apr 27 13:24:20.948: INFO: ss-1 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:24:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:24:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:58 +0000 UTC }] Apr 27 13:24:20.948: INFO: ss-2 
iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:24:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:24:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:58 +0000 UTC }] Apr 27 13:24:20.948: INFO: Apr 27 13:24:20.948: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 27 13:24:21.953: INFO: POD NODE PHASE GRACE CONDITIONS Apr 27 13:24:21.953: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:24:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:24:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:36 +0000 UTC }] Apr 27 13:24:21.953: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:24:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:24:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:58 +0000 UTC }] Apr 27 13:24:21.953: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:24:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:24:11 +0000 UTC ContainersNotReady containers with 
unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:58 +0000 UTC }] Apr 27 13:24:21.953: INFO: Apr 27 13:24:21.953: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 27 13:24:22.958: INFO: POD NODE PHASE GRACE CONDITIONS Apr 27 13:24:22.958: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:24:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:24:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:36 +0000 UTC }] Apr 27 13:24:22.959: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:24:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:24:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:58 +0000 UTC }] Apr 27 13:24:22.959: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:24:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:24:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:58 +0000 UTC }] Apr 27 13:24:22.959: INFO: Apr 27 13:24:22.959: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 27 13:24:23.963: INFO: POD NODE PHASE GRACE CONDITIONS Apr 27 13:24:23.963: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 
00:00:00 +0000 UTC 2020-04-27 13:23:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:24:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:24:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:36 +0000 UTC }] Apr 27 13:24:23.963: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:24:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:24:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:23:58 +0000 UTC }] Apr 27 13:24:23.963: INFO: Apr 27 13:24:23.963: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 27 13:24:24.968: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.979493301s Apr 27 13:24:25.972: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.975052524s Apr 27 13:24:26.977: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.970507288s Apr 27 13:24:27.981: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.96616513s Apr 27 13:24:28.985: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.96159109s Apr 27 13:24:29.990: INFO: Verifying statefulset ss doesn't scale past 0 for another 957.26523ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-2269 Apr 27 13:24:30.994: INFO: Scaling statefulset ss to 0 Apr 27 13:24:31.003: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 27 13:24:31.006: INFO: Deleting all statefulset in ns statefulset-2269 Apr 27 13:24:31.009: INFO: Scaling statefulset ss to 0 Apr 27 13:24:31.016: INFO: Waiting for statefulset status.replicas updated to 0 Apr 27 13:24:31.019: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:24:31.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2269" for this suite. Apr 27 13:24:37.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:24:37.132: INFO: namespace statefulset-2269 deletion completed in 6.095602974s • [SLOW TEST:61.030 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:24:37.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Apr 27 13:24:37.224: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:24:37.244: INFO: Number of nodes with available pods: 0 Apr 27 13:24:37.244: INFO: Node iruya-worker is running more than one daemon pod Apr 27 13:24:38.250: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:24:38.253: INFO: Number of nodes with available pods: 0 Apr 27 13:24:38.253: INFO: Node iruya-worker is running more than one daemon pod Apr 27 13:24:39.250: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:24:39.253: INFO: Number of nodes with available pods: 0 Apr 27 13:24:39.253: INFO: Node iruya-worker is running more than one daemon pod Apr 27 13:24:40.250: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:24:40.253: INFO: Number of nodes with available pods: 0 Apr 27 13:24:40.254: INFO: Node iruya-worker is running more than one daemon pod Apr 27 13:24:41.250: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:24:41.253: INFO: Number of nodes with available pods: 1 Apr 27 
13:24:41.253: INFO: Node iruya-worker2 is running more than one daemon pod Apr 27 13:24:42.249: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:24:42.252: INFO: Number of nodes with available pods: 2 Apr 27 13:24:42.252: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Apr 27 13:24:42.301: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:24:42.303: INFO: Number of nodes with available pods: 1 Apr 27 13:24:42.303: INFO: Node iruya-worker2 is running more than one daemon pod Apr 27 13:24:43.309: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:24:43.313: INFO: Number of nodes with available pods: 1 Apr 27 13:24:43.313: INFO: Node iruya-worker2 is running more than one daemon pod Apr 27 13:24:44.308: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:24:44.311: INFO: Number of nodes with available pods: 1 Apr 27 13:24:44.311: INFO: Node iruya-worker2 is running more than one daemon pod Apr 27 13:24:45.309: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:24:45.313: INFO: Number of nodes with available pods: 1 Apr 27 13:24:45.313: INFO: Node iruya-worker2 is running more than one daemon pod Apr 27 13:24:46.309: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:24:46.312: INFO: Number of nodes with available pods: 1 Apr 27 13:24:46.312: INFO: Node iruya-worker2 is running more than one daemon pod Apr 27 13:24:47.309: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:24:47.312: INFO: Number of nodes with available pods: 1 Apr 27 13:24:47.312: INFO: Node iruya-worker2 is running more than one daemon pod Apr 27 13:24:48.308: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:24:48.312: INFO: Number of nodes with available pods: 1 Apr 27 13:24:48.312: INFO: Node iruya-worker2 is running more than one daemon pod Apr 27 13:24:49.309: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:24:49.312: INFO: Number of nodes with available pods: 1 Apr 27 13:24:49.312: INFO: Node iruya-worker2 is running more than one daemon pod Apr 27 13:24:50.309: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:24:50.312: INFO: Number of nodes with available pods: 1 Apr 27 13:24:50.312: INFO: Node iruya-worker2 is running more than one daemon pod Apr 27 13:24:51.309: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:24:51.312: INFO: Number of nodes with available pods: 1 Apr 27 13:24:51.312: INFO: Node iruya-worker2 is running more than one daemon pod Apr 27 13:24:52.308: INFO: DaemonSet pods can't tolerate node iruya-control-plane 
with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:24:52.321: INFO: Number of nodes with available pods: 1 Apr 27 13:24:52.321: INFO: Node iruya-worker2 is running more than one daemon pod Apr 27 13:24:53.310: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:24:53.314: INFO: Number of nodes with available pods: 1 Apr 27 13:24:53.314: INFO: Node iruya-worker2 is running more than one daemon pod Apr 27 13:24:54.308: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:24:54.311: INFO: Number of nodes with available pods: 1 Apr 27 13:24:54.311: INFO: Node iruya-worker2 is running more than one daemon pod Apr 27 13:24:55.309: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:24:55.312: INFO: Number of nodes with available pods: 2 Apr 27 13:24:55.312: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1867, will wait for the garbage collector to delete the pods Apr 27 13:24:55.375: INFO: Deleting DaemonSet.extensions daemon-set took: 5.818162ms Apr 27 13:24:55.675: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.30295ms Apr 27 13:25:01.979: INFO: Number of nodes with available pods: 0 Apr 27 13:25:01.979: INFO: Number of running nodes: 0, number of available pods: 0 Apr 27 13:25:01.983: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1867/daemonsets","resourceVersion":"7718962"},"items":null} Apr 27 13:25:01.985: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1867/pods","resourceVersion":"7718962"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:25:01.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1867" for this suite. Apr 27 13:25:08.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:25:08.075: INFO: namespace daemonsets-1867 deletion completed in 6.081604962s • [SLOW TEST:30.943 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:25:08.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to 
test substitution in container's args Apr 27 13:25:08.142: INFO: Waiting up to 5m0s for pod "var-expansion-bcff62d2-59d1-4e4b-ad4d-f961e595c048" in namespace "var-expansion-9341" to be "success or failure" Apr 27 13:25:08.146: INFO: Pod "var-expansion-bcff62d2-59d1-4e4b-ad4d-f961e595c048": Phase="Pending", Reason="", readiness=false. Elapsed: 3.6542ms Apr 27 13:25:10.184: INFO: Pod "var-expansion-bcff62d2-59d1-4e4b-ad4d-f961e595c048": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041738399s Apr 27 13:25:12.327: INFO: Pod "var-expansion-bcff62d2-59d1-4e4b-ad4d-f961e595c048": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185013267s Apr 27 13:25:14.332: INFO: Pod "var-expansion-bcff62d2-59d1-4e4b-ad4d-f961e595c048": Phase="Running", Reason="", readiness=true. Elapsed: 6.189373162s Apr 27 13:25:16.351: INFO: Pod "var-expansion-bcff62d2-59d1-4e4b-ad4d-f961e595c048": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.209162606s STEP: Saw pod success Apr 27 13:25:16.351: INFO: Pod "var-expansion-bcff62d2-59d1-4e4b-ad4d-f961e595c048" satisfied condition "success or failure" Apr 27 13:25:16.355: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-bcff62d2-59d1-4e4b-ad4d-f961e595c048 container dapi-container: STEP: delete the pod Apr 27 13:25:16.394: INFO: Waiting for pod var-expansion-bcff62d2-59d1-4e4b-ad4d-f961e595c048 to disappear Apr 27 13:25:16.422: INFO: Pod var-expansion-bcff62d2-59d1-4e4b-ad4d-f961e595c048 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:25:16.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9341" for this suite. 
Apr 27 13:25:22.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:25:22.581: INFO: namespace var-expansion-9341 deletion completed in 6.154866164s • [SLOW TEST:14.506 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:25:22.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 27 13:25:22.722: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 27 13:25:22.744: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 27 13:25:27.760: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 27 13:25:27.760: INFO: Creating deployment "test-rolling-update-deployment" Apr 27 13:25:27.764: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the 
one the adopted replica set "test-rolling-update-controller" has Apr 27 13:25:27.809: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 27 13:25:29.816: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 27 13:25:29.818: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723590727, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723590727, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723590727, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723590727, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 27 13:25:31.822: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723590727, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723590727, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723590727, loc:(*time.Location)(0x7ead8c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723590727, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 27 13:25:33.822: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 27 13:25:33.829: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-2292,SelfLink:/apis/apps/v1/namespaces/deployment-2292/deployments/test-rolling-update-deployment,UID:fe7b428b-cc34-4bef-8f54-d95f491dcdd1,ResourceVersion:7719112,Generation:1,CreationTimestamp:2020-04-27 13:25:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-27 13:25:27 +0000 UTC 2020-04-27 13:25:27 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-27 13:25:32 +0000 UTC 2020-04-27 13:25:27 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 27 13:25:33.833: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-2292,SelfLink:/apis/apps/v1/namespaces/deployment-2292/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:10d01edd-7174-4b02-b86c-3afcb302110e,ResourceVersion:7719100,Generation:1,CreationTimestamp:2020-04-27 13:25:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment fe7b428b-cc34-4bef-8f54-d95f491dcdd1 0xc002baf357 0xc002baf358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 27 13:25:33.833: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 27 13:25:33.833: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-2292,SelfLink:/apis/apps/v1/namespaces/deployment-2292/replicasets/test-rolling-update-controller,UID:7460ea1a-2e23-42c8-86b1-534a8c66bcd2,ResourceVersion:7719110,Generation:2,CreationTimestamp:2020-04-27 13:25:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment fe7b428b-cc34-4bef-8f54-d95f491dcdd1 0xc002baf287 0xc002baf288}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 27 13:25:33.835: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-6ngz2" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-6ngz2,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-2292,SelfLink:/api/v1/namespaces/deployment-2292/pods/test-rolling-update-deployment-79f6b9d75c-6ngz2,UID:90973456-83cc-4b2d-a076-70a3652c7509,ResourceVersion:7719099,Generation:0,CreationTimestamp:2020-04-27 13:25:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 10d01edd-7174-4b02-b86c-3afcb302110e 0xc0022cba07 0xc0022cba08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hrjjq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hrjjq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-hrjjq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022cba80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022cbaa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:25:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:25:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:25:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:25:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.41,StartTime:2020-04-27 13:25:28 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-27 13:25:31 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://95a1b73c5a33d3af14c2a5c8b793e948f6dbeb64e211b746ee56099d851f112f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:25:33.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-2292" for this suite. Apr 27 13:25:39.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:25:40.033: INFO: namespace deployment-2292 deletion completed in 6.194101629s • [SLOW TEST:17.451 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:25:40.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Apr 27 13:25:40.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Apr 27 13:25:40.649: INFO: stderr: "" Apr 27 13:25:40.649: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:25:40.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1261" for this suite. 
Apr 27 13:25:46.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:25:47.106: INFO: namespace kubectl-1261 deletion completed in 6.452247811s • [SLOW TEST:7.071 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:25:47.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Apr 27 13:25:47.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3247' Apr 27 13:25:47.583: INFO: stderr: "" Apr 27 13:25:47.584: INFO: stdout: "pod/pause created\n" Apr 27 13:25:47.584: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 27 13:25:47.584: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3247" to be "running and ready" Apr 27 
13:25:47.608: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 24.48192ms Apr 27 13:25:49.612: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02862646s Apr 27 13:25:51.616: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.0323377s Apr 27 13:25:51.616: INFO: Pod "pause" satisfied condition "running and ready" Apr 27 13:25:51.616: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Apr 27 13:25:51.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3247' Apr 27 13:25:51.735: INFO: stderr: "" Apr 27 13:25:51.735: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 27 13:25:51.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3247' Apr 27 13:25:51.894: INFO: stderr: "" Apr 27 13:25:51.894: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 27 13:25:51.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3247' Apr 27 13:25:51.995: INFO: stderr: "" Apr 27 13:25:51.995: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 27 13:25:51.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3247' Apr 27 13:25:52.168: INFO: stderr: "" Apr 27 13:25:52.168: INFO: stdout: "NAME READY STATUS RESTARTS AGE 
TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Apr 27 13:25:52.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3247' Apr 27 13:25:52.356: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 27 13:25:52.357: INFO: stdout: "pod \"pause\" force deleted\n" Apr 27 13:25:52.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3247' Apr 27 13:25:52.495: INFO: stderr: "No resources found.\n" Apr 27 13:25:52.495: INFO: stdout: "" Apr 27 13:25:52.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3247 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 27 13:25:52.653: INFO: stderr: "" Apr 27 13:25:52.653: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:25:52.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3247" for this suite. 
Apr 27 13:26:00.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:26:00.754: INFO: namespace kubectl-3247 deletion completed in 8.097892006s • [SLOW TEST:13.648 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:26:00.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-28c5bea9-2445-45d3-97c2-4260f8b17dd8 STEP: Creating a pod to test consume secrets Apr 27 13:26:00.959: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2f6e18d1-7229-4523-a5e0-4101eeb4bd91" in namespace "projected-5864" to be "success or failure" Apr 27 13:26:01.023: INFO: Pod "pod-projected-secrets-2f6e18d1-7229-4523-a5e0-4101eeb4bd91": 
Phase="Pending", Reason="", readiness=false. Elapsed: 64.400633ms Apr 27 13:26:03.030: INFO: Pod "pod-projected-secrets-2f6e18d1-7229-4523-a5e0-4101eeb4bd91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071659336s Apr 27 13:26:05.034: INFO: Pod "pod-projected-secrets-2f6e18d1-7229-4523-a5e0-4101eeb4bd91": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075135559s Apr 27 13:26:07.037: INFO: Pod "pod-projected-secrets-2f6e18d1-7229-4523-a5e0-4101eeb4bd91": Phase="Running", Reason="", readiness=true. Elapsed: 6.078753754s Apr 27 13:26:09.041: INFO: Pod "pod-projected-secrets-2f6e18d1-7229-4523-a5e0-4101eeb4bd91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.082540201s STEP: Saw pod success Apr 27 13:26:09.041: INFO: Pod "pod-projected-secrets-2f6e18d1-7229-4523-a5e0-4101eeb4bd91" satisfied condition "success or failure" Apr 27 13:26:09.043: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-2f6e18d1-7229-4523-a5e0-4101eeb4bd91 container projected-secret-volume-test: STEP: delete the pod Apr 27 13:26:09.096: INFO: Waiting for pod pod-projected-secrets-2f6e18d1-7229-4523-a5e0-4101eeb4bd91 to disappear Apr 27 13:26:09.118: INFO: Pod pod-projected-secrets-2f6e18d1-7229-4523-a5e0-4101eeb4bd91 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:26:09.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5864" for this suite. 
Apr 27 13:26:15.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:26:15.278: INFO: namespace projected-5864 deletion completed in 6.1572301s
• [SLOW TEST:14.523 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:26:15.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 27 13:26:16.264: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"29057e42-25b5-43dc-b7d1-1d0a2d0d1250", Controller:(*bool)(0xc0025d8ce2), BlockOwnerDeletion:(*bool)(0xc0025d8ce3)}}
Apr 27 13:26:16.316: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"94ca1700-0378-48e6-a133-a8caf0350562", Controller:(*bool)(0xc002151522), BlockOwnerDeletion:(*bool)(0xc002151523)}}
Apr 27 13:26:16.396: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"9a776018-17ab-407c-946a-d0f2cd2e5c06", Controller:(*bool)(0xc0025d8e8a), BlockOwnerDeletion:(*bool)(0xc0025d8e8b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:26:21.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9466" for this suite.
Apr 27 13:26:27.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:26:27.561: INFO: namespace gc-9466 deletion completed in 6.126211423s
• [SLOW TEST:12.283 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:26:27.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Apr 27 13:26:27.715: INFO: Waiting up to 5m0s for pod "client-containers-ccd0f0bc-ac86-41a0-b8c0-51da0f6d5b5f" in namespace "containers-4354" to be "success or failure"
Apr 27 13:26:27.740: INFO: Pod "client-containers-ccd0f0bc-ac86-41a0-b8c0-51da0f6d5b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 25.492113ms
Apr 27 13:26:29.744: INFO: Pod "client-containers-ccd0f0bc-ac86-41a0-b8c0-51da0f6d5b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029667191s
Apr 27 13:26:31.789: INFO: Pod "client-containers-ccd0f0bc-ac86-41a0-b8c0-51da0f6d5b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074658361s
Apr 27 13:26:33.794: INFO: Pod "client-containers-ccd0f0bc-ac86-41a0-b8c0-51da0f6d5b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079058126s
Apr 27 13:26:35.798: INFO: Pod "client-containers-ccd0f0bc-ac86-41a0-b8c0-51da0f6d5b5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.083213454s
STEP: Saw pod success
Apr 27 13:26:35.798: INFO: Pod "client-containers-ccd0f0bc-ac86-41a0-b8c0-51da0f6d5b5f" satisfied condition "success or failure"
Apr 27 13:26:35.801: INFO: Trying to get logs from node iruya-worker2 pod client-containers-ccd0f0bc-ac86-41a0-b8c0-51da0f6d5b5f container test-container:
STEP: delete the pod
Apr 27 13:26:35.833: INFO: Waiting for pod client-containers-ccd0f0bc-ac86-41a0-b8c0-51da0f6d5b5f to disappear
Apr 27 13:26:35.915: INFO: Pod client-containers-ccd0f0bc-ac86-41a0-b8c0-51da0f6d5b5f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:26:35.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4354" for this suite.
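The "success or failure" wait above is a plain poll loop over the pod's phase. A minimal sketch in shell, with a hypothetical `get_phase` stub standing in for a `kubectl get pod -o jsonpath='{.status.phase}'` call (the stub's Pending-then-Succeeded schedule is illustrative, not from the framework):

```shell
#!/bin/sh
# get_phase simulates the phases seen in the log: Pending on the first polls,
# then Succeeded. In real use this would be a kubectl call against the cluster.
get_phase() {
  if [ "$1" -lt 3 ]; then echo "Pending"; else echo "Succeeded"; fi
}

poll=0
phase=""
while [ "$poll" -lt 10 ]; do           # the framework waits up to 5m0s
  phase=$(get_phase "$poll")
  echo "Pod phase=${phase} (poll ${poll})"
  case "$phase" in
    Succeeded|Failed) break ;;         # either terminal phase ends the wait
  esac
  poll=$((poll + 1))                   # the real loop sleeps ~2s between polls
done
echo "final phase: ${phase}"
```

The log's "satisfied condition" line corresponds to the loop exiting on a terminal phase before the deadline.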
Apr 27 13:26:41.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:26:42.038: INFO: namespace containers-4354 deletion completed in 6.119372048s
• [SLOW TEST:14.477 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:26:42.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Apr 27 13:26:42.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9092'
Apr 27 13:26:42.555: INFO: stderr: ""
Apr 27 13:26:42.555: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 27 13:26:42.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9092'
Apr 27 13:26:42.700: INFO: stderr: ""
Apr 27 13:26:42.700: INFO: stdout: "update-demo-nautilus-5brl6 update-demo-nautilus-n6jgg "
Apr 27 13:26:42.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5brl6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9092'
Apr 27 13:26:42.852: INFO: stderr: ""
Apr 27 13:26:42.852: INFO: stdout: ""
Apr 27 13:26:42.852: INFO: update-demo-nautilus-5brl6 is created but not running
Apr 27 13:26:47.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9092'
Apr 27 13:26:47.964: INFO: stderr: ""
Apr 27 13:26:47.964: INFO: stdout: "update-demo-nautilus-5brl6 update-demo-nautilus-n6jgg "
Apr 27 13:26:47.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5brl6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9092'
Apr 27 13:26:48.048: INFO: stderr: ""
Apr 27 13:26:48.048: INFO: stdout: ""
Apr 27 13:26:48.048: INFO: update-demo-nautilus-5brl6 is created but not running
Apr 27 13:26:53.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9092'
Apr 27 13:26:53.140: INFO: stderr: ""
Apr 27 13:26:53.140: INFO: stdout: "update-demo-nautilus-5brl6 update-demo-nautilus-n6jgg "
Apr 27 13:26:53.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5brl6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9092'
Apr 27 13:26:53.229: INFO: stderr: ""
Apr 27 13:26:53.229: INFO: stdout: "true"
Apr 27 13:26:53.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5brl6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9092'
Apr 27 13:26:53.315: INFO: stderr: ""
Apr 27 13:26:53.315: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 27 13:26:53.315: INFO: validating pod update-demo-nautilus-5brl6
Apr 27 13:26:53.319: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 27 13:26:53.319: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 27 13:26:53.319: INFO: update-demo-nautilus-5brl6 is verified up and running
Apr 27 13:26:53.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n6jgg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9092'
Apr 27 13:26:53.411: INFO: stderr: ""
Apr 27 13:26:53.411: INFO: stdout: "true"
Apr 27 13:26:53.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n6jgg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9092'
Apr 27 13:26:53.642: INFO: stderr: ""
Apr 27 13:26:53.642: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 27 13:26:53.642: INFO: validating pod update-demo-nautilus-n6jgg
Apr 27 13:26:53.647: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 27 13:26:53.647: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 27 13:26:53.647: INFO: update-demo-nautilus-n6jgg is verified up and running
STEP: scaling down the replication controller
Apr 27 13:26:53.686: INFO: scanned /root for discovery docs:
Apr 27 13:26:53.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9092'
Apr 27 13:26:54.817: INFO: stderr: ""
Apr 27 13:26:54.817: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 27 13:26:54.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9092'
Apr 27 13:26:55.118: INFO: stderr: ""
Apr 27 13:26:55.118: INFO: stdout: "update-demo-nautilus-5brl6 update-demo-nautilus-n6jgg "
STEP: Replicas for name=update-demo: expected=1 actual=2
Apr 27 13:27:00.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9092'
Apr 27 13:27:00.220: INFO: stderr: ""
Apr 27 13:27:00.220: INFO: stdout: "update-demo-nautilus-5brl6 update-demo-nautilus-n6jgg "
STEP: Replicas for name=update-demo: expected=1 actual=2
Apr 27 13:27:05.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9092'
Apr 27 13:27:05.317: INFO: stderr: ""
Apr 27 13:27:05.317: INFO: stdout: "update-demo-nautilus-n6jgg "
Apr 27 13:27:05.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n6jgg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9092'
Apr 27 13:27:05.406: INFO: stderr: ""
Apr 27 13:27:05.406: INFO: stdout: "true"
Apr 27 13:27:05.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n6jgg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9092'
Apr 27 13:27:05.504: INFO: stderr: ""
Apr 27 13:27:05.504: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 27 13:27:05.504: INFO: validating pod update-demo-nautilus-n6jgg
Apr 27 13:27:05.507: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 27 13:27:05.507: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 27 13:27:05.507: INFO: update-demo-nautilus-n6jgg is verified up and running
STEP: scaling up the replication controller
Apr 27 13:27:05.509: INFO: scanned /root for discovery docs:
Apr 27 13:27:05.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9092'
Apr 27 13:27:06.670: INFO: stderr: ""
Apr 27 13:27:06.670: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 27 13:27:06.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9092'
Apr 27 13:27:06.826: INFO: stderr: ""
Apr 27 13:27:06.826: INFO: stdout: "update-demo-nautilus-l8bmz update-demo-nautilus-n6jgg "
Apr 27 13:27:06.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l8bmz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9092'
Apr 27 13:27:06.919: INFO: stderr: ""
Apr 27 13:27:06.919: INFO: stdout: ""
Apr 27 13:27:06.919: INFO: update-demo-nautilus-l8bmz is created but not running
Apr 27 13:27:11.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9092'
Apr 27 13:27:12.028: INFO: stderr: ""
Apr 27 13:27:12.028: INFO: stdout: "update-demo-nautilus-l8bmz update-demo-nautilus-n6jgg "
Apr 27 13:27:12.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l8bmz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9092'
Apr 27 13:27:12.118: INFO: stderr: ""
Apr 27 13:27:12.118: INFO: stdout: "true"
Apr 27 13:27:12.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l8bmz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9092'
Apr 27 13:27:12.214: INFO: stderr: ""
Apr 27 13:27:12.214: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 27 13:27:12.214: INFO: validating pod update-demo-nautilus-l8bmz
Apr 27 13:27:12.217: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 27 13:27:12.217: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 27 13:27:12.217: INFO: update-demo-nautilus-l8bmz is verified up and running
Apr 27 13:27:12.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n6jgg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9092'
Apr 27 13:27:12.307: INFO: stderr: ""
Apr 27 13:27:12.307: INFO: stdout: "true"
Apr 27 13:27:12.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n6jgg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9092'
Apr 27 13:27:12.395: INFO: stderr: ""
Apr 27 13:27:12.395: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 27 13:27:12.395: INFO: validating pod update-demo-nautilus-n6jgg
Apr 27 13:27:12.398: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 27 13:27:12.398: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 27 13:27:12.398: INFO: update-demo-nautilus-n6jgg is verified up and running
STEP: using delete to clean up resources
Apr 27 13:27:12.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9092'
Apr 27 13:27:12.519: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 27 13:27:12.519: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 27 13:27:12.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9092'
Apr 27 13:27:12.610: INFO: stderr: "No resources found.\n"
Apr 27 13:27:12.610: INFO: stdout: ""
Apr 27 13:27:12.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9092 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 27 13:27:12.805: INFO: stderr: ""
Apr 27 13:27:12.805: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:27:12.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9092" for this suite.
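The scale test above repeatedly lists pods by label and compares the count against the expected replica count until they match. A minimal sketch of that loop, with a hypothetical `list_pods` stub in place of the `kubectl get pods -o template ... -l name=update-demo` call (the stub's timing is illustrative):

```shell
#!/bin/sh
# list_pods simulates the log: two pod names on early polls, one pod name
# once the scale-down from 2 to 1 replicas has landed.
list_pods() {
  if [ "$1" -lt 2 ]; then
    echo "update-demo-nautilus-5brl6 update-demo-nautilus-n6jgg"
  else
    echo "update-demo-nautilus-n6jgg"
  fi
}

expected=1
poll=0
pods=""
actual=0
while [ "$poll" -lt 10 ]; do
  pods=$(list_pods "$poll")
  actual=$(echo "$pods" | wc -w)       # count whitespace-separated pod names
  [ "$actual" -eq "$expected" ] && break
  echo "Replicas for name=update-demo: expected=${expected} actual=${actual}"
  poll=$((poll + 1))                   # the real loop sleeps 5s between polls
done
echo "settled on: ${pods}"
```

This mirrors the repeated "expected=1 actual=2" lines in the log: each mismatch prints the discrepancy and the loop polls again.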
Apr 27 13:27:18.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:27:19.006: INFO: namespace kubectl-9092 deletion completed in 6.174330761s
• [SLOW TEST:36.967 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:27:19.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Apr 27 13:27:19.224: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Apr 27 13:27:19.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5896'
Apr 27 13:27:19.624: INFO: stderr: ""
Apr 27 13:27:19.624: INFO: stdout: "service/redis-slave created\n"
Apr 27 13:27:19.624: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Apr 27 13:27:19.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5896'
Apr 27 13:27:20.022: INFO: stderr: ""
Apr 27 13:27:20.022: INFO: stdout: "service/redis-master created\n"
Apr 27 13:27:20.022: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Apr 27 13:27:20.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5896'
Apr 27 13:27:20.431: INFO: stderr: ""
Apr 27 13:27:20.431: INFO: stdout: "service/frontend created\n"
Apr 27 13:27:20.431: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Apr 27 13:27:20.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5896'
Apr 27 13:27:20.741: INFO: stderr: ""
Apr 27 13:27:20.741: INFO: stdout: "deployment.apps/frontend created\n"
Apr 27 13:27:20.742: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 27 13:27:20.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5896'
Apr 27 13:27:21.049: INFO: stderr: ""
Apr 27 13:27:21.049: INFO: stdout: "deployment.apps/redis-master created\n"
Apr 27 13:27:21.050: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Apr 27 13:27:21.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5896'
Apr 27 13:27:21.362: INFO: stderr: ""
Apr 27 13:27:21.362: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Apr 27 13:27:21.362: INFO: Waiting for all frontend pods to be Running.
Apr 27 13:27:31.413: INFO: Waiting for frontend to serve content.
Apr 27 13:27:31.429: INFO: Trying to add a new entry to the guestbook.
Apr 27 13:27:31.442: INFO: Verifying that added entry can be retrieved.
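The validation steps above boil down to a write-then-read round trip against the frontend. A minimal sketch of that check, with a hypothetical `fake_frontend` stub standing in for HTTP requests to the frontend service (the stub, its command names, and its JSON shapes are illustrative, not the e2e framework's actual protocol):

```shell
#!/bin/sh
# fake_frontend simulates the guestbook frontend: "set" stores a value,
# "get" returns whatever was stored, as a small JSON reply.
STORE=""
fake_frontend() {
  case "$1" in
    set) STORE="$2"; echo '{"message": "Updated"}' ;;
    get) echo "{\"data\": \"$STORE\"}" ;;
  esac
}

fake_frontend set "TestEntry" >/dev/null   # "Trying to add a new entry"
reply=$(fake_frontend get)                 # "Verifying that added entry can be retrieved"
echo "$reply"
```

If the read does not return the entry just written, the real test keeps retrying and eventually fails the spec.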
STEP: using delete to clean up resources
Apr 27 13:27:31.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5896'
Apr 27 13:27:31.706: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 27 13:27:31.706: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Apr 27 13:27:31.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5896'
Apr 27 13:27:32.012: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 27 13:27:32.012: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Apr 27 13:27:32.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5896'
Apr 27 13:27:32.279: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 27 13:27:32.279: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 27 13:27:32.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5896'
Apr 27 13:27:32.443: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 27 13:27:32.444: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 27 13:27:32.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5896'
Apr 27 13:27:32.616: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 27 13:27:32.616: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Apr 27 13:27:32.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5896'
Apr 27 13:27:32.999: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 27 13:27:32.999: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:27:32.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5896" for this suite.
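The cleanup passes above rely on force deletion, which (as the kubectl warning notes) does not wait for termination. The Update Demo test earlier pairs the force delete with a follow-up query confirming nothing labeled remains; a minimal sketch of that verification, with a hypothetical `fake_kubectl_get` stub in place of `kubectl get rc,svc -l ... --no-headers` after the delete:

```shell
#!/bin/sh
# fake_kubectl_get simulates kubectl after a successful cleanup: the
# "No resources found." notice goes to stderr and stdout stays empty.
fake_kubectl_get() {
  echo "No resources found." >&2
  # no stdout: nothing left matching the label selector
}

leftovers=$(fake_kubectl_get 2>/dev/null)   # keep only stdout, as the test does
if [ -z "$leftovers" ]; then
  echo "cleanup verified: no resources remain"
else
  echo "resources still present: $leftovers"
fi
```

Checking stdout rather than stderr matters here: kubectl reports "No resources found." on stderr, so an empty stdout is the actual success signal.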
Apr 27 13:28:13.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:28:14.176: INFO: namespace kubectl-5896 deletion completed in 41.149432757s
• [SLOW TEST:55.170 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:28:14.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 27 13:28:14.308: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2efc08c9-1342-40d9-aeb8-b3984aaa5e5c" in namespace "projected-3504" to be "success or failure"
Apr 27 13:28:14.338: INFO: Pod "downwardapi-volume-2efc08c9-1342-40d9-aeb8-b3984aaa5e5c": Phase="Pending", Reason="", readiness=false. Elapsed: 30.046588ms
Apr 27 13:28:16.344: INFO: Pod "downwardapi-volume-2efc08c9-1342-40d9-aeb8-b3984aaa5e5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035788566s
Apr 27 13:28:18.516: INFO: Pod "downwardapi-volume-2efc08c9-1342-40d9-aeb8-b3984aaa5e5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207852451s
Apr 27 13:28:20.520: INFO: Pod "downwardapi-volume-2efc08c9-1342-40d9-aeb8-b3984aaa5e5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.211959125s
STEP: Saw pod success
Apr 27 13:28:20.520: INFO: Pod "downwardapi-volume-2efc08c9-1342-40d9-aeb8-b3984aaa5e5c" satisfied condition "success or failure"
Apr 27 13:28:20.522: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-2efc08c9-1342-40d9-aeb8-b3984aaa5e5c container client-container:
STEP: delete the pod
Apr 27 13:28:20.625: INFO: Waiting for pod downwardapi-volume-2efc08c9-1342-40d9-aeb8-b3984aaa5e5c to disappear
Apr 27 13:28:20.697: INFO: Pod downwardapi-volume-2efc08c9-1342-40d9-aeb8-b3984aaa5e5c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:28:20.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3504" for this suite.
Apr 27 13:28:26.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:28:26.880: INFO: namespace projected-3504 deletion completed in 6.179532503s
• [SLOW TEST:12.704 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:28:26.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:28:33.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-133" for this suite.
Apr 27 13:29:13.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:29:13.238: INFO: namespace kubelet-test-133 deletion completed in 40.095527379s
• [SLOW TEST:46.357 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:29:13.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 27 13:29:13.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6758'
Apr 27 13:29:13.523: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 27 13:29:13.523: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Apr 27 13:29:13.538: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Apr 27 13:29:13.572: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Apr 27 13:29:13.638: INFO: scanned /root for discovery docs:
Apr 27 13:29:13.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-6758'
Apr 27 13:29:31.711: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Apr 27 13:29:31.711: INFO: stdout: "Created e2e-test-nginx-rc-ea85506f6b5b319021b1a34078b9e995\nScaling up e2e-test-nginx-rc-ea85506f6b5b319021b1a34078b9e995 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-ea85506f6b5b319021b1a34078b9e995 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-ea85506f6b5b319021b1a34078b9e995 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Apr 27 13:29:31.711: INFO: stdout: "Created e2e-test-nginx-rc-ea85506f6b5b319021b1a34078b9e995\nScaling up e2e-test-nginx-rc-ea85506f6b5b319021b1a34078b9e995 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-ea85506f6b5b319021b1a34078b9e995 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-ea85506f6b5b319021b1a34078b9e995 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Apr 27 13:29:31.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6758'
Apr 27 13:29:31.810: INFO: stderr: ""
Apr 27 13:29:31.810: INFO: stdout: "e2e-test-nginx-rc-ea85506f6b5b319021b1a34078b9e995-ksgfm e2e-test-nginx-rc-r5pg6 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Apr 27 13:29:36.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6758'
Apr 27 13:29:36.900: INFO: stderr: ""
Apr 27 13:29:36.900: INFO: stdout: "e2e-test-nginx-rc-ea85506f6b5b319021b1a34078b9e995-ksgfm "
Apr 27 13:29:36.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ea85506f6b5b319021b1a34078b9e995-ksgfm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists .
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6758' Apr 27 13:29:36.990: INFO: stderr: "" Apr 27 13:29:36.990: INFO: stdout: "true" Apr 27 13:29:36.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ea85506f6b5b319021b1a34078b9e995-ksgfm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6758' Apr 27 13:29:37.070: INFO: stderr: "" Apr 27 13:29:37.070: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Apr 27 13:29:37.070: INFO: e2e-test-nginx-rc-ea85506f6b5b319021b1a34078b9e995-ksgfm is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Apr 27 13:29:37.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6758' Apr 27 13:29:37.168: INFO: stderr: "" Apr 27 13:29:37.168: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:29:37.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6758" for this suite. 
Apr 27 13:29:43.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:29:43.392: INFO: namespace kubectl-6758 deletion completed in 6.185216686s • [SLOW TEST:30.154 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:29:43.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 27 13:29:43.634: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:29:43.690: INFO: Number of nodes with available pods: 0 Apr 27 13:29:43.690: INFO: Node iruya-worker is running more than one daemon pod Apr 27 13:29:44.745: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:29:44.807: INFO: Number of nodes with available pods: 0 Apr 27 13:29:44.807: INFO: Node iruya-worker is running more than one daemon pod Apr 27 13:29:45.695: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:29:45.700: INFO: Number of nodes with available pods: 0 Apr 27 13:29:45.700: INFO: Node iruya-worker is running more than one daemon pod Apr 27 13:29:47.069: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:29:47.367: INFO: Number of nodes with available pods: 0 Apr 27 13:29:47.368: INFO: Node iruya-worker is running more than one daemon pod Apr 27 13:29:47.696: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:29:47.700: INFO: Number of nodes with available pods: 0 Apr 27 13:29:47.700: INFO: Node iruya-worker is running more than one daemon pod Apr 27 13:29:48.702: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:29:48.706: INFO: Number of nodes with available pods: 0 Apr 27 13:29:48.706: INFO: Node 
iruya-worker is running more than one daemon pod Apr 27 13:29:49.704: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:29:49.707: INFO: Number of nodes with available pods: 2 Apr 27 13:29:49.707: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Apr 27 13:29:49.754: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:29:49.777: INFO: Number of nodes with available pods: 2 Apr 27 13:29:49.777: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1518, will wait for the garbage collector to delete the pods Apr 27 13:29:51.029: INFO: Deleting DaemonSet.extensions daemon-set took: 71.297988ms Apr 27 13:29:51.529: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.202291ms Apr 27 13:29:54.632: INFO: Number of nodes with available pods: 0 Apr 27 13:29:54.632: INFO: Number of running nodes: 0, number of available pods: 0 Apr 27 13:29:54.634: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1518/daemonsets","resourceVersion":"7720239"},"items":null} Apr 27 13:29:54.637: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1518/pods","resourceVersion":"7720239"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:29:54.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1518" for this suite. Apr 27 13:30:02.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:30:02.793: INFO: namespace daemonsets-1518 deletion completed in 8.144169629s • [SLOW TEST:19.401 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:30:02.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-9c9e2540-f7f9-4bf0-a995-ac888b35d274 STEP: Creating a pod to test consume configMaps Apr 27 13:30:03.114: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f2314c59-0ff8-45fb-a96c-8ab69912b178" in namespace "projected-6813" to be "success or failure" Apr 27 13:30:03.122: INFO: Pod 
"pod-projected-configmaps-f2314c59-0ff8-45fb-a96c-8ab69912b178": Phase="Pending", Reason="", readiness=false. Elapsed: 8.435501ms Apr 27 13:30:05.126: INFO: Pod "pod-projected-configmaps-f2314c59-0ff8-45fb-a96c-8ab69912b178": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012094655s Apr 27 13:30:07.130: INFO: Pod "pod-projected-configmaps-f2314c59-0ff8-45fb-a96c-8ab69912b178": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016225565s Apr 27 13:30:09.134: INFO: Pod "pod-projected-configmaps-f2314c59-0ff8-45fb-a96c-8ab69912b178": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019851054s STEP: Saw pod success Apr 27 13:30:09.134: INFO: Pod "pod-projected-configmaps-f2314c59-0ff8-45fb-a96c-8ab69912b178" satisfied condition "success or failure" Apr 27 13:30:09.136: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-f2314c59-0ff8-45fb-a96c-8ab69912b178 container projected-configmap-volume-test: STEP: delete the pod Apr 27 13:30:09.274: INFO: Waiting for pod pod-projected-configmaps-f2314c59-0ff8-45fb-a96c-8ab69912b178 to disappear Apr 27 13:30:09.284: INFO: Pod pod-projected-configmaps-f2314c59-0ff8-45fb-a96c-8ab69912b178 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:30:09.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6813" for this suite. 
Apr 27 13:30:15.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:30:15.398: INFO: namespace projected-6813 deletion completed in 6.110178719s • [SLOW TEST:12.604 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:30:15.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 27 13:30:20.067: INFO: Successfully updated pod "labelsupdate43d6b5d9-a89d-41ec-a999-242775b1fc0a" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:30:24.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1687" for this suite. 
Apr 27 13:30:46.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:30:46.491: INFO: namespace downward-api-1687 deletion completed in 22.157686581s • [SLOW TEST:31.093 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:30:46.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 27 13:30:46.628: INFO: Waiting up to 5m0s for pod "pod-5aeaba61-58df-4d3c-a55f-56a29e49324d" in namespace "emptydir-1952" to be "success or failure" Apr 27 13:30:46.674: INFO: Pod "pod-5aeaba61-58df-4d3c-a55f-56a29e49324d": Phase="Pending", Reason="", readiness=false. Elapsed: 46.060535ms Apr 27 13:30:48.679: INFO: Pod "pod-5aeaba61-58df-4d3c-a55f-56a29e49324d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050446986s Apr 27 13:30:50.683: INFO: Pod "pod-5aeaba61-58df-4d3c-a55f-56a29e49324d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.054286906s Apr 27 13:30:52.687: INFO: Pod "pod-5aeaba61-58df-4d3c-a55f-56a29e49324d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058657518s STEP: Saw pod success Apr 27 13:30:52.687: INFO: Pod "pod-5aeaba61-58df-4d3c-a55f-56a29e49324d" satisfied condition "success or failure" Apr 27 13:30:52.690: INFO: Trying to get logs from node iruya-worker2 pod pod-5aeaba61-58df-4d3c-a55f-56a29e49324d container test-container: STEP: delete the pod Apr 27 13:30:52.730: INFO: Waiting for pod pod-5aeaba61-58df-4d3c-a55f-56a29e49324d to disappear Apr 27 13:30:52.847: INFO: Pod pod-5aeaba61-58df-4d3c-a55f-56a29e49324d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:30:52.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1952" for this suite. Apr 27 13:30:58.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:30:58.980: INFO: namespace emptydir-1952 deletion completed in 6.128152299s • [SLOW TEST:12.488 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:30:58.981: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-1585/configmap-test-2d3615f7-cf52-4309-af7a-04d4a3babe8f STEP: Creating a pod to test consume configMaps Apr 27 13:30:59.138: INFO: Waiting up to 5m0s for pod "pod-configmaps-601a5485-ed7d-44ba-9f2a-7465595119da" in namespace "configmap-1585" to be "success or failure" Apr 27 13:30:59.148: INFO: Pod "pod-configmaps-601a5485-ed7d-44ba-9f2a-7465595119da": Phase="Pending", Reason="", readiness=false. Elapsed: 10.009152ms Apr 27 13:31:01.152: INFO: Pod "pod-configmaps-601a5485-ed7d-44ba-9f2a-7465595119da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014011794s Apr 27 13:31:03.156: INFO: Pod "pod-configmaps-601a5485-ed7d-44ba-9f2a-7465595119da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018002606s Apr 27 13:31:05.160: INFO: Pod "pod-configmaps-601a5485-ed7d-44ba-9f2a-7465595119da": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.022350647s STEP: Saw pod success Apr 27 13:31:05.160: INFO: Pod "pod-configmaps-601a5485-ed7d-44ba-9f2a-7465595119da" satisfied condition "success or failure" Apr 27 13:31:05.163: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-601a5485-ed7d-44ba-9f2a-7465595119da container env-test: STEP: delete the pod Apr 27 13:31:05.196: INFO: Waiting for pod pod-configmaps-601a5485-ed7d-44ba-9f2a-7465595119da to disappear Apr 27 13:31:05.213: INFO: Pod pod-configmaps-601a5485-ed7d-44ba-9f2a-7465595119da no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:31:05.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1585" for this suite. Apr 27 13:31:11.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:31:11.320: INFO: namespace configmap-1585 deletion completed in 6.10392049s • [SLOW TEST:12.339 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:31:11.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned 
in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 27 13:31:11.452: INFO: Waiting up to 5m0s for pod "downward-api-3cd973e2-f6f9-4d05-b3b3-b5689883e902" in namespace "downward-api-7527" to be "success or failure" Apr 27 13:31:11.456: INFO: Pod "downward-api-3cd973e2-f6f9-4d05-b3b3-b5689883e902": Phase="Pending", Reason="", readiness=false. Elapsed: 4.701858ms Apr 27 13:31:13.461: INFO: Pod "downward-api-3cd973e2-f6f9-4d05-b3b3-b5689883e902": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009477262s Apr 27 13:31:15.465: INFO: Pod "downward-api-3cd973e2-f6f9-4d05-b3b3-b5689883e902": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013328704s Apr 27 13:31:17.470: INFO: Pod "downward-api-3cd973e2-f6f9-4d05-b3b3-b5689883e902": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018001005s STEP: Saw pod success Apr 27 13:31:17.470: INFO: Pod "downward-api-3cd973e2-f6f9-4d05-b3b3-b5689883e902" satisfied condition "success or failure" Apr 27 13:31:17.472: INFO: Trying to get logs from node iruya-worker pod downward-api-3cd973e2-f6f9-4d05-b3b3-b5689883e902 container dapi-container: STEP: delete the pod Apr 27 13:31:17.492: INFO: Waiting for pod downward-api-3cd973e2-f6f9-4d05-b3b3-b5689883e902 to disappear Apr 27 13:31:17.497: INFO: Pod downward-api-3cd973e2-f6f9-4d05-b3b3-b5689883e902 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:31:17.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7527" for this suite. 
Apr 27 13:31:23.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:31:23.585: INFO: namespace downward-api-7527 deletion completed in 6.086359609s • [SLOW TEST:12.266 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:31:23.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-725ee595-9758-4d08-86c3-5a884ee148a7 STEP: Creating secret with name secret-projected-all-test-volume-75d6eb5b-c0f6-47ce-a897-3a038ef5422b STEP: Creating a pod to test Check all projections for projected volume plugin Apr 27 13:31:23.810: INFO: Waiting up to 5m0s for pod "projected-volume-8117ff25-4bfc-4dcd-a877-08804d958aa5" in namespace "projected-5661" to be "success or failure" Apr 27 13:31:23.843: INFO: Pod "projected-volume-8117ff25-4bfc-4dcd-a877-08804d958aa5": 
Phase="Pending", Reason="", readiness=false. Elapsed: 32.942684ms Apr 27 13:31:25.848: INFO: Pod "projected-volume-8117ff25-4bfc-4dcd-a877-08804d958aa5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037449599s Apr 27 13:31:27.852: INFO: Pod "projected-volume-8117ff25-4bfc-4dcd-a877-08804d958aa5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041665297s Apr 27 13:31:29.856: INFO: Pod "projected-volume-8117ff25-4bfc-4dcd-a877-08804d958aa5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045256029s STEP: Saw pod success Apr 27 13:31:29.856: INFO: Pod "projected-volume-8117ff25-4bfc-4dcd-a877-08804d958aa5" satisfied condition "success or failure" Apr 27 13:31:29.858: INFO: Trying to get logs from node iruya-worker pod projected-volume-8117ff25-4bfc-4dcd-a877-08804d958aa5 container projected-all-volume-test: STEP: delete the pod Apr 27 13:31:29.890: INFO: Waiting for pod projected-volume-8117ff25-4bfc-4dcd-a877-08804d958aa5 to disappear Apr 27 13:31:29.973: INFO: Pod projected-volume-8117ff25-4bfc-4dcd-a877-08804d958aa5 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:31:29.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5661" for this suite. 
Apr 27 13:31:36.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:31:36.101: INFO: namespace projected-5661 deletion completed in 6.123207709s • [SLOW TEST:12.515 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:31:36.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 27 13:31:36.257: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:31:45.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7618" for this suite. 
Apr 27 13:31:51.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:31:51.898: INFO: namespace init-container-7618 deletion completed in 6.085436365s
• [SLOW TEST:15.797 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:31:51.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-1919/configmap-test-3a5f17b8-0211-424e-9ef2-3e35046d531a
STEP: Creating a pod to test consume configMaps
Apr 27 13:31:52.216: INFO: Waiting up to 5m0s for pod "pod-configmaps-891ddab0-0775-44e5-a144-fee51358e8b3" in namespace "configmap-1919" to be "success or failure"
Apr 27 13:31:52.264: INFO: Pod "pod-configmaps-891ddab0-0775-44e5-a144-fee51358e8b3": Phase="Pending", Reason="", readiness=false. Elapsed: 48.544739ms
Apr 27 13:31:54.268: INFO: Pod "pod-configmaps-891ddab0-0775-44e5-a144-fee51358e8b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052478217s
Apr 27 13:31:56.274: INFO: Pod "pod-configmaps-891ddab0-0775-44e5-a144-fee51358e8b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058531837s
STEP: Saw pod success
Apr 27 13:31:56.274: INFO: Pod "pod-configmaps-891ddab0-0775-44e5-a144-fee51358e8b3" satisfied condition "success or failure"
Apr 27 13:31:56.277: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-891ddab0-0775-44e5-a144-fee51358e8b3 container env-test:
STEP: delete the pod
Apr 27 13:31:56.443: INFO: Waiting for pod pod-configmaps-891ddab0-0775-44e5-a144-fee51358e8b3 to disappear
Apr 27 13:31:56.522: INFO: Pod pod-configmaps-891ddab0-0775-44e5-a144-fee51358e8b3 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:31:56.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1919" for this suite.
Apr 27 13:32:02.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:32:02.742: INFO: namespace configmap-1919 deletion completed in 6.21592506s
• [SLOW TEST:10.844 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:32:02.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 27 13:32:09.417: INFO: Successfully updated pod "pod-update-activedeadlineseconds-7e36c6d4-bc83-45dd-8ca6-afeaeee6737f"
Apr 27 13:32:09.417: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-7e36c6d4-bc83-45dd-8ca6-afeaeee6737f" in namespace "pods-2987" to be "terminated due to deadline exceeded"
Apr 27 13:32:09.455: INFO: Pod "pod-update-activedeadlineseconds-7e36c6d4-bc83-45dd-8ca6-afeaeee6737f": Phase="Running", Reason="", readiness=true. Elapsed: 38.178491ms
Apr 27 13:32:11.459: INFO: Pod "pod-update-activedeadlineseconds-7e36c6d4-bc83-45dd-8ca6-afeaeee6737f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.041819849s
Apr 27 13:32:11.459: INFO: Pod "pod-update-activedeadlineseconds-7e36c6d4-bc83-45dd-8ca6-afeaeee6737f" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:32:11.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2987" for this suite.
Apr 27 13:32:17.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:32:17.688: INFO: namespace pods-2987 deletion completed in 6.22546427s
• [SLOW TEST:14.946 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:32:17.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 27 13:32:17.804: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:32:18.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9895" for this suite.
Apr 27 13:32:24.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:32:25.014: INFO: namespace custom-resource-definition-9895 deletion completed in 6.092971909s
• [SLOW TEST:7.325 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Simple CustomResourceDefinition
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:32:25.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 27 13:32:25.265: INFO: Pod name rollover-pod: Found 0 pods out of 1
Apr 27 13:32:30.272: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 27 13:32:32.279: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Apr 27 13:32:34.283: INFO: Creating deployment
"test-rollover-deployment" Apr 27 13:32:34.295: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 27 13:32:36.304: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 27 13:32:36.308: INFO: Ensure that both replica sets have 1 created replica Apr 27 13:32:36.312: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 27 13:32:36.317: INFO: Updating deployment test-rollover-deployment Apr 27 13:32:36.317: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 27 13:32:38.471: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 27 13:32:38.477: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 27 13:32:38.482: INFO: all replica sets need to contain the pod-template-hash label Apr 27 13:32:38.482: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591154, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591154, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591156, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591154, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 27 13:32:40.490: INFO: all replica sets need to contain the pod-template-hash label Apr 27 13:32:40.490: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591154, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591154, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591156, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591154, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 27 13:32:42.490: INFO: all replica sets need to contain the pod-template-hash label Apr 27 13:32:42.490: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591154, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591154, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591161, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591154, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 27 13:32:44.490: INFO: all 
replica sets need to contain the pod-template-hash label Apr 27 13:32:44.490: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591154, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591154, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591161, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591154, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 27 13:32:46.490: INFO: all replica sets need to contain the pod-template-hash label Apr 27 13:32:46.491: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591154, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591154, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591161, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591154, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 27 13:32:48.489: INFO: all replica sets need to contain the pod-template-hash label Apr 27 13:32:48.489: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591154, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591154, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591161, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591154, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 27 13:32:50.490: INFO: all replica sets need to contain the pod-template-hash label Apr 27 13:32:50.490: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591154, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591154, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723591161, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63723591154, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 27 13:32:52.491: INFO: Apr 27 13:32:52.491: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 27 13:32:52.498: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-6027,SelfLink:/apis/apps/v1/namespaces/deployment-6027/deployments/test-rollover-deployment,UID:2c3e3550-2740-410a-9289-f951c47f09a8,ResourceVersion:7720932,Generation:2,CreationTimestamp:2020-04-27 13:32:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-27 13:32:34 +0000 UTC 2020-04-27 13:32:34 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-27 13:32:51 +0000 UTC 2020-04-27 13:32:34 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 27 13:32:52.502: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-6027,SelfLink:/apis/apps/v1/namespaces/deployment-6027/replicasets/test-rollover-deployment-854595fc44,UID:1829966a-b5b6-4279-a033-8a4d23dabbf3,ResourceVersion:7720921,Generation:2,CreationTimestamp:2020-04-27 13:32:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2c3e3550-2740-410a-9289-f951c47f09a8 0xc00264cfe7 0xc00264cfe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 27 13:32:52.502: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 27 13:32:52.502: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-6027,SelfLink:/apis/apps/v1/namespaces/deployment-6027/replicasets/test-rollover-controller,UID:4adc2971-316a-4536-baad-e9ed815bad45,ResourceVersion:7720930,Generation:2,CreationTimestamp:2020-04-27 13:32:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2c3e3550-2740-410a-9289-f951c47f09a8 0xc00264cf17 0xc00264cf18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 27 13:32:52.502: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-6027,SelfLink:/apis/apps/v1/namespaces/deployment-6027/replicasets/test-rollover-deployment-9b8b997cf,UID:7c58d4cb-a026-42d2-bce6-a69972175297,ResourceVersion:7720887,Generation:2,CreationTimestamp:2020-04-27 13:32:34 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2c3e3550-2740-410a-9289-f951c47f09a8 0xc00264d0b0 0xc00264d0b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 27 13:32:52.505: INFO: Pod "test-rollover-deployment-854595fc44-gnm22" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-gnm22,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-6027,SelfLink:/api/v1/namespaces/deployment-6027/pods/test-rollover-deployment-854595fc44-gnm22,UID:18e79c4c-38e9-45ce-9f2e-dfd7a705e1bc,ResourceVersion:7720899,Generation:0,CreationTimestamp:2020-04-27 13:32:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 1829966a-b5b6-4279-a033-8a4d23dabbf3 0xc0028a2c87 0xc0028a2c88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t7kq9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t7kq9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-t7kq9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028a2d00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028a2d20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:32:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:32:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:32:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 13:32:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.96,StartTime:2020-04-27 13:32:36 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-27 13:32:40 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://e598e484c3389ead95e3b0e618095307a33fef44c27fca5644e4a458d0dba6bf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:32:52.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6027" for this suite.
Apr 27 13:33:00.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:33:00.716: INFO: namespace deployment-6027 deletion completed in 8.207793619s
• [SLOW TEST:35.701 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:33:00.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-c914e2aa-2688-46c4-be09-116f3e153542
STEP: Creating a pod to test consume configMaps
Apr 27 13:33:00.947: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cf0a702b-5887-4b89-b4b1-9b0f323575ce" in namespace "projected-628" to be "success or failure"
Apr 27 13:33:00.985: INFO: Pod "pod-projected-configmaps-cf0a702b-5887-4b89-b4b1-9b0f323575ce": Phase="Pending", Reason="", readiness=false. Elapsed: 37.631402ms
Apr 27 13:33:02.988: INFO: Pod "pod-projected-configmaps-cf0a702b-5887-4b89-b4b1-9b0f323575ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040570625s
Apr 27 13:33:05.017: INFO: Pod "pod-projected-configmaps-cf0a702b-5887-4b89-b4b1-9b0f323575ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06946189s
Apr 27 13:33:07.021: INFO: Pod "pod-projected-configmaps-cf0a702b-5887-4b89-b4b1-9b0f323575ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.073373339s
STEP: Saw pod success
Apr 27 13:33:07.021: INFO: Pod "pod-projected-configmaps-cf0a702b-5887-4b89-b4b1-9b0f323575ce" satisfied condition "success or failure"
Apr 27 13:33:07.023: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-cf0a702b-5887-4b89-b4b1-9b0f323575ce container projected-configmap-volume-test:
STEP: delete the pod
Apr 27 13:33:07.059: INFO: Waiting for pod pod-projected-configmaps-cf0a702b-5887-4b89-b4b1-9b0f323575ce to disappear
Apr 27 13:33:07.203: INFO: Pod pod-projected-configmaps-cf0a702b-5887-4b89-b4b1-9b0f323575ce no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:33:07.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-628" for this suite.
Apr 27 13:33:15.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:33:15.307: INFO: namespace projected-628 deletion completed in 8.099196601s • [SLOW TEST:14.591 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:33:15.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 27 13:33:22.184: INFO: Successfully updated pod "annotationupdate7883c529-e08d-449a-8b4d-96a22abbee6a" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:33:24.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"projected-7906" for this suite. Apr 27 13:33:46.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:33:46.352: INFO: namespace projected-7906 deletion completed in 22.124884718s • [SLOW TEST:31.044 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:33:46.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 27 13:33:46.581: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Apr 27 13:33:46.607: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:33:46.623: INFO: Number of nodes with available pods: 0 Apr 27 13:33:46.623: INFO: Node iruya-worker is running more than one daemon pod Apr 27 13:33:47.628: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:33:47.631: INFO: Number of nodes with available pods: 0 Apr 27 13:33:47.631: INFO: Node iruya-worker is running more than one daemon pod Apr 27 13:33:48.629: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:33:48.633: INFO: Number of nodes with available pods: 0 Apr 27 13:33:48.633: INFO: Node iruya-worker is running more than one daemon pod Apr 27 13:33:49.628: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:33:49.632: INFO: Number of nodes with available pods: 0 Apr 27 13:33:49.632: INFO: Node iruya-worker is running more than one daemon pod Apr 27 13:33:50.628: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:33:50.631: INFO: Number of nodes with available pods: 0 Apr 27 13:33:50.631: INFO: Node iruya-worker is running more than one daemon pod Apr 27 13:33:51.647: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:33:51.650: INFO: Number of nodes with available pods: 0 Apr 27 13:33:51.650: INFO: Node 
iruya-worker is running more than one daemon pod Apr 27 13:33:52.628: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:33:52.632: INFO: Number of nodes with available pods: 2 Apr 27 13:33:52.632: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 27 13:33:52.778: INFO: Wrong image for pod: daemon-set-bljp2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 27 13:33:52.778: INFO: Wrong image for pod: daemon-set-jbsfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 27 13:33:52.800: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:33:53.862: INFO: Wrong image for pod: daemon-set-bljp2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 27 13:33:53.862: INFO: Wrong image for pod: daemon-set-jbsfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 27 13:33:53.866: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:33:54.804: INFO: Wrong image for pod: daemon-set-bljp2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 27 13:33:54.804: INFO: Wrong image for pod: daemon-set-jbsfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 27 13:33:54.807: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:33:55.805: INFO: Wrong image for pod: daemon-set-bljp2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 27 13:33:55.805: INFO: Wrong image for pod: daemon-set-jbsfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 27 13:33:55.809: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:33:56.898: INFO: Wrong image for pod: daemon-set-bljp2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 27 13:33:56.898: INFO: Wrong image for pod: daemon-set-jbsfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 27 13:33:56.902: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:33:57.805: INFO: Wrong image for pod: daemon-set-bljp2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 27 13:33:57.805: INFO: Wrong image for pod: daemon-set-jbsfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 27 13:33:57.805: INFO: Pod daemon-set-jbsfs is not available Apr 27 13:33:57.809: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:33:58.826: INFO: Wrong image for pod: daemon-set-bljp2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 27 13:33:58.826: INFO: Pod daemon-set-wq8hg is not available Apr 27 13:33:58.830: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:33:59.804: INFO: Wrong image for pod: daemon-set-bljp2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 27 13:33:59.804: INFO: Pod daemon-set-wq8hg is not available Apr 27 13:33:59.809: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:34:00.808: INFO: Wrong image for pod: daemon-set-bljp2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 27 13:34:00.808: INFO: Pod daemon-set-wq8hg is not available Apr 27 13:34:00.812: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:34:01.808: INFO: Wrong image for pod: daemon-set-bljp2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 27 13:34:01.808: INFO: Pod daemon-set-wq8hg is not available Apr 27 13:34:01.813: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:34:02.827: INFO: Wrong image for pod: daemon-set-bljp2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 27 13:34:02.827: INFO: Pod daemon-set-wq8hg is not available Apr 27 13:34:02.837: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:34:03.922: INFO: Wrong image for pod: daemon-set-bljp2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 27 13:34:04.043: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:34:04.804: INFO: Wrong image for pod: daemon-set-bljp2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 27 13:34:04.808: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:34:05.804: INFO: Wrong image for pod: daemon-set-bljp2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 27 13:34:05.804: INFO: Pod daemon-set-bljp2 is not available Apr 27 13:34:05.808: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:34:06.874: INFO: Pod daemon-set-dsn9h is not available Apr 27 13:34:06.877: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Apr 27 13:34:06.880: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:34:06.882: INFO: Number of nodes with available pods: 1 Apr 27 13:34:06.882: INFO: Node iruya-worker is running more than one daemon pod Apr 27 13:34:07.887: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:34:07.890: INFO: Number of nodes with available pods: 1 Apr 27 13:34:07.890: INFO: Node iruya-worker is running more than one daemon pod Apr 27 13:34:09.012: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:34:09.016: INFO: Number of nodes with available pods: 1 Apr 27 13:34:09.016: INFO: Node iruya-worker is running more than one daemon pod Apr 27 13:34:09.887: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:34:09.889: INFO: Number of nodes with available pods: 1 Apr 27 13:34:09.889: INFO: Node iruya-worker is running more than one daemon pod Apr 27 13:34:10.888: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 13:34:10.891: INFO: Number of nodes with available pods: 2 Apr 27 13:34:10.891: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7201, will wait for the garbage collector to 
delete the pods Apr 27 13:34:10.965: INFO: Deleting DaemonSet.extensions daemon-set took: 7.470793ms Apr 27 13:34:11.265: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.28774ms Apr 27 13:34:22.305: INFO: Number of nodes with available pods: 0 Apr 27 13:34:22.305: INFO: Number of running nodes: 0, number of available pods: 0 Apr 27 13:34:22.308: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7201/daemonsets","resourceVersion":"7721275"},"items":null} Apr 27 13:34:22.310: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7201/pods","resourceVersion":"7721275"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:34:22.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7201" for this suite. 
Apr 27 13:34:30.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:34:30.414: INFO: namespace daemonsets-7201 deletion completed in 8.092211974s • [SLOW TEST:44.062 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:34:30.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-4110 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 27 13:34:30.564: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 27 13:35:00.838: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.63:8080/dial?request=hostName&protocol=http&host=10.244.1.62&port=8080&tries=1'] Namespace:pod-network-test-4110 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Apr 27 13:35:00.838: INFO: >>> kubeConfig: /root/.kube/config I0427 13:35:00.866439 6 log.go:172] (0xc0026f9080) (0xc0025f4820) Create stream I0427 13:35:00.866473 6 log.go:172] (0xc0026f9080) (0xc0025f4820) Stream added, broadcasting: 1 I0427 13:35:00.868775 6 log.go:172] (0xc0026f9080) Reply frame received for 1 I0427 13:35:00.868818 6 log.go:172] (0xc0026f9080) (0xc0025f48c0) Create stream I0427 13:35:00.868832 6 log.go:172] (0xc0026f9080) (0xc0025f48c0) Stream added, broadcasting: 3 I0427 13:35:00.869915 6 log.go:172] (0xc0026f9080) Reply frame received for 3 I0427 13:35:00.869959 6 log.go:172] (0xc0026f9080) (0xc0025f4960) Create stream I0427 13:35:00.869968 6 log.go:172] (0xc0026f9080) (0xc0025f4960) Stream added, broadcasting: 5 I0427 13:35:00.870756 6 log.go:172] (0xc0026f9080) Reply frame received for 5 I0427 13:35:00.955541 6 log.go:172] (0xc0026f9080) Data frame received for 3 I0427 13:35:00.955565 6 log.go:172] (0xc0025f48c0) (3) Data frame handling I0427 13:35:00.955579 6 log.go:172] (0xc0025f48c0) (3) Data frame sent I0427 13:35:00.956158 6 log.go:172] (0xc0026f9080) Data frame received for 3 I0427 13:35:00.956195 6 log.go:172] (0xc0026f9080) Data frame received for 5 I0427 13:35:00.956220 6 log.go:172] (0xc0025f4960) (5) Data frame handling I0427 13:35:00.956250 6 log.go:172] (0xc0025f48c0) (3) Data frame handling I0427 13:35:00.958127 6 log.go:172] (0xc0026f9080) Data frame received for 1 I0427 13:35:00.958146 6 log.go:172] (0xc0025f4820) (1) Data frame handling I0427 13:35:00.958153 6 log.go:172] (0xc0025f4820) (1) Data frame sent I0427 13:35:00.958282 6 log.go:172] (0xc0026f9080) (0xc0025f4820) Stream removed, broadcasting: 1 I0427 13:35:00.958393 6 log.go:172] (0xc0026f9080) (0xc0025f4820) Stream removed, broadcasting: 1 I0427 13:35:00.958412 6 log.go:172] (0xc0026f9080) (0xc0025f48c0) Stream removed, broadcasting: 3 I0427 13:35:00.958426 6 log.go:172] (0xc0026f9080) (0xc0025f4960) Stream removed, 
broadcasting: 5 Apr 27 13:35:00.958: INFO: Waiting for endpoints: map[] I0427 13:35:00.958542 6 log.go:172] (0xc0026f9080) Go away received Apr 27 13:35:00.961: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.63:8080/dial?request=hostName&protocol=http&host=10.244.2.100&port=8080&tries=1'] Namespace:pod-network-test-4110 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 27 13:35:00.961: INFO: >>> kubeConfig: /root/.kube/config I0427 13:35:00.987953 6 log.go:172] (0xc0025c28f0) (0xc00101d680) Create stream I0427 13:35:00.987985 6 log.go:172] (0xc0025c28f0) (0xc00101d680) Stream added, broadcasting: 1 I0427 13:35:00.989988 6 log.go:172] (0xc0025c28f0) Reply frame received for 1 I0427 13:35:00.990043 6 log.go:172] (0xc0025c28f0) (0xc0025f4a00) Create stream I0427 13:35:00.990058 6 log.go:172] (0xc0025c28f0) (0xc0025f4a00) Stream added, broadcasting: 3 I0427 13:35:00.991068 6 log.go:172] (0xc0025c28f0) Reply frame received for 3 I0427 13:35:00.991108 6 log.go:172] (0xc0025c28f0) (0xc000110140) Create stream I0427 13:35:00.991123 6 log.go:172] (0xc0025c28f0) (0xc000110140) Stream added, broadcasting: 5 I0427 13:35:00.991994 6 log.go:172] (0xc0025c28f0) Reply frame received for 5 I0427 13:35:01.068413 6 log.go:172] (0xc0025c28f0) Data frame received for 3 I0427 13:35:01.068455 6 log.go:172] (0xc0025f4a00) (3) Data frame handling I0427 13:35:01.068482 6 log.go:172] (0xc0025f4a00) (3) Data frame sent I0427 13:35:01.069433 6 log.go:172] (0xc0025c28f0) Data frame received for 5 I0427 13:35:01.069471 6 log.go:172] (0xc000110140) (5) Data frame handling I0427 13:35:01.069658 6 log.go:172] (0xc0025c28f0) Data frame received for 3 I0427 13:35:01.069684 6 log.go:172] (0xc0025f4a00) (3) Data frame handling I0427 13:35:01.070824 6 log.go:172] (0xc0025c28f0) Data frame received for 1 I0427 13:35:01.070858 6 log.go:172] (0xc00101d680) (1) Data frame handling I0427 13:35:01.070892 
6 log.go:172] (0xc00101d680) (1) Data frame sent I0427 13:35:01.070927 6 log.go:172] (0xc0025c28f0) (0xc00101d680) Stream removed, broadcasting: 1 I0427 13:35:01.070957 6 log.go:172] (0xc0025c28f0) Go away received I0427 13:35:01.071055 6 log.go:172] (0xc0025c28f0) (0xc00101d680) Stream removed, broadcasting: 1 I0427 13:35:01.071077 6 log.go:172] (0xc0025c28f0) (0xc0025f4a00) Stream removed, broadcasting: 3 I0427 13:35:01.071088 6 log.go:172] (0xc0025c28f0) (0xc000110140) Stream removed, broadcasting: 5 Apr 27 13:35:01.071: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:35:01.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4110" for this suite. Apr 27 13:35:27.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:35:27.183: INFO: namespace pod-network-test-4110 deletion completed in 26.108237525s • [SLOW TEST:56.768 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:35:27.183: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Apr 27 13:35:27.404: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix208984888/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:35:27.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9521" for this suite. Apr 27 13:35:33.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:35:33.569: INFO: namespace kubectl-9521 deletion completed in 6.096456212s • [SLOW TEST:6.386 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: 
Creating a kubernetes client Apr 27 13:35:33.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-bb21d0ed-6010-4d2f-b82a-894eb3aa50fb STEP: Creating a pod to test consume configMaps Apr 27 13:35:33.745: INFO: Waiting up to 5m0s for pod "pod-configmaps-0e5de7da-5543-481e-ac64-3c36e991d69f" in namespace "configmap-4846" to be "success or failure" Apr 27 13:35:33.755: INFO: Pod "pod-configmaps-0e5de7da-5543-481e-ac64-3c36e991d69f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.249319ms Apr 27 13:35:35.758: INFO: Pod "pod-configmaps-0e5de7da-5543-481e-ac64-3c36e991d69f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012955993s Apr 27 13:35:37.762: INFO: Pod "pod-configmaps-0e5de7da-5543-481e-ac64-3c36e991d69f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016303678s Apr 27 13:35:39.766: INFO: Pod "pod-configmaps-0e5de7da-5543-481e-ac64-3c36e991d69f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.02024787s STEP: Saw pod success Apr 27 13:35:39.766: INFO: Pod "pod-configmaps-0e5de7da-5543-481e-ac64-3c36e991d69f" satisfied condition "success or failure" Apr 27 13:35:39.768: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-0e5de7da-5543-481e-ac64-3c36e991d69f container configmap-volume-test: STEP: delete the pod Apr 27 13:35:39.825: INFO: Waiting for pod pod-configmaps-0e5de7da-5543-481e-ac64-3c36e991d69f to disappear Apr 27 13:35:39.840: INFO: Pod pod-configmaps-0e5de7da-5543-481e-ac64-3c36e991d69f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:35:39.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4846" for this suite. Apr 27 13:35:45.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:35:46.052: INFO: namespace configmap-4846 deletion completed in 6.209134469s • [SLOW TEST:12.483 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:35:46.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in 
namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-9fcffb20-af2b-4cc1-ab24-d7195697a755 STEP: Creating a pod to test consume configMaps Apr 27 13:35:46.238: INFO: Waiting up to 5m0s for pod "pod-configmaps-959fa3bb-d18e-4206-8ec2-eef1be3ca60c" in namespace "configmap-7318" to be "success or failure" Apr 27 13:35:46.261: INFO: Pod "pod-configmaps-959fa3bb-d18e-4206-8ec2-eef1be3ca60c": Phase="Pending", Reason="", readiness=false. Elapsed: 23.134479ms Apr 27 13:35:48.265: INFO: Pod "pod-configmaps-959fa3bb-d18e-4206-8ec2-eef1be3ca60c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027404099s Apr 27 13:35:50.268: INFO: Pod "pod-configmaps-959fa3bb-d18e-4206-8ec2-eef1be3ca60c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030692946s Apr 27 13:35:52.272: INFO: Pod "pod-configmaps-959fa3bb-d18e-4206-8ec2-eef1be3ca60c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034509471s STEP: Saw pod success Apr 27 13:35:52.272: INFO: Pod "pod-configmaps-959fa3bb-d18e-4206-8ec2-eef1be3ca60c" satisfied condition "success or failure" Apr 27 13:35:52.275: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-959fa3bb-d18e-4206-8ec2-eef1be3ca60c container configmap-volume-test: STEP: delete the pod Apr 27 13:35:52.339: INFO: Waiting for pod pod-configmaps-959fa3bb-d18e-4206-8ec2-eef1be3ca60c to disappear Apr 27 13:35:52.349: INFO: Pod pod-configmaps-959fa3bb-d18e-4206-8ec2-eef1be3ca60c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:35:52.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7318" for this suite. 
Apr 27 13:35:58.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:35:58.538: INFO: namespace configmap-7318 deletion completed in 6.186151229s
• [SLOW TEST:12.486 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-auth] ServiceAccounts
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:35:58.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Apr 27 13:35:59.252: INFO: created pod pod-service-account-defaultsa
Apr 27 13:35:59.252: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Apr 27 13:35:59.284: INFO: created pod pod-service-account-mountsa
Apr 27 13:35:59.284: INFO: pod pod-service-account-mountsa service account token volume mount: true
Apr 27 13:35:59.296: INFO: created pod pod-service-account-nomountsa
Apr 27 13:35:59.296: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Apr 27 13:35:59.409: INFO: created pod pod-service-account-defaultsa-mountspec
Apr 27 13:35:59.410: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Apr 27 13:35:59.445: INFO: created pod pod-service-account-mountsa-mountspec
Apr 27 13:35:59.445: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Apr 27 13:35:59.478: INFO: created pod pod-service-account-nomountsa-mountspec
Apr 27 13:35:59.478: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Apr 27 13:35:59.504: INFO: created pod pod-service-account-defaultsa-nomountspec
Apr 27 13:35:59.504: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Apr 27 13:35:59.570: INFO: created pod pod-service-account-mountsa-nomountspec
Apr 27 13:35:59.570: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Apr 27 13:35:59.616: INFO: created pod pod-service-account-nomountsa-nomountspec
Apr 27 13:35:59.616: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:35:59.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5180" for this suite.
Apr 27 13:36:29.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:36:29.958: INFO: namespace svcaccounts-5180 deletion completed in 30.306755728s
• [SLOW TEST:31.420 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:36:29.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Apr 27 13:36:30.117: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 27 13:36:30.124: INFO: Waiting for terminating namespaces to be deleted...
Apr 27 13:36:30.127: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Apr 27 13:36:30.133: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 27 13:36:30.133: INFO: Container kube-proxy ready: true, restart count 0
Apr 27 13:36:30.133: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 27 13:36:30.133: INFO: Container kindnet-cni ready: true, restart count 0
Apr 27 13:36:30.133: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Apr 27 13:36:30.137: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
Apr 27 13:36:30.137: INFO: Container kindnet-cni ready: true, restart count 0
Apr 27 13:36:30.137: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
Apr 27 13:36:30.137: INFO: Container kube-proxy ready: true, restart count 0
Apr 27 13:36:30.137: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
Apr 27 13:36:30.137: INFO: Container coredns ready: true, restart count 0
Apr 27 13:36:30.137: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
Apr 27 13:36:30.137: INFO: Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.1609b0aa8b5ac7ec], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:36:31.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8238" for this suite.
Apr 27 13:36:37.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:36:37.299: INFO: namespace sched-pred-8238 deletion completed in 6.138882003s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:7.340 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:36:37.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Apr 27 13:36:43.509: INFO: Pod pod-hostip-be4f95c9-5301-40df-8c4f-0e944acc2267 has hostIP: 172.17.0.6
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:36:43.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-34" for this suite.
Apr 27 13:37:05.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:37:05.621: INFO: namespace pods-34 deletion completed in 22.1079789s
• [SLOW TEST:28.321 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:37:05.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Apr 27 13:37:05.832: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9874,SelfLink:/api/v1/namespaces/watch-9874/configmaps/e2e-watch-test-label-changed,UID:b9a836f9-580c-4b1c-b7f2-1e057cb361dd,ResourceVersion:7721888,Generation:0,CreationTimestamp:2020-04-27 13:37:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 27 13:37:05.832: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9874,SelfLink:/api/v1/namespaces/watch-9874/configmaps/e2e-watch-test-label-changed,UID:b9a836f9-580c-4b1c-b7f2-1e057cb361dd,ResourceVersion:7721889,Generation:0,CreationTimestamp:2020-04-27 13:37:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Apr 27 13:37:05.832: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9874,SelfLink:/api/v1/namespaces/watch-9874/configmaps/e2e-watch-test-label-changed,UID:b9a836f9-580c-4b1c-b7f2-1e057cb361dd,ResourceVersion:7721891,Generation:0,CreationTimestamp:2020-04-27 13:37:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Apr 27 13:37:16.053: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9874,SelfLink:/api/v1/namespaces/watch-9874/configmaps/e2e-watch-test-label-changed,UID:b9a836f9-580c-4b1c-b7f2-1e057cb361dd,ResourceVersion:7721913,Generation:0,CreationTimestamp:2020-04-27 13:37:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 27 13:37:16.053: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9874,SelfLink:/api/v1/namespaces/watch-9874/configmaps/e2e-watch-test-label-changed,UID:b9a836f9-580c-4b1c-b7f2-1e057cb361dd,ResourceVersion:7721914,Generation:0,CreationTimestamp:2020-04-27 13:37:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Apr 27 13:37:16.053: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9874,SelfLink:/api/v1/namespaces/watch-9874/configmaps/e2e-watch-test-label-changed,UID:b9a836f9-580c-4b1c-b7f2-1e057cb361dd,ResourceVersion:7721915,Generation:0,CreationTimestamp:2020-04-27 13:37:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:37:16.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9874" for this suite.
Apr 27 13:37:22.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:37:22.272: INFO: namespace watch-9874 deletion completed in 6.207223928s
• [SLOW TEST:16.650 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Probing container
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:37:22.272: INFO: >>>
kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:38:22.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3691" for this suite.
Apr 27 13:38:44.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:38:44.472: INFO: namespace container-probe-3691 deletion completed in 22.105734944s
• [SLOW TEST:82.200 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Downward API volume
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:38:44.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 27 13:38:44.610: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8607df6c-0d15-489d-bea7-12580bfcd62c" in namespace "downward-api-369" to be "success or failure"
Apr 27 13:38:44.647: INFO: Pod "downwardapi-volume-8607df6c-0d15-489d-bea7-12580bfcd62c": Phase="Pending", Reason="", readiness=false. Elapsed: 36.962531ms
Apr 27 13:38:46.652: INFO: Pod "downwardapi-volume-8607df6c-0d15-489d-bea7-12580bfcd62c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041362358s
Apr 27 13:38:48.656: INFO: Pod "downwardapi-volume-8607df6c-0d15-489d-bea7-12580bfcd62c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045345246s
Apr 27 13:38:50.660: INFO: Pod "downwardapi-volume-8607df6c-0d15-489d-bea7-12580bfcd62c": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 6.049531645s
STEP: Saw pod success
Apr 27 13:38:50.660: INFO: Pod "downwardapi-volume-8607df6c-0d15-489d-bea7-12580bfcd62c" satisfied condition "success or failure"
Apr 27 13:38:50.663: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8607df6c-0d15-489d-bea7-12580bfcd62c container client-container:
STEP: delete the pod
Apr 27 13:38:50.718: INFO: Waiting for pod downwardapi-volume-8607df6c-0d15-489d-bea7-12580bfcd62c to disappear
Apr 27 13:38:50.723: INFO: Pod downwardapi-volume-8607df6c-0d15-489d-bea7-12580bfcd62c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:38:50.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-369" for this suite.
Apr 27 13:38:56.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:38:56.880: INFO: namespace downward-api-369 deletion completed in 6.153944973s
• [SLOW TEST:12.408 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:38:56.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-795c865f-d64b-45b4-b40f-81b024005346
STEP: Creating a pod to test consume configMaps
Apr 27 13:38:57.042: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2717c5cc-4499-41de-95fe-107846a38d4a" in namespace "projected-1664" to be "success or failure"
Apr 27 13:38:57.065: INFO: Pod "pod-projected-configmaps-2717c5cc-4499-41de-95fe-107846a38d4a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.788806ms
Apr 27 13:38:59.069: INFO: Pod "pod-projected-configmaps-2717c5cc-4499-41de-95fe-107846a38d4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027129181s
Apr 27 13:39:01.074: INFO: Pod "pod-projected-configmaps-2717c5cc-4499-41de-95fe-107846a38d4a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031867304s
Apr 27 13:39:03.079: INFO: Pod "pod-projected-configmaps-2717c5cc-4499-41de-95fe-107846a38d4a": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 6.036287523s
STEP: Saw pod success
Apr 27 13:39:03.079: INFO: Pod "pod-projected-configmaps-2717c5cc-4499-41de-95fe-107846a38d4a" satisfied condition "success or failure"
Apr 27 13:39:03.082: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-2717c5cc-4499-41de-95fe-107846a38d4a container projected-configmap-volume-test:
STEP: delete the pod
Apr 27 13:39:03.193: INFO: Waiting for pod pod-projected-configmaps-2717c5cc-4499-41de-95fe-107846a38d4a to disappear
Apr 27 13:39:03.243: INFO: Pod pod-projected-configmaps-2717c5cc-4499-41de-95fe-107846a38d4a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:39:03.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1664" for this suite.
Apr 27 13:39:09.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:39:09.358: INFO: namespace projected-1664 deletion completed in 6.111501554s
• [SLOW TEST:12.477 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Pods
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:39:09.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 27 13:39:16.299: INFO: Successfully updated pod "pod-update-720e5064-34e6-490c-8ad1-58a1597ac6a1"
STEP: verifying the updated pod is in kubernetes
Apr 27 13:39:16.391: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:39:16.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8759" for this suite.
Apr 27 13:39:38.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:39:38.492: INFO: namespace pods-8759 deletion completed in 22.097323611s
• [SLOW TEST:29.133 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:39:38.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting
for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-6cb8963e-ac43-49f0-829c-ce519588f284
Apr 27 13:39:38.664: INFO: Pod name my-hostname-basic-6cb8963e-ac43-49f0-829c-ce519588f284: Found 0 pods out of 1
Apr 27 13:39:43.668: INFO: Pod name my-hostname-basic-6cb8963e-ac43-49f0-829c-ce519588f284: Found 1 pods out of 1
Apr 27 13:39:43.668: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-6cb8963e-ac43-49f0-829c-ce519588f284" are running
Apr 27 13:39:45.675: INFO: Pod "my-hostname-basic-6cb8963e-ac43-49f0-829c-ce519588f284-fhtlm" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-27 13:39:38 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-27 13:39:38 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-6cb8963e-ac43-49f0-829c-ce519588f284]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-27 13:39:38 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-6cb8963e-ac43-49f0-829c-ce519588f284]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-27 13:39:38 +0000 UTC Reason: Message:}])
Apr 27 13:39:45.675: INFO: Trying to dial the pod
Apr 27 13:39:50.688: INFO: Controller my-hostname-basic-6cb8963e-ac43-49f0-829c-ce519588f284: Got expected result from replica 1 [my-hostname-basic-6cb8963e-ac43-49f0-829c-ce519588f284-fhtlm]: "my-hostname-basic-6cb8963e-ac43-49f0-829c-ce519588f284-fhtlm", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:39:50.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5027" for this suite.
Apr 27 13:39:56.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:39:56.817: INFO: namespace replication-controller-5027 deletion completed in 6.125639911s
• [SLOW TEST:18.325 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version
  should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:39:56.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 27 13:39:56.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Apr 27 13:39:57.080: INFO: stderr: ""
Apr 27 13:39:57.080: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-04-05T10:39:42Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:39:57.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1056" for this suite.
Apr 27 13:40:03.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:40:03.334: INFO: namespace kubectl-1056 deletion completed in 6.248562725s
• [SLOW TEST:6.517 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:40:03.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Apr 27 13:40:03.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9444'
Apr 27 13:40:06.731: INFO: stderr: ""
Apr 27 13:40:06.731: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 27 13:40:06.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9444'
Apr 27 13:40:06.832: INFO: stderr: ""
Apr 27 13:40:06.832: INFO: stdout: "update-demo-nautilus-dwgkh update-demo-nautilus-jzn5t "
Apr 27 13:40:06.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dwgkh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9444'
Apr 27 13:40:06.975: INFO: stderr: ""
Apr 27 13:40:06.975: INFO: stdout: ""
Apr 27 13:40:06.975: INFO: update-demo-nautilus-dwgkh is created but not running
Apr 27 13:40:11.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9444'
Apr 27 13:40:12.081: INFO: stderr: ""
Apr 27 13:40:12.081: INFO: stdout: "update-demo-nautilus-dwgkh update-demo-nautilus-jzn5t "
Apr 27 13:40:12.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dwgkh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9444'
Apr 27 13:40:12.197: INFO: stderr: ""
Apr 27 13:40:12.197: INFO: stdout: "true"
Apr 27 13:40:12.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dwgkh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9444'
Apr 27 13:40:12.284: INFO: stderr: ""
Apr 27 13:40:12.284: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 27 13:40:12.284: INFO: validating pod update-demo-nautilus-dwgkh
Apr 27 13:40:12.287: INFO: got data: { "image": "nautilus.jpg" }
Apr 27 13:40:12.287: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 27 13:40:12.287: INFO: update-demo-nautilus-dwgkh is verified up and running
Apr 27 13:40:12.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jzn5t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9444'
Apr 27 13:40:12.380: INFO: stderr: ""
Apr 27 13:40:12.380: INFO: stdout: "true"
Apr 27 13:40:12.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jzn5t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9444'
Apr 27 13:40:12.474: INFO: stderr: ""
Apr 27 13:40:12.474: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 27 13:40:12.474: INFO: validating pod update-demo-nautilus-jzn5t
Apr 27 13:40:12.478: INFO: got data: { "image": "nautilus.jpg" }
Apr 27 13:40:12.478: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 27 13:40:12.478: INFO: update-demo-nautilus-jzn5t is verified up and running
STEP: rolling-update to new replication controller
Apr 27 13:40:12.479: INFO: scanned /root for discovery docs:
Apr 27 13:40:12.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-9444'
Apr 27 13:40:36.555: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Apr 27 13:40:36.555: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 27 13:40:36.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9444'
Apr 27 13:40:36.645: INFO: stderr: ""
Apr 27 13:40:36.645: INFO: stdout: "update-demo-kitten-kpwns update-demo-kitten-rkncv "
Apr 27 13:40:36.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kpwns -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9444'
Apr 27 13:40:36.785: INFO: stderr: ""
Apr 27 13:40:36.785: INFO: stdout: "true"
Apr 27 13:40:36.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kpwns -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9444'
Apr 27 13:40:36.888: INFO: stderr: ""
Apr 27 13:40:36.888: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Apr 27 13:40:36.888: INFO: validating pod update-demo-kitten-kpwns
Apr 27 13:40:36.891: INFO: got data: { "image": "kitten.jpg" }
Apr 27 13:40:36.891: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Apr 27 13:40:36.891: INFO: update-demo-kitten-kpwns is verified up and running
Apr 27 13:40:36.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rkncv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9444'
Apr 27 13:40:37.005: INFO: stderr: ""
Apr 27 13:40:37.006: INFO: stdout: "true"
Apr 27 13:40:37.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rkncv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9444'
Apr 27 13:40:37.124: INFO: stderr: ""
Apr 27 13:40:37.124: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Apr 27 13:40:37.124: INFO: validating pod update-demo-kitten-rkncv
Apr 27 13:40:37.129: INFO: got data: { "image": "kitten.jpg" }
Apr 27 13:40:37.129: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Apr 27 13:40:37.129: INFO: update-demo-kitten-rkncv is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:40:37.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9444" for this suite.
Apr 27 13:41:01.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:41:01.233: INFO: namespace kubectl-9444 deletion completed in 24.10113916s

• [SLOW TEST:57.899 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:41:01.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 27 13:41:01.376: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d19302ed-7d02-4549-b7a7-df61d39e3f90" in namespace "downward-api-3721" to be "success or failure"
Apr 27 13:41:01.421: INFO: Pod "downwardapi-volume-d19302ed-7d02-4549-b7a7-df61d39e3f90": Phase="Pending", Reason="", readiness=false. Elapsed: 45.09765ms
Apr 27 13:41:03.425: INFO: Pod "downwardapi-volume-d19302ed-7d02-4549-b7a7-df61d39e3f90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049648094s
Apr 27 13:41:05.568: INFO: Pod "downwardapi-volume-d19302ed-7d02-4549-b7a7-df61d39e3f90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.19247946s
STEP: Saw pod success
Apr 27 13:41:05.568: INFO: Pod "downwardapi-volume-d19302ed-7d02-4549-b7a7-df61d39e3f90" satisfied condition "success or failure"
Apr 27 13:41:05.571: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-d19302ed-7d02-4549-b7a7-df61d39e3f90 container client-container: 
STEP: delete the pod
Apr 27 13:41:05.806: INFO: Waiting for pod downwardapi-volume-d19302ed-7d02-4549-b7a7-df61d39e3f90 to disappear
Apr 27 13:41:05.840: INFO: Pod downwardapi-volume-d19302ed-7d02-4549-b7a7-df61d39e3f90 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:41:05.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3721" for this suite.
Apr 27 13:41:11.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:41:12.011: INFO: namespace downward-api-3721 deletion completed in 6.168353106s

• [SLOW TEST:10.778 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:41:12.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-6229
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6229 to expose endpoints map[]
Apr 27 13:41:12.269: INFO: Get endpoints failed (71.727022ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Apr 27 13:41:13.272: INFO: successfully validated that service endpoint-test2 in namespace services-6229 exposes endpoints map[] (1.075335564s elapsed)
STEP: Creating pod pod1 in namespace services-6229
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6229 to expose endpoints map[pod1:[80]]
Apr 27 13:41:17.772: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.493348978s elapsed, will retry)
Apr 27 13:41:18.778: INFO: successfully validated that service endpoint-test2 in namespace services-6229 exposes endpoints map[pod1:[80]] (5.499401928s elapsed)
STEP: Creating pod pod2 in namespace services-6229
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6229 to expose endpoints map[pod1:[80] pod2:[80]]
Apr 27 13:41:23.053: INFO: Unexpected endpoints: found map[d752945e-0ee1-4d85-9ed5-6203ca9210c3:[80]], expected map[pod1:[80] pod2:[80]] (4.270966261s elapsed, will retry)
Apr 27 13:41:24.134: INFO: successfully validated that service endpoint-test2 in namespace services-6229 exposes endpoints map[pod1:[80] pod2:[80]] (5.352397107s elapsed)
STEP: Deleting pod pod1 in namespace services-6229
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6229 to expose endpoints map[pod2:[80]]
Apr 27 13:41:25.191: INFO: successfully validated that service endpoint-test2 in namespace services-6229 exposes endpoints map[pod2:[80]] (1.050802398s elapsed)
STEP: Deleting pod pod2 in namespace services-6229
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6229 to expose endpoints map[]
Apr 27 13:41:26.233: INFO: successfully validated that service endpoint-test2 in namespace services-6229 exposes endpoints map[] (1.037564938s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:41:26.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6229" for this suite.
Apr 27 13:41:48.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:41:48.512: INFO: namespace services-6229 deletion completed in 22.148901981s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:36.500 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:41:48.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-6dbfb668-2013-4b74-a0d5-a103844fbf10
STEP: Creating configMap with name cm-test-opt-upd-5c0f0242-db3f-4ccf-b3c8-3db5bd64264f
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-6dbfb668-2013-4b74-a0d5-a103844fbf10
STEP: Updating configmap cm-test-opt-upd-5c0f0242-db3f-4ccf-b3c8-3db5bd64264f
STEP: Creating configMap with name cm-test-opt-create-21647590-82e5-4c49-87d7-410beed43c71
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:42:00.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8986" for this suite.
Apr 27 13:42:25.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:42:25.113: INFO: namespace projected-8986 deletion completed in 24.114295293s

• [SLOW TEST:36.601 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:42:25.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 27 13:42:25.227: INFO: Waiting up to 5m0s for pod "downwardapi-volume-69862957-7329-4194-90a1-a707b40b71f5" in namespace "downward-api-2622" to be "success or failure"
Apr 27 13:42:25.248: INFO: Pod "downwardapi-volume-69862957-7329-4194-90a1-a707b40b71f5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.435316ms
Apr 27 13:42:27.342: INFO: Pod "downwardapi-volume-69862957-7329-4194-90a1-a707b40b71f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114762166s
Apr 27 13:42:29.347: INFO: Pod "downwardapi-volume-69862957-7329-4194-90a1-a707b40b71f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119401983s
Apr 27 13:42:31.351: INFO: Pod "downwardapi-volume-69862957-7329-4194-90a1-a707b40b71f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.124033236s
STEP: Saw pod success
Apr 27 13:42:31.351: INFO: Pod "downwardapi-volume-69862957-7329-4194-90a1-a707b40b71f5" satisfied condition "success or failure"
Apr 27 13:42:31.356: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-69862957-7329-4194-90a1-a707b40b71f5 container client-container: 
STEP: delete the pod
Apr 27 13:42:31.485: INFO: Waiting for pod downwardapi-volume-69862957-7329-4194-90a1-a707b40b71f5 to disappear
Apr 27 13:42:31.488: INFO: Pod downwardapi-volume-69862957-7329-4194-90a1-a707b40b71f5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:42:31.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2622" for this suite.
Apr 27 13:42:37.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:42:37.617: INFO: namespace downward-api-2622 deletion completed in 6.124743569s

• [SLOW TEST:12.504 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:42:37.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 27 13:42:37.769: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f473536c-57b6-411f-b355-61eb4c628408" in namespace "downward-api-6765" to be "success or failure"
Apr 27 13:42:37.794: INFO: Pod "downwardapi-volume-f473536c-57b6-411f-b355-61eb4c628408": Phase="Pending", Reason="", readiness=false. Elapsed: 24.811353ms
Apr 27 13:42:39.798: INFO: Pod "downwardapi-volume-f473536c-57b6-411f-b355-61eb4c628408": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028860091s
Apr 27 13:42:41.887: INFO: Pod "downwardapi-volume-f473536c-57b6-411f-b355-61eb4c628408": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118649193s
Apr 27 13:42:43.891: INFO: Pod "downwardapi-volume-f473536c-57b6-411f-b355-61eb4c628408": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.122536653s
STEP: Saw pod success
Apr 27 13:42:43.891: INFO: Pod "downwardapi-volume-f473536c-57b6-411f-b355-61eb4c628408" satisfied condition "success or failure"
Apr 27 13:42:43.894: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f473536c-57b6-411f-b355-61eb4c628408 container client-container: 
STEP: delete the pod
Apr 27 13:42:43.945: INFO: Waiting for pod downwardapi-volume-f473536c-57b6-411f-b355-61eb4c628408 to disappear
Apr 27 13:42:43.988: INFO: Pod downwardapi-volume-f473536c-57b6-411f-b355-61eb4c628408 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:42:43.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6765" for this suite.
Apr 27 13:42:50.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:42:50.080: INFO: namespace downward-api-6765 deletion completed in 6.089052293s

• [SLOW TEST:12.463 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:42:50.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0427 13:42:51.849314       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 27 13:42:51.849: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:42:51.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3226" for this suite.
Apr 27 13:42:58.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:42:58.195: INFO: namespace gc-3226 deletion completed in 6.241476528s

• [SLOW TEST:8.115 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:42:58.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Apr 27 13:42:58.337: INFO: Waiting up to 5m0s for pod "client-containers-19150bb6-5d2d-4a3e-a4f1-b28dc68da39b" in namespace "containers-9617" to be "success or failure"
Apr 27 13:42:58.344: INFO: Pod "client-containers-19150bb6-5d2d-4a3e-a4f1-b28dc68da39b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.790587ms
Apr 27 13:43:00.348: INFO: Pod "client-containers-19150bb6-5d2d-4a3e-a4f1-b28dc68da39b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010859169s
Apr 27 13:43:02.352: INFO: Pod "client-containers-19150bb6-5d2d-4a3e-a4f1-b28dc68da39b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014598436s
Apr 27 13:43:04.355: INFO: Pod "client-containers-19150bb6-5d2d-4a3e-a4f1-b28dc68da39b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017886703s
STEP: Saw pod success
Apr 27 13:43:04.355: INFO: Pod "client-containers-19150bb6-5d2d-4a3e-a4f1-b28dc68da39b" satisfied condition "success or failure"
Apr 27 13:43:04.356: INFO: Trying to get logs from node iruya-worker pod client-containers-19150bb6-5d2d-4a3e-a4f1-b28dc68da39b container test-container: 
STEP: delete the pod
Apr 27 13:43:04.400: INFO: Waiting for pod client-containers-19150bb6-5d2d-4a3e-a4f1-b28dc68da39b to disappear
Apr 27 13:43:04.423: INFO: Pod client-containers-19150bb6-5d2d-4a3e-a4f1-b28dc68da39b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:43:04.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9617" for this suite.
Apr 27 13:43:10.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:43:10.524: INFO: namespace containers-9617 deletion completed in 6.098227003s

• [SLOW TEST:12.329 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:43:10.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Apr 27 13:43:11.029: INFO: Waiting up to 5m0s for pod "client-containers-25badd2c-d78a-4181-9932-9a685fb0f62c" in namespace "containers-8984" to be "success or failure"
Apr 27 13:43:11.103: INFO: Pod "client-containers-25badd2c-d78a-4181-9932-9a685fb0f62c": Phase="Pending", Reason="", readiness=false. Elapsed: 74.019113ms
Apr 27 13:43:13.145: INFO: Pod "client-containers-25badd2c-d78a-4181-9932-9a685fb0f62c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115536442s
Apr 27 13:43:15.276: INFO: Pod "client-containers-25badd2c-d78a-4181-9932-9a685fb0f62c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.247267963s
Apr 27 13:43:17.280: INFO: Pod "client-containers-25badd2c-d78a-4181-9932-9a685fb0f62c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.2506809s
Apr 27 13:43:19.283: INFO: Pod "client-containers-25badd2c-d78a-4181-9932-9a685fb0f62c": Phase="Running", Reason="", readiness=true. Elapsed: 8.253783808s
Apr 27 13:43:21.286: INFO: Pod "client-containers-25badd2c-d78a-4181-9932-9a685fb0f62c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.257092858s
STEP: Saw pod success
Apr 27 13:43:21.286: INFO: Pod "client-containers-25badd2c-d78a-4181-9932-9a685fb0f62c" satisfied condition "success or failure"
Apr 27 13:43:21.288: INFO: Trying to get logs from node iruya-worker2 pod client-containers-25badd2c-d78a-4181-9932-9a685fb0f62c container test-container: 
STEP: delete the pod
Apr 27 13:43:21.660: INFO: Waiting for pod client-containers-25badd2c-d78a-4181-9932-9a685fb0f62c to disappear
Apr 27 13:43:21.755: INFO: Pod client-containers-25badd2c-d78a-4181-9932-9a685fb0f62c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:43:21.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8984" for this suite.
Apr 27 13:43:27.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:43:28.012: INFO: namespace containers-8984 deletion completed in 6.210280554s
• [SLOW TEST:17.487 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:43:28.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 27 13:43:28.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4321'
Apr 27 13:43:28.508: INFO: stderr: ""
Apr 27 13:43:28.508: INFO: stdout: "replicationcontroller/redis-master created\n"
Apr 27 13:43:28.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4321'
Apr 27 13:43:28.869: INFO: stderr: ""
Apr 27 13:43:28.869: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Apr 27 13:43:29.872: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 13:43:29.872: INFO: Found 0 / 1
Apr 27 13:43:30.873: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 13:43:30.873: INFO: Found 0 / 1
Apr 27 13:43:32.708: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 13:43:32.708: INFO: Found 0 / 1
Apr 27 13:43:33.322: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 13:43:33.322: INFO: Found 0 / 1
Apr 27 13:43:34.152: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 13:43:34.152: INFO: Found 0 / 1
Apr 27 13:43:34.899: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 13:43:34.899: INFO: Found 0 / 1
Apr 27 13:43:36.118: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 13:43:36.118: INFO: Found 0 / 1
Apr 27 13:43:36.971: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 13:43:36.971: INFO: Found 0 / 1
Apr 27 13:43:37.875: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 13:43:37.875: INFO: Found 0 / 1
Apr 27 13:43:38.873: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 13:43:38.873: INFO: Found 1 / 1
Apr 27 13:43:38.873: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 27 13:43:38.876: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 13:43:38.876: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 27 13:43:38.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-l6xj7 --namespace=kubectl-4321'
Apr 27 13:43:38.994: INFO: stderr: ""
Apr 27 13:43:38.994: INFO: stdout: "Name: redis-master-l6xj7\nNamespace: kubectl-4321\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Mon, 27 Apr 2020 13:43:28 +0000\nLabels: app=redis\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.244.2.115\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://802e2fd44b1d2921a609ebc7fcfd05c96b244e786d42ef66823fba50bef6a488\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 27 Apr 2020 13:43:38 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-96hvf (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-96hvf:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-96hvf\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 10s default-scheduler Successfully assigned kubectl-4321/redis-master-l6xj7 to iruya-worker\n Normal Pulled 9s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, iruya-worker Created container redis-master\n Normal Started 0s kubelet, iruya-worker Started container redis-master\n"
Apr 27 13:43:38.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-4321'
Apr 27 13:43:39.102: INFO: stderr: ""
Apr 27 13:43:39.102: INFO: stdout: "Name: redis-master\nNamespace: kubectl-4321\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 11s replication-controller Created pod: redis-master-l6xj7\n"
Apr 27 13:43:39.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-4321'
Apr 27 13:43:39.199: INFO: stderr: ""
Apr 27 13:43:39.199: INFO: stdout: "Name: redis-master\nNamespace: kubectl-4321\nLabels: app=redis\n role=master\nAnnotations: <none>\nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.102.209.59\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.115:6379\nSession Affinity: None\nEvents: <none>\n"
Apr 27 13:43:39.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
Apr 27 13:43:39.315: INFO: stderr: ""
Apr 27 13:43:39.315: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 27 Apr 2020 13:43:35 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 27 Apr 2020 13:43:35 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 27 Apr 2020 13:43:35 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 27 Apr 2020 13:43:35 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 42d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 42d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 42d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 42d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 42d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 42d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 42d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n"
Apr 27 13:43:39.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-4321'
Apr 27 13:43:39.423: INFO: stderr: ""
Apr 27 13:43:39.423: INFO: stdout: "Name: kubectl-4321\nLabels: e2e-framework=kubectl\n e2e-run=00b06017-6fc2-42fc-89bc-40cdf40a9134\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:43:39.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4321" for this suite.
Apr 27 13:44:19.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:44:19.488: INFO: namespace kubectl-4321 deletion completed in 40.063127847s
• [SLOW TEST:51.476 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl describe
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:44:19.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 27 13:44:19.738: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d" in namespace "projected-3759" to be "success or failure"
Apr 27 13:44:19.747: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.719991ms
Apr 27 13:44:21.876: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137824382s
Apr 27 13:44:26.164: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.426109723s
Apr 27 13:44:28.697: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.959151119s
Apr 27 13:44:30.700: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.962514424s
Apr 27 13:44:32.704: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.965973263s
Apr 27 13:44:34.811: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.072811065s
Apr 27 13:44:39.207: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.469547949s
Apr 27 13:44:41.226: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 21.487881349s
Apr 27 13:44:43.617: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 23.879522429s
Apr 27 13:44:45.621: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 25.883395985s
Apr 27 13:44:47.685: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 27.947422345s
Apr 27 13:44:49.688: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 29.950338804s
Apr 27 13:44:51.750: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 32.012647222s
Apr 27 13:44:53.753: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 34.015513017s
Apr 27 13:44:56.524: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 36.786324986s
Apr 27 13:44:58.527: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 38.789196332s
Apr 27 13:45:00.531: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 40.792901827s
Apr 27 13:45:02.534: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 42.795741225s
Apr 27 13:45:04.537: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 44.79930329s
Apr 27 13:45:06.812: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 47.074280091s
Apr 27 13:45:09.278: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 49.540231484s
Apr 27 13:45:11.282: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 51.544103965s
Apr 27 13:45:15.624: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 55.886028119s
Apr 27 13:45:17.627: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 57.888718793s
Apr 27 13:45:19.791: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.05338952s
Apr 27 13:45:22.105: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.367409825s
Apr 27 13:45:24.111: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.37324913s
Apr 27 13:45:26.114: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.375932387s
Apr 27 13:45:28.117: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.379053198s
Apr 27 13:45:30.120: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.381878194s
Apr 27 13:45:32.559: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.821315311s
Apr 27 13:45:34.562: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.824024357s
Apr 27 13:45:36.860: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m17.121756653s
Apr 27 13:45:38.863: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m19.124712457s
Apr 27 13:45:41.787: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.049075385s
Apr 27 13:45:44.614: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.875921549s
Apr 27 13:45:46.616: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Running", Reason="", readiness=true. Elapsed: 1m26.878640154s
Apr 27 13:45:48.620: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Running", Reason="", readiness=true. Elapsed: 1m28.882501122s
Apr 27 13:45:50.623: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Running", Reason="", readiness=true. Elapsed: 1m30.885603595s
Apr 27 13:45:52.627: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m32.88914431s
STEP: Saw pod success
Apr 27 13:45:52.627: INFO: Pod "downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d" satisfied condition "success or failure"
Apr 27 13:45:52.629: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d container client-container: 
STEP: delete the pod
Apr 27 13:45:52.842: INFO: Waiting for pod downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d to disappear
Apr 27 13:45:53.074: INFO: Pod downwardapi-volume-b9966691-952c-4c2f-a398-623a9c1bf26d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:45:53.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3759" for this suite.
Apr 27 13:45:59.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:45:59.312: INFO: namespace projected-3759 deletion completed in 6.235095033s
• [SLOW TEST:99.823 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:45:59.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 27 13:45:59.487: INFO: Waiting up to 5m0s for pod "pod-a44debba-96bf-4b29-a671-3dad17e99d7b" in namespace "emptydir-2931" to be "success or failure"
Apr 27 13:45:59.514: INFO: Pod "pod-a44debba-96bf-4b29-a671-3dad17e99d7b": Phase="Pending", Reason="", readiness=false. Elapsed: 27.250475ms
Apr 27 13:46:01.520: INFO: Pod "pod-a44debba-96bf-4b29-a671-3dad17e99d7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032899305s
Apr 27 13:46:03.524: INFO: Pod "pod-a44debba-96bf-4b29-a671-3dad17e99d7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036853547s
Apr 27 13:46:05.528: INFO: Pod "pod-a44debba-96bf-4b29-a671-3dad17e99d7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040411024s
Apr 27 13:46:07.656: INFO: Pod "pod-a44debba-96bf-4b29-a671-3dad17e99d7b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.168498756s
Apr 27 13:46:09.817: INFO: Pod "pod-a44debba-96bf-4b29-a671-3dad17e99d7b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.330091646s
Apr 27 13:46:11.820: INFO: Pod "pod-a44debba-96bf-4b29-a671-3dad17e99d7b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.332788376s
Apr 27 13:46:13.823: INFO: Pod "pod-a44debba-96bf-4b29-a671-3dad17e99d7b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.335539575s
Apr 27 13:46:16.451: INFO: Pod "pod-a44debba-96bf-4b29-a671-3dad17e99d7b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.96422469s
Apr 27 13:46:18.456: INFO: Pod "pod-a44debba-96bf-4b29-a671-3dad17e99d7b": Phase="Running", Reason="", readiness=true. Elapsed: 18.96852765s
Apr 27 13:46:20.462: INFO: Pod "pod-a44debba-96bf-4b29-a671-3dad17e99d7b": Phase="Running", Reason="", readiness=true. Elapsed: 20.975284781s
Apr 27 13:46:22.554: INFO: Pod "pod-a44debba-96bf-4b29-a671-3dad17e99d7b": Phase="Running", Reason="", readiness=true. Elapsed: 23.067096042s
Apr 27 13:46:24.790: INFO: Pod "pod-a44debba-96bf-4b29-a671-3dad17e99d7b": Phase="Running", Reason="", readiness=true. Elapsed: 25.302451323s
Apr 27 13:46:27.498: INFO: Pod "pod-a44debba-96bf-4b29-a671-3dad17e99d7b": Phase="Running", Reason="", readiness=true. Elapsed: 28.01131808s
Apr 27 13:46:29.502: INFO: Pod "pod-a44debba-96bf-4b29-a671-3dad17e99d7b": Phase="Running", Reason="", readiness=true. Elapsed: 30.015194797s
Apr 27 13:46:31.506: INFO: Pod "pod-a44debba-96bf-4b29-a671-3dad17e99d7b": Phase="Running", Reason="", readiness=true. Elapsed: 32.018921732s
Apr 27 13:46:33.509: INFO: Pod "pod-a44debba-96bf-4b29-a671-3dad17e99d7b": Phase="Running", Reason="", readiness=true. Elapsed: 34.021830518s
Apr 27 13:46:35.709: INFO: Pod "pod-a44debba-96bf-4b29-a671-3dad17e99d7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.221402342s
STEP: Saw pod success
Apr 27 13:46:35.709: INFO: Pod "pod-a44debba-96bf-4b29-a671-3dad17e99d7b" satisfied condition "success or failure"
Apr 27 13:46:35.711: INFO: Trying to get logs from node iruya-worker pod pod-a44debba-96bf-4b29-a671-3dad17e99d7b container test-container: 
STEP: delete the pod
Apr 27 13:46:35.889: INFO: Waiting for pod pod-a44debba-96bf-4b29-a671-3dad17e99d7b to disappear
Apr 27 13:46:35.906: INFO: Pod pod-a44debba-96bf-4b29-a671-3dad17e99d7b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:46:35.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2931" for this suite.
Apr 27 13:46:43.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:46:44.395: INFO: namespace emptydir-2931 deletion completed in 8.487015809s
• [SLOW TEST:45.084 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:46:44.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 27 13:46:45.045: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7" in namespace "downward-api-7434" to be "success or failure"
Apr 27 13:46:46.163: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 1.118603689s
Apr 27 13:46:48.167: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.12174904s
Apr 27 13:46:50.169: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.124570491s
Apr 27 13:46:53.297: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.252294041s
Apr 27 13:46:55.779: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.734209062s
Apr 27 13:46:57.783: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.738544518s
Apr 27 13:46:59.787: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.741733761s
Apr 27 13:47:01.790: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.744863712s
Apr 27 13:47:03.968: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.923282658s
Apr 27 13:47:05.972: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 20.926749319s
Apr 27 13:47:10.070: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 25.025330268s
Apr 27 13:47:12.072: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 27.027532745s
Apr 27 13:47:14.076: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 29.031026945s
Apr 27 13:47:16.079: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 31.033761482s
Apr 27 13:47:18.920: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 33.875217187s
Apr 27 13:47:22.525: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 37.47993423s
Apr 27 13:47:24.529: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 39.484447342s
Apr 27 13:47:27.053: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 42.007748959s
Apr 27 13:47:29.056: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 44.010773475s
Apr 27 13:47:31.060: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 46.015296092s
Apr 27 13:47:33.294: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 48.249061775s
Apr 27 13:47:35.297: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 50.252317565s
Apr 27 13:47:37.521: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 52.476286238s
Apr 27 13:47:39.526: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 54.48093753s
Apr 27 13:47:41.848: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 56.803573353s
Apr 27 13:47:43.852: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 58.807425301s
Apr 27 13:47:46.506: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.460688087s
Apr 27 13:47:49.047: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.001687491s
Apr 27 13:47:54.317: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m9.27229528s
Apr 27 13:47:56.694: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m11.648777621s
Apr 27 13:47:58.697: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m13.651688975s
Apr 27 13:48:01.135: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.090646027s
Apr 27 13:48:03.139: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.093980669s
Apr 27 13:48:05.370: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Running", Reason="", readiness=true. Elapsed: 1m20.325510508s
Apr 27 13:48:07.373: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m22.3284756s
STEP: Saw pod success
Apr 27 13:48:07.373: INFO: Pod "downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7" satisfied condition "success or failure"
Apr 27 13:48:07.375: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7 container client-container: 
STEP: delete the pod
Apr 27 13:48:07.528: INFO: Waiting for pod downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7 to disappear
Apr 27 13:48:07.548: INFO: Pod downwardapi-volume-ec8bdb79-c77c-4102-aa2d-db0076170fe7 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:48:07.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7434" for this suite.
Apr 27 13:48:15.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:48:16.205: INFO: namespace downward-api-7434 deletion completed in 8.65396067s • [SLOW TEST:91.809 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:48:16.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Apr 27 13:48:16.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7646' Apr 27 13:48:16.673: INFO: stderr: "" Apr 27 13:48:16.673: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 27 13:48:16.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7646' Apr 27 13:48:16.841: INFO: stderr: "" Apr 27 13:48:16.841: INFO: stdout: "update-demo-nautilus-796qv update-demo-nautilus-x8z2k " Apr 27 13:48:16.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-796qv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7646' Apr 27 13:48:16.987: INFO: stderr: "" Apr 27 13:48:16.987: INFO: stdout: "" Apr 27 13:48:16.987: INFO: update-demo-nautilus-796qv is created but not running Apr 27 13:48:21.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7646' Apr 27 13:48:23.671: INFO: stderr: "" Apr 27 13:48:23.671: INFO: stdout: "update-demo-nautilus-796qv update-demo-nautilus-x8z2k " Apr 27 13:48:23.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-796qv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7646' Apr 27 13:48:23.919: INFO: stderr: "" Apr 27 13:48:23.919: INFO: stdout: "" Apr 27 13:48:23.919: INFO: update-demo-nautilus-796qv is created but not running Apr 27 13:48:28.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7646' Apr 27 13:48:29.107: INFO: stderr: "" Apr 27 13:48:29.107: INFO: stdout: "update-demo-nautilus-796qv update-demo-nautilus-x8z2k " Apr 27 13:48:29.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-796qv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7646' Apr 27 13:48:29.188: INFO: stderr: "" Apr 27 13:48:29.188: INFO: stdout: "" Apr 27 13:48:29.188: INFO: update-demo-nautilus-796qv is created but not running Apr 27 13:48:34.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7646' Apr 27 13:48:34.281: INFO: stderr: "" Apr 27 13:48:34.281: INFO: stdout: "update-demo-nautilus-796qv update-demo-nautilus-x8z2k " Apr 27 13:48:34.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-796qv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7646' Apr 27 13:48:34.358: INFO: stderr: "" Apr 27 13:48:34.358: INFO: stdout: "true" Apr 27 13:48:34.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-796qv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7646' Apr 27 13:48:34.556: INFO: stderr: "" Apr 27 13:48:34.556: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 27 13:48:34.556: INFO: validating pod update-demo-nautilus-796qv Apr 27 13:48:34.559: INFO: got data: { "image": "nautilus.jpg" } Apr 27 13:48:34.559: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 27 13:48:34.559: INFO: update-demo-nautilus-796qv is verified up and running Apr 27 13:48:34.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x8z2k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7646' Apr 27 13:48:34.645: INFO: stderr: "" Apr 27 13:48:34.645: INFO: stdout: "true" Apr 27 13:48:34.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x8z2k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7646' Apr 27 13:48:34.728: INFO: stderr: "" Apr 27 13:48:34.728: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 27 13:48:34.728: INFO: validating pod update-demo-nautilus-x8z2k Apr 27 13:48:34.731: INFO: got data: { "image": "nautilus.jpg" } Apr 27 13:48:34.731: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 27 13:48:34.731: INFO: update-demo-nautilus-x8z2k is verified up and running STEP: using delete to clean up resources Apr 27 13:48:34.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7646' Apr 27 13:48:34.826: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 27 13:48:34.826: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 27 13:48:34.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7646' Apr 27 13:48:34.939: INFO: stderr: "No resources found.\n" Apr 27 13:48:34.939: INFO: stdout: "" Apr 27 13:48:34.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7646 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 27 13:48:35.616: INFO: stderr: "" Apr 27 13:48:35.616: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:48:35.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7646" for this suite. 
Apr 27 13:49:04.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:49:04.396: INFO: namespace kubectl-7646 deletion completed in 28.772266209s • [SLOW TEST:48.191 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:49:04.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the 
expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:50:27.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3048" for this suite. Apr 27 13:50:36.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:50:36.185: INFO: namespace container-runtime-3048 deletion completed in 8.326124802s • [SLOW TEST:91.789 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 
[BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:50:36.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-5035 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-5035 STEP: Creating statefulset with conflicting port in namespace statefulset-5035 STEP: Waiting until pod test-pod will start running in namespace statefulset-5035 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-5035 Apr 27 13:50:48.507: INFO: Observed stateful pod in namespace: statefulset-5035, name: ss-0, uid: c9f9fa22-4e46-496c-a287-9b7579788690, status phase: Pending. Waiting for statefulset controller to delete. Apr 27 13:50:52.145: INFO: Observed stateful pod in namespace: statefulset-5035, name: ss-0, uid: c9f9fa22-4e46-496c-a287-9b7579788690, status phase: Failed. Waiting for statefulset controller to delete. Apr 27 13:50:52.153: INFO: Observed stateful pod in namespace: statefulset-5035, name: ss-0, uid: c9f9fa22-4e46-496c-a287-9b7579788690, status phase: Failed. Waiting for statefulset controller to delete. 
Apr 27 13:50:52.240: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5035 STEP: Removing pod with conflicting port in namespace statefulset-5035 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-5035 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 27 13:51:57.337: INFO: Deleting all statefulset in ns statefulset-5035 Apr 27 13:51:57.339: INFO: Scaling statefulset ss to 0 Apr 27 13:52:17.388: INFO: Waiting for statefulset status.replicas updated to 0 Apr 27 13:52:17.390: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:52:17.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5035" for this suite. Apr 27 13:52:25.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:52:25.718: INFO: namespace statefulset-5035 deletion completed in 8.2987012s • [SLOW TEST:109.533 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 
STEP: Creating a kubernetes client Apr 27 13:52:25.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:53:08.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4700" for this suite. Apr 27 13:53:21.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:53:23.935: INFO: namespace emptydir-wrapper-4700 deletion completed in 15.519373759s • [SLOW TEST:58.217 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:53:23.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume 
with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-03d87182-1094-4014-87bc-612a1e049fba STEP: Creating a pod to test consume configMaps Apr 27 13:53:25.749: INFO: Waiting up to 5m0s for pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29" in namespace "configmap-7423" to be "success or failure" Apr 27 13:53:27.122: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 1.372770541s Apr 27 13:53:30.842: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 5.092615239s Apr 27 13:53:33.560: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 7.810762025s Apr 27 13:53:36.207: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 10.457612634s Apr 27 13:53:39.041: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 13.291690745s Apr 27 13:53:41.044: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 15.295008676s Apr 27 13:53:43.048: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 17.298708702s Apr 27 13:53:45.051: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 19.301892315s Apr 27 13:53:48.623: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 22.87413962s Apr 27 13:53:50.627: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.87768363s Apr 27 13:53:53.422: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 27.672896278s Apr 27 13:53:56.075: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 30.326292056s Apr 27 13:53:58.164: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 32.415184453s Apr 27 13:54:00.506: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 34.756999099s Apr 27 13:54:02.509: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 36.759817555s Apr 27 13:54:04.512: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 38.763011838s Apr 27 13:54:06.516: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 40.766509615s Apr 27 13:54:08.967: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 43.217915896s Apr 27 13:54:11.129: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 45.380274938s Apr 27 13:54:15.304: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 49.55486191s Apr 27 13:54:17.307: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 51.55815009s Apr 27 13:54:20.212: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 54.463164586s Apr 27 13:54:22.926: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. 
Elapsed: 57.176577728s Apr 27 13:54:24.929: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 59.179448016s Apr 27 13:54:26.956: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.207243253s Apr 27 13:54:28.960: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 1m3.210685806s Apr 27 13:54:30.986: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.236782743s Apr 27 13:54:32.990: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.240719854s Apr 27 13:54:34.993: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 1m9.244334314s Apr 27 13:54:37.141: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Pending", Reason="", readiness=false. Elapsed: 1m11.391836245s Apr 27 13:54:39.664: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Running", Reason="", readiness=true. Elapsed: 1m13.914382091s Apr 27 13:54:41.689: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Running", Reason="", readiness=true. Elapsed: 1m15.939739996s Apr 27 13:54:43.692: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Running", Reason="", readiness=true. Elapsed: 1m17.942463429s Apr 27 13:54:45.694: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Running", Reason="", readiness=true. Elapsed: 1m19.945223754s Apr 27 13:54:47.697: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 1m21.948120149s STEP: Saw pod success Apr 27 13:54:47.697: INFO: Pod "pod-configmaps-c571c22e-5300-4358-9657-04e894890b29" satisfied condition "success or failure" Apr 27 13:54:47.700: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-c571c22e-5300-4358-9657-04e894890b29 container configmap-volume-test: STEP: delete the pod Apr 27 13:54:47.774: INFO: Waiting for pod pod-configmaps-c571c22e-5300-4358-9657-04e894890b29 to disappear Apr 27 13:54:48.351: INFO: Pod pod-configmaps-c571c22e-5300-4358-9657-04e894890b29 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:54:48.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7423" for this suite. Apr 27 13:54:56.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 13:54:56.481: INFO: namespace configmap-7423 deletion completed in 8.092043726s • [SLOW TEST:92.545 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 13:54:56.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: 
Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 27 13:54:56.631: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2616,SelfLink:/api/v1/namespaces/watch-2616/configmaps/e2e-watch-test-watch-closed,UID:f83b4bf3-3c60-4cab-99ab-aad6ba98c049,ResourceVersion:7724782,Generation:0,CreationTimestamp:2020-04-27 13:54:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 27 13:54:56.631: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2616,SelfLink:/api/v1/namespaces/watch-2616/configmaps/e2e-watch-test-watch-closed,UID:f83b4bf3-3c60-4cab-99ab-aad6ba98c049,ResourceVersion:7724783,Generation:0,CreationTimestamp:2020-04-27 13:54:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: 
deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 27 13:54:56.686: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2616,SelfLink:/api/v1/namespaces/watch-2616/configmaps/e2e-watch-test-watch-closed,UID:f83b4bf3-3c60-4cab-99ab-aad6ba98c049,ResourceVersion:7724784,Generation:0,CreationTimestamp:2020-04-27 13:54:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 27 13:54:56.686: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2616,SelfLink:/api/v1/namespaces/watch-2616/configmaps/e2e-watch-test-watch-closed,UID:f83b4bf3-3c60-4cab-99ab-aad6ba98c049,ResourceVersion:7724785,Generation:0,CreationTimestamp:2020-04-27 13:54:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 13:54:56.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2616" for this suite. 
Apr 27 13:55:02.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:55:02.860: INFO: namespace watch-2616 deletion completed in 6.135354976s
• [SLOW TEST:6.379 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:55:02.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-3efe636c-16f6-4961-b1d7-4d45419d7f63
STEP: Creating a pod to test consume secrets
Apr 27 13:55:03.083: INFO: Waiting up to 5m0s for pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb" in namespace "secrets-6296" to be "success or failure"
Apr 27 13:55:03.132: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 48.140858ms
Apr 27 13:55:05.136: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052170995s
Apr 27 13:55:07.285: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.20168587s
Apr 27 13:55:09.289: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205737786s
Apr 27 13:55:11.743: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.659540654s
Apr 27 13:55:13.746: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.66285625s
Apr 27 13:55:16.584: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 13.500755459s
Apr 27 13:55:18.588: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.504559348s
Apr 27 13:55:20.590: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 17.506990159s
Apr 27 13:55:23.262: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 20.178857669s
Apr 27 13:55:25.267: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 22.183437548s
Apr 27 13:55:27.271: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 24.187468719s
Apr 27 13:55:29.400: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 26.316064425s
Apr 27 13:55:31.910: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 28.826378889s
Apr 27 13:55:33.914: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 30.830818658s
Apr 27 13:55:36.304: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 33.220723202s
Apr 27 13:55:38.308: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 35.224601953s
Apr 27 13:55:40.311: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 37.22792021s
Apr 27 13:55:43.430: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 40.346408832s
Apr 27 13:55:45.434: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 42.350563945s
Apr 27 13:55:47.975: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 44.891072358s
Apr 27 13:55:49.978: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 46.894657861s
Apr 27 13:55:52.226: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 49.142634997s
Apr 27 13:55:54.699: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 51.61556502s
Apr 27 13:55:56.702: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 53.618666086s
Apr 27 13:55:58.706: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 55.622162456s
Apr 27 13:56:00.712: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 57.628533736s
Apr 27 13:56:02.716: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 59.632335809s
Apr 27 13:56:04.721: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.637595272s
Apr 27 13:56:06.725: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m3.64166142s
Apr 27 13:56:08.728: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.644656475s
Apr 27 13:56:10.731: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.647861121s
Apr 27 13:56:13.179: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.095412858s
Apr 27 13:56:15.490: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m12.406020802s
STEP: Saw pod success
Apr 27 13:56:15.490: INFO: Pod "pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb" satisfied condition "success or failure"
Apr 27 13:56:15.492: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb container secret-volume-test:
STEP: delete the pod
Apr 27 13:56:15.651: INFO: Waiting for pod pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb to disappear
Apr 27 13:56:15.689: INFO: Pod pod-secrets-a7fcaa6e-bd35-43ed-998d-7e17b1948fcb no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:56:15.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6296" for this suite.
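The poll records above follow a fixed shape, so the outcome and total wait of a pod can be recovered from the log text itself. A short parser sketch; only the `Phase="…" … Elapsed: …` format visible in this log is assumed, and other e2e record shapes are not handled:

```python
import re

# Parse the framework's pod-poll records, e.g.
#   ... Pod "p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052170995s
# and report the phase and elapsed time of the final poll.
RECORD = re.compile(r'Phase="(?P<phase>\w+)".*?Elapsed: (?P<elapsed>[\w.]+)')

def last_poll(lines):
    matches = [m for m in (RECORD.search(l) for l in lines) if m]
    final = matches[-1]
    return final.group("phase"), final.group("elapsed")

sample = [
    'Apr 27 13:55:03.132: INFO: Pod "p": Phase="Pending", Reason="", readiness=false. Elapsed: 48.140858ms',
    'Apr 27 13:56:15.490: INFO: Pod "p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m12.406020802s',
]
phase, elapsed = last_poll(sample)
# phase == "Succeeded", elapsed == "1m12.406020802s"
```

The elapsed field is kept as the Go duration string rather than converted, since the framework prints mixed units (`ms`, `s`, `1m12…s`).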
Apr 27 13:56:23.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:56:23.843: INFO: namespace secrets-6296 deletion completed in 8.141877739s
• [SLOW TEST:80.982 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:56:23.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 27 13:57:26.150: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 27 13:57:26.195: INFO: Pod pod-with-prestop-http-hook still exists
Apr 27 13:57:28.195: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 27 13:57:28.199: INFO: Pod pod-with-prestop-http-hook still exists
Apr 27 13:57:30.195: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 27 13:57:30.198: INFO: Pod pod-with-prestop-http-hook still exists
Apr 27 13:57:32.195: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 27 13:57:32.227: INFO: Pod pod-with-prestop-http-hook still exists
Apr 27 13:57:34.195: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 27 13:57:34.199: INFO: Pod pod-with-prestop-http-hook still exists
Apr 27 13:57:36.195: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 27 13:57:36.251: INFO: Pod pod-with-prestop-http-hook still exists
Apr 27 13:57:38.195: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 27 13:57:38.198: INFO: Pod pod-with-prestop-http-hook still exists
Apr 27 13:57:40.195: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 27 13:57:40.274: INFO: Pod pod-with-prestop-http-hook still exists
Apr 27 13:57:42.195: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 27 13:57:42.199: INFO: Pod pod-with-prestop-http-hook still exists
Apr 27 13:57:44.195: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 27 13:57:44.198: INFO: Pod pod-with-prestop-http-hook still exists
Apr 27 13:57:46.195: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 27 13:57:46.199: INFO: Pod pod-with-prestop-http-hook still exists
Apr 27 13:57:48.195: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 27 13:57:48.198: INFO: Pod pod-with-prestop-http-hook still exists
Apr 27 13:57:50.195: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 27 13:57:50.198: INFO: Pod pod-with-prestop-http-hook still exists
Apr 27 13:57:52.195: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 27 13:57:52.199: INFO: Pod pod-with-prestop-http-hook still exists
Apr 27 13:57:54.195: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 27 13:57:54.198: INFO: Pod pod-with-prestop-http-hook still exists
Apr 27 13:57:56.195: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 27 13:57:56.198: INFO: Pod pod-with-prestop-http-hook still exists
Apr 27 13:57:58.195: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 27 13:57:58.199: INFO: Pod pod-with-prestop-http-hook still exists
Apr 27 13:58:00.195: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 27 13:58:00.199: INFO: Pod pod-with-prestop-http-hook still exists
Apr 27 13:58:02.195: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 27 13:58:02.199: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 13:58:02.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4656" for this suite.
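The delete step above polls roughly every two seconds until the pod is gone. The same wait-until-gone pattern can be sketched in a testable form; `wait_until_gone` and its parameters are invented names, and the real framework polls the API server rather than an iterator of pre-recorded answers:

```python
# A minimal sketch of the framework's poll-until-gone loop, mirroring the
# 2s cadence visible in the log above.
def wait_until_gone(exists_checks, max_polls):
    """Return True if some poll reports the object gone before max_polls."""
    for i, still_there in enumerate(exists_checks):
        if i >= max_polls:
            return False   # timed out: object still exists
        if not still_there:
            return True    # object no longer exists
    return False

# The prestop-hook pod above answered "still exists" 18 times before
# vanishing on the 19th poll; max_polls is an assumed budget (e.g. ~150
# polls for a 5m timeout at a 2s cadence).
polls = [True] * 18 + [False]
gone = wait_until_gone(iter(polls), max_polls=150)
```

Bounding the loop by poll count rather than wall clock keeps the sketch deterministic; the real framework bounds by elapsed time.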
Apr 27 13:59:28.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 13:59:28.511: INFO: namespace container-lifecycle-hook-4656 deletion completed in 1m26.300976942s
• [SLOW TEST:184.669 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 13:59:28.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 27 13:59:28.650: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7" in namespace "projected-844" to be "success or failure"
Apr 27 13:59:28.654: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191452ms
Apr 27 13:59:30.678: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028301367s
Apr 27 13:59:32.995: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.345867918s
Apr 27 13:59:35.157: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.507468398s
Apr 27 13:59:37.159: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.509800963s
Apr 27 13:59:39.163: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.513164952s
Apr 27 13:59:41.168: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.518370741s
Apr 27 13:59:43.241: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.591629359s
Apr 27 13:59:45.246: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.596283209s
Apr 27 13:59:47.290: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.640276244s
Apr 27 13:59:49.367: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Pending", Reason="", readiness=false. Elapsed: 20.717332866s
Apr 27 13:59:51.636: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Pending", Reason="", readiness=false. Elapsed: 22.986875415s
Apr 27 13:59:53.678: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Pending", Reason="", readiness=false. Elapsed: 25.028807954s
Apr 27 13:59:55.894: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Pending", Reason="", readiness=false. Elapsed: 27.244180518s
Apr 27 13:59:57.897: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Pending", Reason="", readiness=false. Elapsed: 29.247804644s
Apr 27 13:59:59.948: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Pending", Reason="", readiness=false. Elapsed: 31.298098835s
Apr 27 14:00:02.027: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Pending", Reason="", readiness=false. Elapsed: 33.377496508s
Apr 27 14:00:04.324: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Pending", Reason="", readiness=false. Elapsed: 35.67421933s
Apr 27 14:00:06.329: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Pending", Reason="", readiness=false. Elapsed: 37.679002873s
Apr 27 14:00:08.332: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Pending", Reason="", readiness=false. Elapsed: 39.682228042s
Apr 27 14:00:10.336: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Running", Reason="", readiness=true. Elapsed: 41.686151945s
Apr 27 14:00:12.339: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Running", Reason="", readiness=true. Elapsed: 43.689357072s
Apr 27 14:00:14.342: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Running", Reason="", readiness=true. Elapsed: 45.692251689s
Apr 27 14:00:16.512: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Running", Reason="", readiness=true. Elapsed: 47.862747584s
Apr 27 14:00:19.150: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Running", Reason="", readiness=true. Elapsed: 50.500702689s
Apr 27 14:00:21.153: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Running", Reason="", readiness=true. Elapsed: 52.50330202s
Apr 27 14:00:23.157: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 54.507714476s
STEP: Saw pod success
Apr 27 14:00:23.157: INFO: Pod "downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7" satisfied condition "success or failure"
Apr 27 14:00:23.160: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7 container client-container:
STEP: delete the pod
Apr 27 14:00:23.410: INFO: Waiting for pod downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7 to disappear
Apr 27 14:00:23.721: INFO: Pod downwardapi-volume-8a09fc6d-c336-4c59-9dc4-587af93061d7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:00:23.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-844" for this suite.
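The per-spec summaries emitted earlier in the run ("• [SLOW TEST:… seconds]") can be tallied to see where a run spends its time. A small sketch; only that exact summary shape is assumed, and the sample values are the three summaries already printed above:

```python
import re

# Sum the durations reported by Ginkgo's "[SLOW TEST:...]" summary lines.
SLOW = re.compile(r"\[SLOW TEST:(?P<secs>[0-9.]+) seconds\]")

def total_slow_seconds(lines):
    return sum(float(m.group("secs")) for l in lines if (m := SLOW.search(l)))

summaries = [
    "• [SLOW TEST:6.379 seconds]",    # Watchers
    "• [SLOW TEST:80.982 seconds]",   # Secrets
    "• [SLOW TEST:184.669 seconds]",  # Container Lifecycle Hook
]
total = total_slow_seconds(summaries)  # 272.03 seconds so far
```

Lines without a summary simply contribute nothing, so the whole log can be fed in unfiltered.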
Apr 27 14:00:30.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:00:30.616: INFO: namespace projected-844 deletion completed in 6.891546061s
• [SLOW TEST:62.104 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:00:30.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 27 14:00:30.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1581'
Apr 27 14:00:34.013: INFO: stderr: ""
Apr 27 14:00:34.013: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Apr 27 14:00:34.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1581'
Apr 27 14:00:44.368: INFO: stderr: ""
Apr 27 14:00:44.368: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:00:44.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1581" for this suite.
Apr 27 14:00:51.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:00:51.406: INFO: namespace kubectl-1581 deletion completed in 6.192353198s
• [SLOW TEST:20.790 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:00:51.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Apr 27 14:01:12.098: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:01:13.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7378" for this suite.
Apr 27 14:01:47.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:01:47.236: INFO: namespace replicaset-7378 deletion completed in 34.073127179s
• [SLOW TEST:55.830 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:01:47.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:02:03.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9132" for this suite.
Apr 27 14:03:33.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:03:33.640: INFO: namespace kubelet-test-9132 deletion completed in 1m30.109773086s
• [SLOW TEST:106.404 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:03:33.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 27 14:03:33.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-7798'
Apr 27 14:03:34.097: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 27 14:03:34.097: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Apr 27 14:03:36.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-7798'
Apr 27 14:03:36.515: INFO: stderr: ""
Apr 27 14:03:36.515: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:03:36.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7798" for this suite.
Apr 27 14:03:48.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:03:48.728: INFO: namespace kubectl-7798 deletion completed in 12.210014257s
• [SLOW TEST:15.087 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:03:48.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 27 14:03:50.237: INFO: Waiting up to 5m0s for pod "downward-api-43d62b5e-438c-4c94-a04f-6f38e32f9288" in namespace "downward-api-8789" to be "success or failure"
Apr 27 14:03:50.412: INFO: Pod "downward-api-43d62b5e-438c-4c94-a04f-6f38e32f9288": Phase="Pending", Reason="", readiness=false. Elapsed: 174.983393ms
Apr 27 14:03:54.107: INFO: Pod "downward-api-43d62b5e-438c-4c94-a04f-6f38e32f9288": Phase="Pending", Reason="", readiness=false. Elapsed: 3.869701699s
Apr 27 14:03:56.626: INFO: Pod "downward-api-43d62b5e-438c-4c94-a04f-6f38e32f9288": Phase="Pending", Reason="", readiness=false. Elapsed: 6.388320217s
Apr 27 14:03:58.629: INFO: Pod "downward-api-43d62b5e-438c-4c94-a04f-6f38e32f9288": Phase="Pending", Reason="", readiness=false. Elapsed: 8.391820033s
Apr 27 14:04:00.718: INFO: Pod "downward-api-43d62b5e-438c-4c94-a04f-6f38e32f9288": Phase="Pending", Reason="", readiness=false. Elapsed: 10.48035193s
Apr 27 14:04:02.783: INFO: Pod "downward-api-43d62b5e-438c-4c94-a04f-6f38e32f9288": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.545630035s
STEP: Saw pod success
Apr 27 14:04:02.783: INFO: Pod "downward-api-43d62b5e-438c-4c94-a04f-6f38e32f9288" satisfied condition "success or failure"
Apr 27 14:04:02.786: INFO: Trying to get logs from node iruya-worker2 pod downward-api-43d62b5e-438c-4c94-a04f-6f38e32f9288 container dapi-container:
STEP: delete the pod
Apr 27 14:04:02.833: INFO: Waiting for pod downward-api-43d62b5e-438c-4c94-a04f-6f38e32f9288 to disappear
Apr 27 14:04:02.843: INFO: Pod downward-api-43d62b5e-438c-4c94-a04f-6f38e32f9288 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:04:02.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8789" for this suite.
Apr 27 14:04:09.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:04:09.258: INFO: namespace downward-api-8789 deletion completed in 6.412502118s
• [SLOW TEST:20.530 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:04:09.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 27 14:04:18.170: INFO: Successfully updated pod "annotationupdate7e815a8a-3a4a-4db4-a81c-f059b3461afa"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:04:22.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6154" for this suite.
Apr 27 14:04:44.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:04:44.337: INFO: namespace downward-api-6154 deletion completed in 22.138256518s • [SLOW TEST:35.079 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:04:44.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 27 14:04:44.507: INFO: Creating ReplicaSet my-hostname-basic-42528de7-edff-46a6-97c8-f33b630a36c7 Apr 27 14:04:44.568: INFO: Pod name my-hostname-basic-42528de7-edff-46a6-97c8-f33b630a36c7: Found 0 pods out of 1 Apr 27 14:04:49.572: INFO: Pod name my-hostname-basic-42528de7-edff-46a6-97c8-f33b630a36c7: Found 1 pods out of 1 Apr 27 14:04:49.572: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-42528de7-edff-46a6-97c8-f33b630a36c7" is running Apr 27 14:04:55.579: INFO: Pod "my-hostname-basic-42528de7-edff-46a6-97c8-f33b630a36c7-w8lcz" is running (conditions: [{Type:Initialized 
Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-27 14:04:44 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-27 14:04:44 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-42528de7-edff-46a6-97c8-f33b630a36c7]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-27 14:04:44 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-42528de7-edff-46a6-97c8-f33b630a36c7]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-27 14:04:44 +0000 UTC Reason: Message:}]) Apr 27 14:04:55.579: INFO: Trying to dial the pod Apr 27 14:05:00.594: INFO: Controller my-hostname-basic-42528de7-edff-46a6-97c8-f33b630a36c7: Got expected result from replica 1 [my-hostname-basic-42528de7-edff-46a6-97c8-f33b630a36c7-w8lcz]: "my-hostname-basic-42528de7-edff-46a6-97c8-f33b630a36c7-w8lcz", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:05:00.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9816" for this suite. 
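The ReplicaSet check above ("Got expected result from replica 1 [...]") dials each replica of the hostname image and requires the response body to equal the pod's own name. A hedged sketch of that verification, with a hypothetical `dial` callable in place of the real HTTP request through the apiserver proxy:

```python
def verify_replicas(pod_names, dial):
    """Dial each replica; its body must echo its own pod name. Returns success count."""
    successes = 0
    for name in pod_names:
        body = dial(name)
        if body != name:
            raise AssertionError(f"replica {name} answered {body!r}")
        successes += 1  # "N of N required successes so far" in the log
    return successes

# Simulated: the serve-hostname container answers with the pod's hostname,
# which for these pods is the pod name itself.
count = verify_replicas(
    ["my-hostname-basic-42528de7-edff-46a6-97c8-f33b630a36c7-w8lcz"],
    dial=lambda name: name,
)
```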
Apr 27 14:05:06.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:05:06.870: INFO: namespace replicaset-9816 deletion completed in 6.272093977s • [SLOW TEST:22.532 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:05:06.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-ded0a0f0-0439-49c8-ba40-b88116e6b02d STEP: Creating a pod to test consume configMaps Apr 27 14:05:07.164: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3ebd2232-97a7-455b-b2f1-b41b255599d7" in namespace "projected-8747" to be "success or failure" Apr 27 14:05:07.348: INFO: Pod "pod-projected-configmaps-3ebd2232-97a7-455b-b2f1-b41b255599d7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 184.1035ms Apr 27 14:05:09.352: INFO: Pod "pod-projected-configmaps-3ebd2232-97a7-455b-b2f1-b41b255599d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188059755s Apr 27 14:05:11.465: INFO: Pod "pod-projected-configmaps-3ebd2232-97a7-455b-b2f1-b41b255599d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.301427911s Apr 27 14:05:14.006: INFO: Pod "pod-projected-configmaps-3ebd2232-97a7-455b-b2f1-b41b255599d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.841998792s Apr 27 14:05:16.010: INFO: Pod "pod-projected-configmaps-3ebd2232-97a7-455b-b2f1-b41b255599d7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.845636035s Apr 27 14:05:18.012: INFO: Pod "pod-projected-configmaps-3ebd2232-97a7-455b-b2f1-b41b255599d7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.848548781s Apr 27 14:05:20.015: INFO: Pod "pod-projected-configmaps-3ebd2232-97a7-455b-b2f1-b41b255599d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.851280413s STEP: Saw pod success Apr 27 14:05:20.015: INFO: Pod "pod-projected-configmaps-3ebd2232-97a7-455b-b2f1-b41b255599d7" satisfied condition "success or failure" Apr 27 14:05:20.017: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-3ebd2232-97a7-455b-b2f1-b41b255599d7 container projected-configmap-volume-test: STEP: delete the pod Apr 27 14:05:20.126: INFO: Waiting for pod pod-projected-configmaps-3ebd2232-97a7-455b-b2f1-b41b255599d7 to disappear Apr 27 14:05:20.152: INFO: Pod pod-projected-configmaps-3ebd2232-97a7-455b-b2f1-b41b255599d7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:05:20.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8747" for this suite. 
Apr 27 14:05:26.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:05:26.251: INFO: namespace projected-8747 deletion completed in 6.096227101s • [SLOW TEST:19.381 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:05:26.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-5d4e9846-61be-4a8d-b5d4-0d2dbc300d2c STEP: Creating a pod to test consume secrets Apr 27 14:05:26.467: INFO: Waiting up to 5m0s for pod "pod-secrets-8e3c08be-4a9f-4504-b0e7-250d8effd2f3" in namespace "secrets-5206" to be "success or failure" Apr 27 14:05:26.587: INFO: Pod "pod-secrets-8e3c08be-4a9f-4504-b0e7-250d8effd2f3": Phase="Pending", Reason="", readiness=false. Elapsed: 120.165917ms Apr 27 14:05:28.590: INFO: Pod "pod-secrets-8e3c08be-4a9f-4504-b0e7-250d8effd2f3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.123628764s Apr 27 14:05:30.610: INFO: Pod "pod-secrets-8e3c08be-4a9f-4504-b0e7-250d8effd2f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143502596s Apr 27 14:05:32.613: INFO: Pod "pod-secrets-8e3c08be-4a9f-4504-b0e7-250d8effd2f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.14632337s STEP: Saw pod success Apr 27 14:05:32.613: INFO: Pod "pod-secrets-8e3c08be-4a9f-4504-b0e7-250d8effd2f3" satisfied condition "success or failure" Apr 27 14:05:32.615: INFO: Trying to get logs from node iruya-worker pod pod-secrets-8e3c08be-4a9f-4504-b0e7-250d8effd2f3 container secret-volume-test: STEP: delete the pod Apr 27 14:05:32.690: INFO: Waiting for pod pod-secrets-8e3c08be-4a9f-4504-b0e7-250d8effd2f3 to disappear Apr 27 14:05:32.730: INFO: Pod pod-secrets-8e3c08be-4a9f-4504-b0e7-250d8effd2f3 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:05:32.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5206" for this suite. 
Apr 27 14:05:38.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:05:38.894: INFO: namespace secrets-5206 deletion completed in 6.161529904s • [SLOW TEST:12.642 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:05:38.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-6600, will wait for the garbage collector to delete the pods Apr 27 14:05:45.206: INFO: Deleting Job.batch foo took: 93.908175ms Apr 27 14:05:47.307: INFO: Terminating Job.batch foo pods took: 2.100293526s STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:06:31.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6600" for this suite. 
Apr 27 14:06:37.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:06:38.048: INFO: namespace job-6600 deletion completed in 6.091241835s • [SLOW TEST:59.154 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:06:38.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 27 14:06:38.294: INFO: Create a RollingUpdate DaemonSet Apr 27 14:06:38.298: INFO: Check that daemon pods launch on every node of the cluster Apr 27 14:06:38.307: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 14:06:38.350: INFO: Number of nodes with available pods: 0 Apr 27 14:06:38.350: INFO: Node iruya-worker is running more than one daemon pod Apr 27 14:06:39.354: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints 
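The Job deletion above ("will wait for the garbage collector to delete the pods") follows a common delete-then-poll pattern: issue the DELETE, then repeatedly GET the object until the server reports it gone. A minimal sketch under that assumption, with hypothetical `delete` and `exists` callables standing in for the real API calls:

```python
import time

def delete_and_wait(delete, exists, timeout=60.0, interval=1.0,
                    clock=time.monotonic, sleep=time.sleep):
    """Delete an object, then poll until exists() reports it gone (True) or time out (False)."""
    delete()
    deadline = clock() + timeout
    while clock() < deadline:
        if not exists():
            return True  # server returned 404: GC finished
        sleep(interval)
    return False

# Simulated object that disappears on the third poll.
polls = {"n": 0}
def exists():
    polls["n"] += 1
    return polls["n"] < 3

gone = delete_and_wait(lambda: None, exists, sleep=lambda s: None)
```

The two timings the log prints ("Deleting Job.batch foo took", "Terminating ... pods took") correspond to the delete call itself and the subsequent wait for dependents.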
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 14:06:39.357: INFO: Number of nodes with available pods: 0 Apr 27 14:06:39.358: INFO: Node iruya-worker is running more than one daemon pod Apr 27 14:06:40.399: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 14:06:40.403: INFO: Number of nodes with available pods: 0 Apr 27 14:06:40.403: INFO: Node iruya-worker is running more than one daemon pod Apr 27 14:06:41.680: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 14:06:41.682: INFO: Number of nodes with available pods: 0 Apr 27 14:06:41.682: INFO: Node iruya-worker is running more than one daemon pod Apr 27 14:06:42.456: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 14:06:42.500: INFO: Number of nodes with available pods: 0 Apr 27 14:06:42.500: INFO: Node iruya-worker is running more than one daemon pod Apr 27 14:06:43.355: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 14:06:43.359: INFO: Number of nodes with available pods: 2 Apr 27 14:06:43.359: INFO: Number of running nodes: 2, number of available pods: 2 Apr 27 14:06:43.359: INFO: Update the DaemonSet to trigger a rollout Apr 27 14:06:43.367: INFO: Updating DaemonSet daemon-set Apr 27 14:06:48.467: INFO: Roll back the DaemonSet before rollout is complete Apr 27 14:06:48.472: INFO: Updating DaemonSet daemon-set Apr 27 14:06:48.472: INFO: Make sure DaemonSet rollback is complete Apr 27 14:06:48.490: INFO: Wrong image for pod: 
daemon-set-fbf98. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Apr 27 14:06:48.490: INFO: Pod daemon-set-fbf98 is not available Apr 27 14:06:48.566: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 14:06:49.569: INFO: Wrong image for pod: daemon-set-fbf98. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Apr 27 14:06:49.570: INFO: Pod daemon-set-fbf98 is not available Apr 27 14:06:49.574: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 27 14:06:50.787: INFO: Pod daemon-set-lhg7p is not available Apr 27 14:06:50.815: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5212, will wait for the garbage collector to delete the pods Apr 27 14:06:51.337: INFO: Deleting DaemonSet.extensions daemon-set took: 6.775008ms Apr 27 14:06:51.737: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.235151ms Apr 27 14:07:02.277: INFO: Number of nodes with available pods: 0 Apr 27 14:07:02.277: INFO: Number of running nodes: 0, number of available pods: 0 Apr 27 14:07:02.279: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5212/daemonsets","resourceVersion":"7726585"},"items":null} Apr 27 14:07:02.281: INFO: pods: 
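The rollback check above compares each daemon pod's image against the expected one and reports any pod still running the deliberately broken image (`foo:non-existent`). A sketch of that comparison, reproducing the log's message format (the pod-to-image map here is illustrative, not the framework's real data structure):

```python
def pods_with_wrong_image(pod_images, expected):
    """Return log-style messages for pods not yet back on the expected image."""
    return [
        f"Wrong image for pod: {name}. Expected: {expected}, got: {image}."
        for name, image in sorted(pod_images.items())
        if image != expected
    ]

msgs = pods_with_wrong_image(
    {
        "daemon-set-fbf98": "foo:non-existent",
        "daemon-set-lhg7p": "docker.io/library/nginx:1.14-alpine",
    },
    expected="docker.io/library/nginx:1.14-alpine",
)
```

Once this list is empty and every daemon pod is available again, the rollback is considered complete.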
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5212/pods","resourceVersion":"7726585"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:07:02.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5212" for this suite. Apr 27 14:07:10.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:07:10.387: INFO: namespace daemonsets-5212 deletion completed in 8.097544844s • [SLOW TEST:32.338 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:07:10.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-9fc99258-36b6-40b9-affd-27b5276438d9 STEP: Creating a pod to test consume secrets Apr 27 14:07:10.568: INFO: Waiting up to 5m0s for pod "pod-secrets-b03a78ea-5725-4854-b9f5-ebbb5f88b58c" in 
namespace "secrets-2292" to be "success or failure" Apr 27 14:07:10.631: INFO: Pod "pod-secrets-b03a78ea-5725-4854-b9f5-ebbb5f88b58c": Phase="Pending", Reason="", readiness=false. Elapsed: 63.044264ms Apr 27 14:07:12.635: INFO: Pod "pod-secrets-b03a78ea-5725-4854-b9f5-ebbb5f88b58c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067583363s Apr 27 14:07:14.639: INFO: Pod "pod-secrets-b03a78ea-5725-4854-b9f5-ebbb5f88b58c": Phase="Running", Reason="", readiness=true. Elapsed: 4.071285921s Apr 27 14:07:16.643: INFO: Pod "pod-secrets-b03a78ea-5725-4854-b9f5-ebbb5f88b58c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.075343461s STEP: Saw pod success Apr 27 14:07:16.643: INFO: Pod "pod-secrets-b03a78ea-5725-4854-b9f5-ebbb5f88b58c" satisfied condition "success or failure" Apr 27 14:07:16.646: INFO: Trying to get logs from node iruya-worker pod pod-secrets-b03a78ea-5725-4854-b9f5-ebbb5f88b58c container secret-env-test: STEP: delete the pod Apr 27 14:07:16.704: INFO: Waiting for pod pod-secrets-b03a78ea-5725-4854-b9f5-ebbb5f88b58c to disappear Apr 27 14:07:16.739: INFO: Pod pod-secrets-b03a78ea-5725-4854-b9f5-ebbb5f88b58c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:07:16.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2292" for this suite. 
Apr 27 14:07:22.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:07:22.871: INFO: namespace secrets-2292 deletion completed in 6.127871201s • [SLOW TEST:12.485 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:07:22.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-2fa3333b-c6e3-4910-a479-758af454f83e STEP: Creating a pod to test consume configMaps Apr 27 14:07:23.180: INFO: Waiting up to 5m0s for pod "pod-configmaps-7705d501-b19c-42ae-9d5c-b21e9315870b" in namespace "configmap-3605" to be "success or failure" Apr 27 14:07:23.242: INFO: Pod "pod-configmaps-7705d501-b19c-42ae-9d5c-b21e9315870b": Phase="Pending", Reason="", readiness=false. Elapsed: 61.781744ms Apr 27 14:07:25.246: INFO: Pod "pod-configmaps-7705d501-b19c-42ae-9d5c-b21e9315870b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.066109566s Apr 27 14:07:27.355: INFO: Pod "pod-configmaps-7705d501-b19c-42ae-9d5c-b21e9315870b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175164943s Apr 27 14:07:29.359: INFO: Pod "pod-configmaps-7705d501-b19c-42ae-9d5c-b21e9315870b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.179461551s STEP: Saw pod success Apr 27 14:07:29.359: INFO: Pod "pod-configmaps-7705d501-b19c-42ae-9d5c-b21e9315870b" satisfied condition "success or failure" Apr 27 14:07:29.364: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-7705d501-b19c-42ae-9d5c-b21e9315870b container configmap-volume-test: STEP: delete the pod Apr 27 14:07:29.537: INFO: Waiting for pod pod-configmaps-7705d501-b19c-42ae-9d5c-b21e9315870b to disappear Apr 27 14:07:29.629: INFO: Pod pod-configmaps-7705d501-b19c-42ae-9d5c-b21e9315870b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:07:29.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3605" for this suite. 
Apr 27 14:07:35.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:07:35.878: INFO: namespace configmap-3605 deletion completed in 6.245745979s • [SLOW TEST:13.007 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:07:35.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 27 14:07:36.015: INFO: PodSpec: initContainers in spec.initContainers Apr 27 14:08:30.178: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ea9d6a82-5254-448d-bee7-64ac18af43a4", GenerateName:"", Namespace:"init-container-59", 
SelfLink:"/api/v1/namespaces/init-container-59/pods/pod-init-ea9d6a82-5254-448d-bee7-64ac18af43a4", UID:"9f065ee3-d9d7-47e7-a45e-0802fcaed91f", ResourceVersion:"7726865", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723593256, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"15164911"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-tkrgl", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0027de500), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tkrgl", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tkrgl", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tkrgl", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002baf138), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002c91440), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc002baf1d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002baf1f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002baf1f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002baf1fc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723593256, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723593256, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723593256, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723593256, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"10.244.1.97", StartTime:(*v1.Time)(0xc002f3e500), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002c76bd0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002c76c40)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://90fb3a4fcd731eece988a76e4f19d9d80dcba4ac2b4507b885c57849e6d66621"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002f3ec00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002f3ebe0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:08:30.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-59" for this suite.
Apr 27 14:08:52.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:08:52.437: INFO: namespace init-container-59 deletion completed in 22.212174803s

• [SLOW TEST:76.558 seconds]
[k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:08:52.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:08:53.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3824" for this suite.
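[Editor's note] The "Pods Set QOS Class" test above checks that a submitted pod gets a QOS class assigned. The earlier pod dump in this log shows `QOSClass:"Guaranteed"`, which is the class the kubelet assigns when every container's resource requests equal its limits (the dump shows cpu 100m and memory 52428800 for both). A minimal illustrative manifest of that shape; the pod name is a hypothetical placeholder, not the exact spec the test framework generated:

```yaml
# Illustrative sketch only: requests == limits for every container
# yields QOSClass: Guaranteed in the pod's status.
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-example   # hypothetical name
spec:
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 100m
        memory: 52428800
      limits:
        cpu: 100m
        memory: 52428800
```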
Apr 27 14:10:55.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:10:55.763: INFO: namespace pods-3824 deletion completed in 2m2.160589021s

• [SLOW TEST:123.326 seconds]
[k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:10:55.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 27 14:10:55.928: INFO: Waiting up to 5m0s for pod "downward-api-f4b88a90-f60f-4798-8536-38a22b0aa935" in namespace "downward-api-2705" to be "success or failure"
Apr 27 14:10:55.957: INFO: Pod "downward-api-f4b88a90-f60f-4798-8536-38a22b0aa935": Phase="Pending", Reason="", readiness=false. Elapsed: 28.766156ms
Apr 27 14:10:58.131: INFO: Pod "downward-api-f4b88a90-f60f-4798-8536-38a22b0aa935": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20285247s
Apr 27 14:11:00.135: INFO: Pod "downward-api-f4b88a90-f60f-4798-8536-38a22b0aa935": Phase="Pending", Reason="", readiness=false. Elapsed: 4.206807333s
Apr 27 14:11:02.139: INFO: Pod "downward-api-f4b88a90-f60f-4798-8536-38a22b0aa935": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.211332637s
STEP: Saw pod success
Apr 27 14:11:02.139: INFO: Pod "downward-api-f4b88a90-f60f-4798-8536-38a22b0aa935" satisfied condition "success or failure"
Apr 27 14:11:02.143: INFO: Trying to get logs from node iruya-worker2 pod downward-api-f4b88a90-f60f-4798-8536-38a22b0aa935 container dapi-container:
STEP: delete the pod
Apr 27 14:11:02.252: INFO: Waiting for pod downward-api-f4b88a90-f60f-4798-8536-38a22b0aa935 to disappear
Apr 27 14:11:02.268: INFO: Pod downward-api-f4b88a90-f60f-4798-8536-38a22b0aa935 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:11:02.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2705" for this suite.
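[Editor's note] The Downward API test above verifies that a pod's name, namespace, and IP are injected as environment variables via `fieldRef`. A minimal illustrative manifest of the pattern being tested; the pod name, env var names, and command are hypothetical placeholders, not the spec the framework actually generated:

```yaml
# Illustrative sketch only: downward API fieldRef env vars,
# using the fieldPaths the test exercises.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env"]   # prints the injected variables
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```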
Apr 27 14:11:08.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:11:08.574: INFO: namespace downward-api-2705 deletion completed in 6.301960301s

• [SLOW TEST:12.810 seconds]
[sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:11:08.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 27 14:11:08.749: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1489af3c-ce91-49cb-ba3e-e67b6b3b004e" in namespace "projected-134" to be "success or failure"
Apr 27 14:11:08.831: INFO: Pod "downwardapi-volume-1489af3c-ce91-49cb-ba3e-e67b6b3b004e": Phase="Pending", Reason="", readiness=false. Elapsed: 82.064266ms
Apr 27 14:11:10.868: INFO: Pod "downwardapi-volume-1489af3c-ce91-49cb-ba3e-e67b6b3b004e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118232813s
Apr 27 14:11:12.872: INFO: Pod "downwardapi-volume-1489af3c-ce91-49cb-ba3e-e67b6b3b004e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122462402s
Apr 27 14:11:14.884: INFO: Pod "downwardapi-volume-1489af3c-ce91-49cb-ba3e-e67b6b3b004e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.135133562s
STEP: Saw pod success
Apr 27 14:11:14.885: INFO: Pod "downwardapi-volume-1489af3c-ce91-49cb-ba3e-e67b6b3b004e" satisfied condition "success or failure"
Apr 27 14:11:14.886: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-1489af3c-ce91-49cb-ba3e-e67b6b3b004e container client-container:
STEP: delete the pod
Apr 27 14:11:14.959: INFO: Waiting for pod downwardapi-volume-1489af3c-ce91-49cb-ba3e-e67b6b3b004e to disappear
Apr 27 14:11:14.975: INFO: Pod downwardapi-volume-1489af3c-ce91-49cb-ba3e-e67b6b3b004e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:11:14.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-134" for this suite.
Apr 27 14:11:21.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:11:21.284: INFO: namespace projected-134 deletion completed in 6.305941965s

• [SLOW TEST:12.710 seconds]
[sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:11:21.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-a058abc5-1eca-4afb-8b23-10410c89edbc
STEP: Creating a pod to test consume secrets
Apr 27 14:11:21.450: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-06fe62de-ba7f-4509-8136-6c1b6285e4fb" in namespace "projected-5825" to be "success or failure"
Apr 27 14:11:21.474: INFO: Pod "pod-projected-secrets-06fe62de-ba7f-4509-8136-6c1b6285e4fb": Phase="Pending", Reason="", readiness=false. Elapsed: 24.217546ms
Apr 27 14:11:23.478: INFO: Pod "pod-projected-secrets-06fe62de-ba7f-4509-8136-6c1b6285e4fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028411642s
Apr 27 14:11:25.484: INFO: Pod "pod-projected-secrets-06fe62de-ba7f-4509-8136-6c1b6285e4fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034130281s
Apr 27 14:11:27.488: INFO: Pod "pod-projected-secrets-06fe62de-ba7f-4509-8136-6c1b6285e4fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037948129s
STEP: Saw pod success
Apr 27 14:11:27.488: INFO: Pod "pod-projected-secrets-06fe62de-ba7f-4509-8136-6c1b6285e4fb" satisfied condition "success or failure"
Apr 27 14:11:27.491: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-06fe62de-ba7f-4509-8136-6c1b6285e4fb container projected-secret-volume-test:
STEP: delete the pod
Apr 27 14:11:27.524: INFO: Waiting for pod pod-projected-secrets-06fe62de-ba7f-4509-8136-6c1b6285e4fb to disappear
Apr 27 14:11:27.622: INFO: Pod pod-projected-secrets-06fe62de-ba7f-4509-8136-6c1b6285e4fb no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:11:27.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5825" for this suite.
Apr 27 14:11:33.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:11:33.757: INFO: namespace projected-5825 deletion completed in 6.13054148s

• [SLOW TEST:12.473 seconds]
[sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:11:33.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-9ec1bca5-95f9-449d-982d-2273cf164570
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:11:39.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6699" for this suite.
Apr 27 14:12:02.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:12:02.097: INFO: namespace configmap-6699 deletion completed in 22.096340182s

• [SLOW TEST:28.339 seconds]
[sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:12:02.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-106a07a2-07b0-4b87-abff-4ed44c59c72f
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-106a07a2-07b0-4b87-abff-4ed44c59c72f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:12:10.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4792" for this suite.
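[Editor's note] The ConfigMap update test above mounts a ConfigMap as a volume, patches the ConfigMap object, and waits for the new value to appear in the mounted file (the kubelet syncs the atomic-writer volume eventually). A minimal illustrative manifest of that setup; the pod name, mount path, file key, and command are hypothetical placeholders, though the ConfigMap name is the one the log shows:

```yaml
# Illustrative sketch only: a ConfigMap mounted as a volume; edits to
# the ConfigMap object are eventually reflected in the mounted files.
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-example   # hypothetical name
spec:
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # hypothetical command: repeatedly print the mounted key
    command: ["sh", "-c", "while true; do cat /etc/config/*; sleep 1; done"]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: configmap-test-upd-106a07a2-07b0-4b87-abff-4ed44c59c72f
```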
Apr 27 14:12:32.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:12:32.382: INFO: namespace configmap-4792 deletion completed in 22.094324625s

• [SLOW TEST:30.284 seconds]
[sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:12:32.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 27 14:12:32.452: INFO: Waiting up to 5m0s for pod "downwardapi-volume-89594f92-3f96-41a6-bd9c-b7f7cae848f8" in namespace "projected-970" to be "success or failure"
Apr 27 14:12:32.456: INFO: Pod "downwardapi-volume-89594f92-3f96-41a6-bd9c-b7f7cae848f8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.769011ms
Apr 27 14:12:34.459: INFO: Pod "downwardapi-volume-89594f92-3f96-41a6-bd9c-b7f7cae848f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007035924s
Apr 27 14:12:36.464: INFO: Pod "downwardapi-volume-89594f92-3f96-41a6-bd9c-b7f7cae848f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011690349s
STEP: Saw pod success
Apr 27 14:12:36.464: INFO: Pod "downwardapi-volume-89594f92-3f96-41a6-bd9c-b7f7cae848f8" satisfied condition "success or failure"
Apr 27 14:12:36.467: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-89594f92-3f96-41a6-bd9c-b7f7cae848f8 container client-container:
STEP: delete the pod
Apr 27 14:12:36.487: INFO: Waiting for pod downwardapi-volume-89594f92-3f96-41a6-bd9c-b7f7cae848f8 to disappear
Apr 27 14:12:36.491: INFO: Pod downwardapi-volume-89594f92-3f96-41a6-bd9c-b7f7cae848f8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:12:36.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-970" for this suite.
Apr 27 14:12:42.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:12:42.585: INFO: namespace projected-970 deletion completed in 6.09077752s

• [SLOW TEST:10.203 seconds]
[sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:12:42.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Apr 27 14:12:42.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1991'
Apr 27 14:12:45.550: INFO: stderr: ""
Apr 27 14:12:45.550: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Apr 27 14:12:46.554: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 14:12:46.554: INFO: Found 0 / 1
Apr 27 14:12:47.555: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 14:12:47.555: INFO: Found 0 / 1
Apr 27 14:12:48.555: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 14:12:48.555: INFO: Found 0 / 1
Apr 27 14:12:49.554: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 14:12:49.554: INFO: Found 1 / 1
Apr 27 14:12:49.554: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 27 14:12:49.556: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 14:12:49.556: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
Apr 27 14:12:49.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-f7p9s redis-master --namespace=kubectl-1991'
Apr 27 14:12:49.675: INFO: stderr: ""
Apr 27 14:12:49.675: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 27 Apr 14:12:48.208 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Apr 14:12:48.208 # Server started, Redis version 3.2.12\n1:M 27 Apr 14:12:48.208 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 27 Apr 14:12:48.209 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Apr 27 14:12:49.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-f7p9s redis-master --namespace=kubectl-1991 --tail=1'
Apr 27 14:12:49.794: INFO: stderr: ""
Apr 27 14:12:49.794: INFO: stdout: "1:M 27 Apr 14:12:48.209 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Apr 27 14:12:49.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-f7p9s redis-master --namespace=kubectl-1991 --limit-bytes=1'
Apr 27 14:12:49.899: INFO: stderr: ""
Apr 27 14:12:49.899: INFO: stdout: " "
STEP: exposing timestamps
Apr 27 14:12:49.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-f7p9s redis-master --namespace=kubectl-1991 --tail=1 --timestamps'
Apr 27 14:12:50.005: INFO: stderr: ""
Apr 27 14:12:50.005: INFO: stdout: "2020-04-27T14:12:48.209468468Z 1:M 27 Apr 14:12:48.209 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Apr 27 14:12:52.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-f7p9s redis-master --namespace=kubectl-1991 --since=1s'
Apr 27 14:12:52.628: INFO: stderr: ""
Apr 27 14:12:52.628: INFO: stdout: ""
Apr 27 14:12:52.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-f7p9s redis-master --namespace=kubectl-1991 --since=24h'
Apr 27 14:12:52.728: INFO: stderr: ""
Apr 27 14:12:52.728: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 27 Apr 14:12:48.208 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Apr 14:12:48.208 # Server started, Redis version 3.2.12\n1:M 27 Apr 14:12:48.208 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 27 Apr 14:12:48.209 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Apr 27 14:12:52.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1991'
Apr 27 14:12:52.845: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 27 14:12:52.845: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Apr 27 14:12:52.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-1991'
Apr 27 14:12:52.952: INFO: stderr: "No resources found.\n"
Apr 27 14:12:52.952: INFO: stdout: ""
Apr 27 14:12:52.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-1991 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 27 14:12:53.066: INFO: stderr: ""
Apr 27 14:12:53.066: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:12:53.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1991" for this suite.
Apr 27 14:13:15.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:13:15.169: INFO: namespace kubectl-1991 deletion completed in 22.099461102s
• [SLOW TEST:32.583 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:13:15.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-qdcs
STEP: Creating a pod to test atomic-volume-subpath
Apr 27 14:13:15.268: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-qdcs" in namespace "subpath-2801" to be "success or failure"
Apr 27 14:13:15.288: INFO: Pod "pod-subpath-test-downwardapi-qdcs": Phase="Pending", Reason="", readiness=false. Elapsed: 20.788758ms
Apr 27 14:13:17.294: INFO: Pod "pod-subpath-test-downwardapi-qdcs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026015978s
Apr 27 14:13:19.298: INFO: Pod "pod-subpath-test-downwardapi-qdcs": Phase="Running", Reason="", readiness=true. Elapsed: 4.03038846s
Apr 27 14:13:21.303: INFO: Pod "pod-subpath-test-downwardapi-qdcs": Phase="Running", Reason="", readiness=true. Elapsed: 6.03498567s
Apr 27 14:13:23.307: INFO: Pod "pod-subpath-test-downwardapi-qdcs": Phase="Running", Reason="", readiness=true. Elapsed: 8.039597946s
Apr 27 14:13:25.312: INFO: Pod "pod-subpath-test-downwardapi-qdcs": Phase="Running", Reason="", readiness=true. Elapsed: 10.044193313s
Apr 27 14:13:27.316: INFO: Pod "pod-subpath-test-downwardapi-qdcs": Phase="Running", Reason="", readiness=true. Elapsed: 12.048762079s
Apr 27 14:13:29.321: INFO: Pod "pod-subpath-test-downwardapi-qdcs": Phase="Running", Reason="", readiness=true. Elapsed: 14.05336782s
Apr 27 14:13:31.326: INFO: Pod "pod-subpath-test-downwardapi-qdcs": Phase="Running", Reason="", readiness=true. Elapsed: 16.058166364s
Apr 27 14:13:33.330: INFO: Pod "pod-subpath-test-downwardapi-qdcs": Phase="Running", Reason="", readiness=true. Elapsed: 18.062457377s
Apr 27 14:13:35.334: INFO: Pod "pod-subpath-test-downwardapi-qdcs": Phase="Running", Reason="", readiness=true. Elapsed: 20.066369959s
Apr 27 14:13:37.338: INFO: Pod "pod-subpath-test-downwardapi-qdcs": Phase="Running", Reason="", readiness=true. Elapsed: 22.070521825s
Apr 27 14:13:39.343: INFO: Pod "pod-subpath-test-downwardapi-qdcs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.074902334s
STEP: Saw pod success
Apr 27 14:13:39.343: INFO: Pod "pod-subpath-test-downwardapi-qdcs" satisfied condition "success or failure"
Apr 27 14:13:39.345: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-qdcs container test-container-subpath-downwardapi-qdcs:
STEP: delete the pod
Apr 27 14:13:39.369: INFO: Waiting for pod pod-subpath-test-downwardapi-qdcs to disappear
Apr 27 14:13:39.432: INFO: Pod pod-subpath-test-downwardapi-qdcs no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-qdcs
Apr 27 14:13:39.432: INFO: Deleting pod "pod-subpath-test-downwardapi-qdcs" in namespace "subpath-2801"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:13:39.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2801" for this suite.
Apr 27 14:13:45.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:13:45.542: INFO: namespace subpath-2801 deletion completed in 6.096284464s
• [SLOW TEST:30.373 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:13:45.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Apr 27 14:13:45.619: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2648,SelfLink:/api/v1/namespaces/watch-2648/configmaps/e2e-watch-test-configmap-a,UID:cf78eb48-e2ef-4a6b-a9df-3842a33c34d1,ResourceVersion:7727732,Generation:0,CreationTimestamp:2020-04-27 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 27 14:13:45.619: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2648,SelfLink:/api/v1/namespaces/watch-2648/configmaps/e2e-watch-test-configmap-a,UID:cf78eb48-e2ef-4a6b-a9df-3842a33c34d1,ResourceVersion:7727732,Generation:0,CreationTimestamp:2020-04-27 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Apr 27 14:13:55.627: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2648,SelfLink:/api/v1/namespaces/watch-2648/configmaps/e2e-watch-test-configmap-a,UID:cf78eb48-e2ef-4a6b-a9df-3842a33c34d1,ResourceVersion:7727752,Generation:0,CreationTimestamp:2020-04-27 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Apr 27 14:13:55.627: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2648,SelfLink:/api/v1/namespaces/watch-2648/configmaps/e2e-watch-test-configmap-a,UID:cf78eb48-e2ef-4a6b-a9df-3842a33c34d1,ResourceVersion:7727752,Generation:0,CreationTimestamp:2020-04-27 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Apr 27 14:14:05.634: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2648,SelfLink:/api/v1/namespaces/watch-2648/configmaps/e2e-watch-test-configmap-a,UID:cf78eb48-e2ef-4a6b-a9df-3842a33c34d1,ResourceVersion:7727772,Generation:0,CreationTimestamp:2020-04-27 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 27 14:14:05.634: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2648,SelfLink:/api/v1/namespaces/watch-2648/configmaps/e2e-watch-test-configmap-a,UID:cf78eb48-e2ef-4a6b-a9df-3842a33c34d1,ResourceVersion:7727772,Generation:0,CreationTimestamp:2020-04-27 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Apr 27 14:14:15.642: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2648,SelfLink:/api/v1/namespaces/watch-2648/configmaps/e2e-watch-test-configmap-a,UID:cf78eb48-e2ef-4a6b-a9df-3842a33c34d1,ResourceVersion:7727793,Generation:0,CreationTimestamp:2020-04-27 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 27 14:14:15.642: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2648,SelfLink:/api/v1/namespaces/watch-2648/configmaps/e2e-watch-test-configmap-a,UID:cf78eb48-e2ef-4a6b-a9df-3842a33c34d1,ResourceVersion:7727793,Generation:0,CreationTimestamp:2020-04-27 14:13:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Apr 27 14:14:25.794: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2648,SelfLink:/api/v1/namespaces/watch-2648/configmaps/e2e-watch-test-configmap-b,UID:b9b296a3-1f18-49bb-ae25-f222bebffda9,ResourceVersion:7727814,Generation:0,CreationTimestamp:2020-04-27 14:14:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 27 14:14:25.794: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2648,SelfLink:/api/v1/namespaces/watch-2648/configmaps/e2e-watch-test-configmap-b,UID:b9b296a3-1f18-49bb-ae25-f222bebffda9,ResourceVersion:7727814,Generation:0,CreationTimestamp:2020-04-27 14:14:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Apr 27 14:14:35.801: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2648,SelfLink:/api/v1/namespaces/watch-2648/configmaps/e2e-watch-test-configmap-b,UID:b9b296a3-1f18-49bb-ae25-f222bebffda9,ResourceVersion:7727834,Generation:0,CreationTimestamp:2020-04-27 14:14:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 27 14:14:35.801: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2648,SelfLink:/api/v1/namespaces/watch-2648/configmaps/e2e-watch-test-configmap-b,UID:b9b296a3-1f18-49bb-ae25-f222bebffda9,ResourceVersion:7727834,Generation:0,CreationTimestamp:2020-04-27 14:14:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:14:45.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2648" for this suite.
Apr 27 14:14:51.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:14:51.901: INFO: namespace watch-2648 deletion completed in 6.09383604s
• [SLOW TEST:66.358 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:14:51.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Apr 27 14:14:51.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Apr 27 14:14:52.064: INFO: stderr: ""
Apr 27 14:14:52.064: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:14:52.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7639" for this suite.
Apr 27 14:14:58.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:14:58.201: INFO: namespace kubectl-7639 deletion completed in 6.133444048s
• [SLOW TEST:6.300 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl cluster-info
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:14:58.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0427 14:15:08.335398 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 27 14:15:08.335: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:15:08.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6535" for this suite.
Apr 27 14:15:14.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:15:14.430: INFO: namespace gc-6535 deletion completed in 6.09189238s
• [SLOW TEST:16.229 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:15:14.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-532ba6bd-8729-40a2-a38f-0faa395292a4
STEP: Creating a pod to test consume secrets
Apr 27 14:15:14.516: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-78eabf3f-a50b-4eb6-9317-a441a38b6f5e" in namespace "projected-8445" to be "success or failure"
Apr 27 14:15:14.555: INFO: Pod "pod-projected-secrets-78eabf3f-a50b-4eb6-9317-a441a38b6f5e": Phase="Pending", Reason="", readiness=false. Elapsed: 39.085467ms
Apr 27 14:15:16.559: INFO: Pod "pod-projected-secrets-78eabf3f-a50b-4eb6-9317-a441a38b6f5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043629279s
Apr 27 14:15:18.564: INFO: Pod "pod-projected-secrets-78eabf3f-a50b-4eb6-9317-a441a38b6f5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047815514s
STEP: Saw pod success
Apr 27 14:15:18.564: INFO: Pod "pod-projected-secrets-78eabf3f-a50b-4eb6-9317-a441a38b6f5e" satisfied condition "success or failure"
Apr 27 14:15:18.566: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-78eabf3f-a50b-4eb6-9317-a441a38b6f5e container projected-secret-volume-test:
STEP: delete the pod
Apr 27 14:15:18.617: INFO: Waiting for pod pod-projected-secrets-78eabf3f-a50b-4eb6-9317-a441a38b6f5e to disappear
Apr 27 14:15:18.627: INFO: Pod pod-projected-secrets-78eabf3f-a50b-4eb6-9317-a441a38b6f5e no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:15:18.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8445" for this suite.
Apr 27 14:15:24.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:15:24.720: INFO: namespace projected-8445 deletion completed in 6.089092073s
• [SLOW TEST:10.289 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:15:24.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Apr 27 14:15:24.852: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Apr 27 14:15:25.419: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Apr 27 14:15:28.007: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723593725, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723593725, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723593725, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723593725, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 27 14:15:30.745: INFO: Waited 719.999035ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:15:31.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-2736" for this suite.
Apr 27 14:15:37.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:15:37.377: INFO: namespace aggregator-2736 deletion completed in 6.183600639s
• [SLOW TEST:12.657 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:15:37.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 27 14:15:37.438: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cced99e1-5aa8-4b7e-ade2-21365b77b331" in namespace "projected-8387" to be "success or failure"
Apr 27 14:15:37.450: INFO: Pod "downwardapi-volume-cced99e1-5aa8-4b7e-ade2-21365b77b331": Phase="Pending", Reason="", readiness=false. Elapsed: 11.921695ms
Apr 27 14:15:39.454: INFO: Pod "downwardapi-volume-cced99e1-5aa8-4b7e-ade2-21365b77b331": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016086508s
Apr 27 14:15:41.459: INFO: Pod "downwardapi-volume-cced99e1-5aa8-4b7e-ade2-21365b77b331": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021036665s
STEP: Saw pod success
Apr 27 14:15:41.459: INFO: Pod "downwardapi-volume-cced99e1-5aa8-4b7e-ade2-21365b77b331" satisfied condition "success or failure"
Apr 27 14:15:41.462: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-cced99e1-5aa8-4b7e-ade2-21365b77b331 container client-container:
STEP: delete the pod
Apr 27 14:15:41.485: INFO: Waiting for pod downwardapi-volume-cced99e1-5aa8-4b7e-ade2-21365b77b331 to disappear
Apr 27 14:15:41.508: INFO: Pod downwardapi-volume-cced99e1-5aa8-4b7e-ade2-21365b77b331 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:15:41.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8387" for this suite.
Apr 27 14:15:47.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:15:47.641: INFO: namespace projected-8387 deletion completed in 6.129407137s
• [SLOW TEST:10.264 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:15:47.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 27 14:15:47.732: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fb158109-7e1f-48da-b62a-26b0c0d8a194" in namespace "projected-1372" to be "success or failure"
Apr 27 14:15:47.767: INFO: Pod "downwardapi-volume-fb158109-7e1f-48da-b62a-26b0c0d8a194": Phase="Pending", Reason="", readiness=false. Elapsed: 35.054205ms
Apr 27 14:15:49.793: INFO: Pod "downwardapi-volume-fb158109-7e1f-48da-b62a-26b0c0d8a194": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061278423s
Apr 27 14:15:51.797: INFO: Pod "downwardapi-volume-fb158109-7e1f-48da-b62a-26b0c0d8a194": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065706553s
STEP: Saw pod success
Apr 27 14:15:51.797: INFO: Pod "downwardapi-volume-fb158109-7e1f-48da-b62a-26b0c0d8a194" satisfied condition "success or failure"
Apr 27 14:15:51.800: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-fb158109-7e1f-48da-b62a-26b0c0d8a194 container client-container:
STEP: delete the pod
Apr 27 14:15:51.849: INFO: Waiting for pod downwardapi-volume-fb158109-7e1f-48da-b62a-26b0c0d8a194 to disappear
Apr 27 14:15:51.874: INFO: Pod downwardapi-volume-fb158109-7e1f-48da-b62a-26b0c0d8a194 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:15:51.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1372" for this suite.
Apr 27 14:15:57.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:15:57.956: INFO: namespace projected-1372 deletion completed in 6.078901621s
• [SLOW TEST:10.315 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:15:57.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Geting the pod
STEP: Reading file content from the nginx-container
Apr 27 14:16:02.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-7c52e5d8-6c70-4554-a3af-8b9445d8cefb -c busybox-main-container --namespace=emptydir-1506 -- cat /usr/share/volumeshare/shareddata.txt'
Apr 27 14:16:02.271: INFO: stderr: "I0427 14:16:02.184198 2550 log.go:172] (0xc000852840) (0xc0004b8be0) Create stream\nI0427 14:16:02.184247 2550 log.go:172] (0xc000852840) (0xc0004b8be0) Stream added, broadcasting: 1\nI0427 14:16:02.186976 2550 log.go:172] (0xc000852840) Reply frame received for 1\nI0427 14:16:02.187032 2550 log.go:172] (0xc000852840) (0xc000a40000) Create stream\nI0427 14:16:02.187050 2550 log.go:172] (0xc000852840) (0xc000a40000) Stream added, broadcasting: 3\nI0427 14:16:02.187947 2550 log.go:172] (0xc000852840) Reply frame received for 3\nI0427 14:16:02.187973 2550 log.go:172] (0xc000852840) (0xc0004b8c80) Create stream\nI0427 14:16:02.187979 2550 log.go:172] (0xc000852840) (0xc0004b8c80) Stream added, broadcasting: 5\nI0427 14:16:02.189233 2550 log.go:172] (0xc000852840) Reply frame received for 5\nI0427 14:16:02.263572 2550 log.go:172] (0xc000852840) Data frame received for 5\nI0427 14:16:02.263593 2550 log.go:172] (0xc0004b8c80) (5) Data frame handling\nI0427 14:16:02.263631 2550 log.go:172] (0xc000852840) Data frame received for 3\nI0427 14:16:02.263664 2550 log.go:172] (0xc000a40000) (3) Data frame handling\nI0427 14:16:02.263683 2550 log.go:172] (0xc000a40000) (3) Data frame sent\nI0427 14:16:02.263693 2550 log.go:172] (0xc000852840) Data frame received for 3\nI0427 14:16:02.263713 2550 log.go:172] (0xc000a40000) (3) Data frame handling\nI0427 14:16:02.265477 2550 log.go:172] (0xc000852840) Data frame received for 1\nI0427 14:16:02.265514 2550 log.go:172] (0xc0004b8be0) (1) Data frame handling\nI0427 14:16:02.265533 2550 log.go:172] (0xc0004b8be0) (1) Data frame sent\nI0427 14:16:02.265554 2550 log.go:172] (0xc000852840) (0xc0004b8be0) Stream removed, broadcasting: 1\nI0427 14:16:02.265605 2550 log.go:172] (0xc000852840) Go away received\nI0427 14:16:02.265954 2550 log.go:172] (0xc000852840) (0xc0004b8be0) Stream removed, broadcasting: 1\nI0427 14:16:02.265972 2550 log.go:172] (0xc000852840) (0xc000a40000) Stream removed, broadcasting: 3\nI0427 14:16:02.265982 2550 log.go:172] (0xc000852840) (0xc0004b8c80) Stream removed, broadcasting: 5\n"
Apr 27 14:16:02.271: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:16:02.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1506" for this suite.
Apr 27 14:16:08.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:16:08.368: INFO: namespace emptydir-1506 deletion completed in 6.093336764s
• [SLOW TEST:10.412 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:16:08.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4211
[It] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Apr 27 14:16:08.479: INFO: Found 0 stateful pods, waiting for 3 Apr 27 14:16:18.485: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 27 14:16:18.485: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 27 14:16:18.485: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Apr 27 14:16:28.507: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 27 14:16:28.507: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 27 14:16:28.507: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 27 14:16:28.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4211 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 27 14:16:28.843: INFO: stderr: "I0427 14:16:28.658700 2571 log.go:172] (0xc000116dc0) (0xc00043c6e0) Create stream\nI0427 14:16:28.658771 2571 log.go:172] (0xc000116dc0) (0xc00043c6e0) Stream added, broadcasting: 1\nI0427 14:16:28.661041 2571 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0427 14:16:28.661086 2571 log.go:172] (0xc000116dc0) (0xc000986000) Create stream\nI0427 14:16:28.661100 2571 log.go:172] (0xc000116dc0) (0xc000986000) Stream added, broadcasting: 3\nI0427 14:16:28.662075 2571 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0427 14:16:28.662106 2571 log.go:172] (0xc000116dc0) (0xc000690500) Create stream\nI0427 14:16:28.662118 2571 log.go:172] (0xc000116dc0) (0xc000690500) Stream added, broadcasting: 5\nI0427 14:16:28.662976 2571 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0427 14:16:28.762214 2571 log.go:172] (0xc000116dc0) Data frame received for 5\nI0427 
14:16:28.762238 2571 log.go:172] (0xc000690500) (5) Data frame handling\nI0427 14:16:28.762249 2571 log.go:172] (0xc000690500) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0427 14:16:28.835864 2571 log.go:172] (0xc000116dc0) Data frame received for 3\nI0427 14:16:28.835891 2571 log.go:172] (0xc000986000) (3) Data frame handling\nI0427 14:16:28.835910 2571 log.go:172] (0xc000986000) (3) Data frame sent\nI0427 14:16:28.835915 2571 log.go:172] (0xc000116dc0) Data frame received for 3\nI0427 14:16:28.835920 2571 log.go:172] (0xc000986000) (3) Data frame handling\nI0427 14:16:28.835940 2571 log.go:172] (0xc000116dc0) Data frame received for 5\nI0427 14:16:28.835946 2571 log.go:172] (0xc000690500) (5) Data frame handling\nI0427 14:16:28.838015 2571 log.go:172] (0xc000116dc0) Data frame received for 1\nI0427 14:16:28.838057 2571 log.go:172] (0xc00043c6e0) (1) Data frame handling\nI0427 14:16:28.838098 2571 log.go:172] (0xc00043c6e0) (1) Data frame sent\nI0427 14:16:28.838162 2571 log.go:172] (0xc000116dc0) (0xc00043c6e0) Stream removed, broadcasting: 1\nI0427 14:16:28.838309 2571 log.go:172] (0xc000116dc0) Go away received\nI0427 14:16:28.838728 2571 log.go:172] (0xc000116dc0) (0xc00043c6e0) Stream removed, broadcasting: 1\nI0427 14:16:28.838757 2571 log.go:172] (0xc000116dc0) (0xc000986000) Stream removed, broadcasting: 3\nI0427 14:16:28.838769 2571 log.go:172] (0xc000116dc0) (0xc000690500) Stream removed, broadcasting: 5\n" Apr 27 14:16:28.843: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 27 14:16:28.843: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Apr 27 14:16:38.875: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 27 14:16:48.953: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4211 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 27 14:16:49.196: INFO: stderr: "I0427 14:16:49.086206 2591 log.go:172] (0xc000a26370) (0xc000836640) Create stream\nI0427 14:16:49.086270 2591 log.go:172] (0xc000a26370) (0xc000836640) Stream added, broadcasting: 1\nI0427 14:16:49.089439 2591 log.go:172] (0xc000a26370) Reply frame received for 1\nI0427 14:16:49.089483 2591 log.go:172] (0xc000a26370) (0xc0008e0000) Create stream\nI0427 14:16:49.089496 2591 log.go:172] (0xc000a26370) (0xc0008e0000) Stream added, broadcasting: 3\nI0427 14:16:49.090662 2591 log.go:172] (0xc000a26370) Reply frame received for 3\nI0427 14:16:49.090704 2591 log.go:172] (0xc000a26370) (0xc0008366e0) Create stream\nI0427 14:16:49.090717 2591 log.go:172] (0xc000a26370) (0xc0008366e0) Stream added, broadcasting: 5\nI0427 14:16:49.091843 2591 log.go:172] (0xc000a26370) Reply frame received for 5\nI0427 14:16:49.188442 2591 log.go:172] (0xc000a26370) Data frame received for 3\nI0427 14:16:49.188494 2591 log.go:172] (0xc0008e0000) (3) Data frame handling\nI0427 14:16:49.188510 2591 log.go:172] (0xc0008e0000) (3) Data frame sent\nI0427 14:16:49.188520 2591 log.go:172] (0xc000a26370) Data frame received for 3\nI0427 14:16:49.188529 2591 log.go:172] (0xc0008e0000) (3) Data frame handling\nI0427 14:16:49.188577 2591 log.go:172] (0xc000a26370) Data frame received for 5\nI0427 14:16:49.188619 2591 log.go:172] (0xc0008366e0) (5) Data frame handling\nI0427 14:16:49.188640 2591 log.go:172] (0xc0008366e0) (5) Data frame sent\nI0427 14:16:49.188655 2591 log.go:172] (0xc000a26370) Data frame received for 5\nI0427 14:16:49.188662 2591 log.go:172] (0xc0008366e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0427 14:16:49.190542 2591 log.go:172] (0xc000a26370) Data frame received for 1\nI0427 14:16:49.190572 2591 log.go:172] (0xc000836640) (1) Data 
frame handling\nI0427 14:16:49.190586 2591 log.go:172] (0xc000836640) (1) Data frame sent\nI0427 14:16:49.190599 2591 log.go:172] (0xc000a26370) (0xc000836640) Stream removed, broadcasting: 1\nI0427 14:16:49.190687 2591 log.go:172] (0xc000a26370) Go away received\nI0427 14:16:49.190947 2591 log.go:172] (0xc000a26370) (0xc000836640) Stream removed, broadcasting: 1\nI0427 14:16:49.190967 2591 log.go:172] (0xc000a26370) (0xc0008e0000) Stream removed, broadcasting: 3\nI0427 14:16:49.190977 2591 log.go:172] (0xc000a26370) (0xc0008366e0) Stream removed, broadcasting: 5\n" Apr 27 14:16:49.196: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 27 14:16:49.196: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 27 14:16:59.215: INFO: Waiting for StatefulSet statefulset-4211/ss2 to complete update Apr 27 14:16:59.216: INFO: Waiting for Pod statefulset-4211/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 27 14:16:59.216: INFO: Waiting for Pod statefulset-4211/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 27 14:16:59.216: INFO: Waiting for Pod statefulset-4211/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 27 14:17:09.234: INFO: Waiting for StatefulSet statefulset-4211/ss2 to complete update Apr 27 14:17:09.234: INFO: Waiting for Pod statefulset-4211/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 27 14:17:09.234: INFO: Waiting for Pod statefulset-4211/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 27 14:17:19.224: INFO: Waiting for StatefulSet statefulset-4211/ss2 to complete update Apr 27 14:17:19.224: INFO: Waiting for Pod statefulset-4211/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Apr 27 14:17:29.229: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-4211 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 27 14:17:29.522: INFO: stderr: "I0427 14:17:29.365521 2612 log.go:172] (0xc00012afd0) (0xc0006a8960) Create stream\nI0427 14:17:29.365594 2612 log.go:172] (0xc00012afd0) (0xc0006a8960) Stream added, broadcasting: 1\nI0427 14:17:29.368843 2612 log.go:172] (0xc00012afd0) Reply frame received for 1\nI0427 14:17:29.368892 2612 log.go:172] (0xc00012afd0) (0xc0006a8a00) Create stream\nI0427 14:17:29.368919 2612 log.go:172] (0xc00012afd0) (0xc0006a8a00) Stream added, broadcasting: 3\nI0427 14:17:29.370394 2612 log.go:172] (0xc00012afd0) Reply frame received for 3\nI0427 14:17:29.370445 2612 log.go:172] (0xc00012afd0) (0xc0002e0140) Create stream\nI0427 14:17:29.370461 2612 log.go:172] (0xc00012afd0) (0xc0002e0140) Stream added, broadcasting: 5\nI0427 14:17:29.371548 2612 log.go:172] (0xc00012afd0) Reply frame received for 5\nI0427 14:17:29.477779 2612 log.go:172] (0xc00012afd0) Data frame received for 5\nI0427 14:17:29.477808 2612 log.go:172] (0xc0002e0140) (5) Data frame handling\nI0427 14:17:29.477833 2612 log.go:172] (0xc0002e0140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0427 14:17:29.515363 2612 log.go:172] (0xc00012afd0) Data frame received for 5\nI0427 14:17:29.515395 2612 log.go:172] (0xc0002e0140) (5) Data frame handling\nI0427 14:17:29.515442 2612 log.go:172] (0xc00012afd0) Data frame received for 3\nI0427 14:17:29.515503 2612 log.go:172] (0xc0006a8a00) (3) Data frame handling\nI0427 14:17:29.515582 2612 log.go:172] (0xc0006a8a00) (3) Data frame sent\nI0427 14:17:29.515605 2612 log.go:172] (0xc00012afd0) Data frame received for 3\nI0427 14:17:29.515646 2612 log.go:172] (0xc0006a8a00) (3) Data frame handling\nI0427 14:17:29.516373 2612 log.go:172] (0xc00012afd0) Data frame received for 1\nI0427 14:17:29.516392 2612 log.go:172] (0xc0006a8960) (1) Data frame handling\nI0427 14:17:29.516403 
2612 log.go:172] (0xc0006a8960) (1) Data frame sent\nI0427 14:17:29.516414 2612 log.go:172] (0xc00012afd0) (0xc0006a8960) Stream removed, broadcasting: 1\nI0427 14:17:29.516425 2612 log.go:172] (0xc00012afd0) Go away received\nI0427 14:17:29.517004 2612 log.go:172] (0xc00012afd0) (0xc0006a8960) Stream removed, broadcasting: 1\nI0427 14:17:29.517027 2612 log.go:172] (0xc00012afd0) (0xc0006a8a00) Stream removed, broadcasting: 3\nI0427 14:17:29.517039 2612 log.go:172] (0xc00012afd0) (0xc0002e0140) Stream removed, broadcasting: 5\n" Apr 27 14:17:29.522: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 27 14:17:29.522: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 27 14:17:39.553: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 27 14:17:49.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4211 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 27 14:17:49.823: INFO: stderr: "I0427 14:17:49.722574 2633 log.go:172] (0xc0009d82c0) (0xc0007966e0) Create stream\nI0427 14:17:49.722643 2633 log.go:172] (0xc0009d82c0) (0xc0007966e0) Stream added, broadcasting: 1\nI0427 14:17:49.724870 2633 log.go:172] (0xc0009d82c0) Reply frame received for 1\nI0427 14:17:49.724910 2633 log.go:172] (0xc0009d82c0) (0xc00029e320) Create stream\nI0427 14:17:49.724922 2633 log.go:172] (0xc0009d82c0) (0xc00029e320) Stream added, broadcasting: 3\nI0427 14:17:49.726359 2633 log.go:172] (0xc0009d82c0) Reply frame received for 3\nI0427 14:17:49.726427 2633 log.go:172] (0xc0009d82c0) (0xc00029a000) Create stream\nI0427 14:17:49.726453 2633 log.go:172] (0xc0009d82c0) (0xc00029a000) Stream added, broadcasting: 5\nI0427 14:17:49.727487 2633 log.go:172] (0xc0009d82c0) Reply frame received for 5\nI0427 14:17:49.817539 2633 log.go:172] (0xc0009d82c0) Data frame received 
for 5\nI0427 14:17:49.817564 2633 log.go:172] (0xc00029a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0427 14:17:49.817591 2633 log.go:172] (0xc0009d82c0) Data frame received for 3\nI0427 14:17:49.817632 2633 log.go:172] (0xc00029e320) (3) Data frame handling\nI0427 14:17:49.817657 2633 log.go:172] (0xc00029e320) (3) Data frame sent\nI0427 14:17:49.817693 2633 log.go:172] (0xc00029a000) (5) Data frame sent\nI0427 14:17:49.817709 2633 log.go:172] (0xc0009d82c0) Data frame received for 5\nI0427 14:17:49.817723 2633 log.go:172] (0xc00029a000) (5) Data frame handling\nI0427 14:17:49.817751 2633 log.go:172] (0xc0009d82c0) Data frame received for 3\nI0427 14:17:49.817771 2633 log.go:172] (0xc00029e320) (3) Data frame handling\nI0427 14:17:49.819198 2633 log.go:172] (0xc0009d82c0) Data frame received for 1\nI0427 14:17:49.819209 2633 log.go:172] (0xc0007966e0) (1) Data frame handling\nI0427 14:17:49.819215 2633 log.go:172] (0xc0007966e0) (1) Data frame sent\nI0427 14:17:49.819221 2633 log.go:172] (0xc0009d82c0) (0xc0007966e0) Stream removed, broadcasting: 1\nI0427 14:17:49.819228 2633 log.go:172] (0xc0009d82c0) Go away received\nI0427 14:17:49.819721 2633 log.go:172] (0xc0009d82c0) (0xc0007966e0) Stream removed, broadcasting: 1\nI0427 14:17:49.819748 2633 log.go:172] (0xc0009d82c0) (0xc00029e320) Stream removed, broadcasting: 3\nI0427 14:17:49.819760 2633 log.go:172] (0xc0009d82c0) (0xc00029a000) Stream removed, broadcasting: 5\n" Apr 27 14:17:49.824: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 27 14:17:49.824: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 27 14:17:59.843: INFO: Waiting for StatefulSet statefulset-4211/ss2 to complete update Apr 27 14:17:59.843: INFO: Waiting for Pod statefulset-4211/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Apr 27 14:17:59.843: INFO: Waiting for Pod 
statefulset-4211/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Apr 27 14:17:59.843: INFO: Waiting for Pod statefulset-4211/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Apr 27 14:18:09.852: INFO: Waiting for StatefulSet statefulset-4211/ss2 to complete update Apr 27 14:18:09.852: INFO: Waiting for Pod statefulset-4211/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Apr 27 14:18:19.852: INFO: Waiting for StatefulSet statefulset-4211/ss2 to complete update Apr 27 14:18:19.852: INFO: Waiting for Pod statefulset-4211/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 27 14:18:29.852: INFO: Deleting all statefulset in ns statefulset-4211 Apr 27 14:18:29.855: INFO: Scaling statefulset ss2 to 0 Apr 27 14:18:49.920: INFO: Waiting for statefulset status.replicas updated to 0 Apr 27 14:18:49.923: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:18:49.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4211" for this suite. 
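(The rolling update and rollback exercised above work against any StatefulSet with a RollingUpdate strategy. The ss2 manifest is generated by the e2e framework, so this is an illustrative sketch rather than the test's exact object:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test            # headless service "test" created separately in the namespace
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate        # pods are replaced in reverse ordinal order (ss2-2, ss2-1, ss2-0)
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

Changing spec.template.spec.containers[0].image to nginx:1.15-alpine produces the new controller revision seen in the log as ss2-7c9b54fd4c; reverting the image rolls the pods back to the previous revision, ss2-6c5cd755cd.)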
Apr 27 14:18:57.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:18:58.036: INFO: namespace statefulset-4211 deletion completed in 8.092952672s • [SLOW TEST:169.667 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:18:58.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:19:04.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-795" for this suite. Apr 27 14:19:10.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:19:10.455: INFO: namespace namespaces-795 deletion completed in 6.094619851s STEP: Destroying namespace "nsdeletetest-3839" for this suite. Apr 27 14:19:10.457: INFO: Namespace nsdeletetest-3839 was already deleted STEP: Destroying namespace "nsdeletetest-4457" for this suite. Apr 27 14:19:16.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:19:16.568: INFO: namespace nsdeletetest-4457 deletion completed in 6.111238607s • [SLOW TEST:18.532 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:19:16.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default 
service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0427 14:19:28.031334 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 27 14:19:28.031: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:19:28.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1771" for this suite. 
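(The garbage collector test above gives half of the dependent pods a second, still-valid owner, which is why they survive the foreground deletion of simpletest-rc-to-be-deleted. A sketch of such a pod's metadata — pod name and UIDs are illustrative:

```yaml
# Dependent pod with two owners: one being deleted, one valid.
apiVersion: v1
kind: Pod
metadata:
  name: simpletest-rc-to-be-deleted-abcde   # illustrative name
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted       # owner waiting for dependents to be deleted
    uid: 11111111-1111-1111-1111-111111111111
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay             # valid owner; keeps the pod from being collected
    uid: 22222222-2222-2222-2222-222222222222
```
)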
Apr 27 14:19:36.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:19:36.124: INFO: namespace gc-1771 deletion completed in 8.088695219s • [SLOW TEST:19.555 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:19:36.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Apr 27 14:19:42.768: INFO: 0 pods remaining Apr 27 14:19:42.768: INFO: 0 pods has nil DeletionTimestamp Apr 27 14:19:42.768: INFO: Apr 27 14:19:44.355: INFO: 0 pods remaining Apr 27 14:19:44.355: INFO: 0 pods has nil DeletionTimestamp Apr 27 14:19:44.355: INFO: STEP: Gathering metrics W0427 14:19:45.258507 6 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 27 14:19:45.258: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:19:45.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-251" for this suite. 
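(The "keep the rc around until all its pods are deleted" behaviour above is driven by the delete options on the RC deletion request. A sketch of the options body, assuming foreground propagation as the policy the test exercises:

```yaml
# DeleteOptions for the RC deletion. With Foreground propagation the RC
# gets a deletionTimestamp and the foregroundDeletion finalizer, and is
# only removed after the garbage collector has deleted all its pods.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground
```
)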
Apr 27 14:19:51.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:19:51.386: INFO: namespace gc-251 deletion completed in 6.125451066s • [SLOW TEST:15.262 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:19:51.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-1745 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-1745 STEP: Deleting pre-stop pod Apr 27 14:20:04.572: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:20:04.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-1745" for this suite. Apr 27 14:20:42.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:20:42.703: INFO: namespace prestop-1745 deletion completed in 38.113392324s • [SLOW TEST:51.316 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:20:42.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 27 14:20:42.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-987' Apr 27 14:20:42.862: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 27 14:20:42.862: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Apr 27 14:20:44.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-987' Apr 27 14:20:44.987: INFO: stderr: "" Apr 27 14:20:44.987: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:20:44.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-987" for this suite. 
Apr 27 14:20:51.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:20:51.164: INFO: namespace kubectl-987 deletion completed in 6.174414086s • [SLOW TEST:8.461 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:20:51.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Apr 27 14:20:55.809: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9546 pod-service-account-ac0a2577-414d-413a-a9b6-9f5321329562 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 27 14:20:56.045: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9546 pod-service-account-ac0a2577-414d-413a-a9b6-9f5321329562 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file 
in the container Apr 27 14:20:56.249: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9546 pod-service-account-ac0a2577-414d-413a-a9b6-9f5321329562 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:20:56.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9546" for this suite. Apr 27 14:21:02.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:21:02.546: INFO: namespace svcaccounts-9546 deletion completed in 6.096143876s • [SLOW TEST:11.381 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:21:02.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret 
with name secret-test-87942f7f-b175-4db2-a33c-f14cd46faa86 STEP: Creating a pod to test consume secrets Apr 27 14:21:02.735: INFO: Waiting up to 5m0s for pod "pod-secrets-b33bdb63-b1a6-4227-b5b6-026a77842702" in namespace "secrets-6307" to be "success or failure" Apr 27 14:21:02.786: INFO: Pod "pod-secrets-b33bdb63-b1a6-4227-b5b6-026a77842702": Phase="Pending", Reason="", readiness=false. Elapsed: 50.605128ms Apr 27 14:21:04.790: INFO: Pod "pod-secrets-b33bdb63-b1a6-4227-b5b6-026a77842702": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054867966s Apr 27 14:21:06.795: INFO: Pod "pod-secrets-b33bdb63-b1a6-4227-b5b6-026a77842702": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059541546s STEP: Saw pod success Apr 27 14:21:06.795: INFO: Pod "pod-secrets-b33bdb63-b1a6-4227-b5b6-026a77842702" satisfied condition "success or failure" Apr 27 14:21:06.798: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-b33bdb63-b1a6-4227-b5b6-026a77842702 container secret-volume-test: STEP: delete the pod Apr 27 14:21:06.819: INFO: Waiting for pod pod-secrets-b33bdb63-b1a6-4227-b5b6-026a77842702 to disappear Apr 27 14:21:06.837: INFO: Pod pod-secrets-b33bdb63-b1a6-4227-b5b6-026a77842702 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:21:06.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6307" for this suite. Apr 27 14:21:12.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:21:12.984: INFO: namespace secrets-6307 deletion completed in 6.144181114s STEP: Destroying namespace "secret-namespace-1126" for this suite. 
Apr 27 14:21:18.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:21:19.066: INFO: namespace secret-namespace-1126 deletion completed in 6.081928063s • [SLOW TEST:16.520 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:21:19.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:21:23.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubelet-test-5309" for this suite. Apr 27 14:21:29.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:21:29.277: INFO: namespace kubelet-test-5309 deletion completed in 6.12203824s • [SLOW TEST:10.210 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:21:29.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 27 14:21:29.400: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1260,SelfLink:/api/v1/namespaces/watch-1260/configmaps/e2e-watch-test-resource-version,UID:5eac1099-75af-4596-839a-a79b52652b9f,ResourceVersion:7729786,Generation:0,CreationTimestamp:2020-04-27 14:21:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 27 14:21:29.400: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1260,SelfLink:/api/v1/namespaces/watch-1260/configmaps/e2e-watch-test-resource-version,UID:5eac1099-75af-4596-839a-a79b52652b9f,ResourceVersion:7729787,Generation:0,CreationTimestamp:2020-04-27 14:21:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:21:29.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1260" for this suite. 
Apr 27 14:21:35.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:21:35.488: INFO: namespace watch-1260 deletion completed in 6.078406319s • [SLOW TEST:6.210 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:21:35.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-d1cb378f-cf80-48b6-bd6f-9e1281d5cdd9 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:21:35.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9151" for this suite. 
Apr 27 14:21:41.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:21:41.697: INFO: namespace configmap-9151 deletion completed in 6.093921972s • [SLOW TEST:6.209 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:21:41.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 27 14:21:41.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9163' Apr 27 14:21:41.898: INFO: stderr: "kubectl run --generator=job/v1 is 
DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 27 14:21:41.898: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Apr 27 14:21:41.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-9163' Apr 27 14:21:42.099: INFO: stderr: "" Apr 27 14:21:42.099: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:21:42.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9163" for this suite. Apr 27 14:21:48.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:21:48.241: INFO: namespace kubectl-9163 deletion completed in 6.139160443s • [SLOW TEST:6.543 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:21:48.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Apr 27 14:21:48.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2005 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Apr 27 14:21:51.211: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0427 14:21:51.120690 2799 log.go:172] (0xc000118b00) (0xc0007743c0) Create stream\nI0427 14:21:51.120747 2799 log.go:172] (0xc000118b00) (0xc0007743c0) Stream added, broadcasting: 1\nI0427 14:21:51.126273 2799 log.go:172] (0xc000118b00) Reply frame received for 1\nI0427 14:21:51.126339 2799 log.go:172] (0xc000118b00) (0xc000774000) Create stream\nI0427 14:21:51.126360 2799 log.go:172] (0xc000118b00) (0xc000774000) Stream added, broadcasting: 3\nI0427 14:21:51.127190 2799 log.go:172] (0xc000118b00) Reply frame received for 3\nI0427 14:21:51.127226 2799 log.go:172] (0xc000118b00) (0xc000014000) Create stream\nI0427 14:21:51.127241 2799 log.go:172] (0xc000118b00) (0xc000014000) Stream added, broadcasting: 5\nI0427 14:21:51.128067 2799 log.go:172] (0xc000118b00) Reply frame received for 5\nI0427 14:21:51.128109 2799 log.go:172] (0xc000118b00) (0xc000182000) Create stream\nI0427 14:21:51.128122 2799 log.go:172] (0xc000118b00) (0xc000182000) Stream added, broadcasting: 7\nI0427 14:21:51.129059 2799 log.go:172] (0xc000118b00) Reply frame received for 7\nI0427 14:21:51.129319 2799 log.go:172] (0xc000774000) (3) Writing data frame\nI0427 14:21:51.129470 2799 log.go:172] (0xc000774000) (3) Writing data frame\nI0427 14:21:51.130263 2799 log.go:172] (0xc000118b00) Data frame received for 5\nI0427 14:21:51.130293 2799 log.go:172] (0xc000014000) (5) Data frame handling\nI0427 14:21:51.130320 2799 log.go:172] (0xc000014000) (5) Data frame sent\nI0427 14:21:51.130806 2799 log.go:172] (0xc000118b00) Data frame received for 5\nI0427 14:21:51.130823 2799 log.go:172] (0xc000014000) (5) Data frame handling\nI0427 14:21:51.130837 2799 log.go:172] (0xc000014000) (5) Data frame sent\nI0427 14:21:51.175325 2799 log.go:172] (0xc000118b00) Data frame received for 7\nI0427 14:21:51.175373 2799 log.go:172] (0xc000182000) (7) Data frame handling\nI0427 14:21:51.175399 2799 
log.go:172] (0xc000118b00) Data frame received for 5\nI0427 14:21:51.175410 2799 log.go:172] (0xc000014000) (5) Data frame handling\nI0427 14:21:51.175706 2799 log.go:172] (0xc000118b00) Data frame received for 1\nI0427 14:21:51.175736 2799 log.go:172] (0xc0007743c0) (1) Data frame handling\nI0427 14:21:51.175764 2799 log.go:172] (0xc0007743c0) (1) Data frame sent\nI0427 14:21:51.175779 2799 log.go:172] (0xc000118b00) (0xc0007743c0) Stream removed, broadcasting: 1\nI0427 14:21:51.175902 2799 log.go:172] (0xc000118b00) (0xc0007743c0) Stream removed, broadcasting: 1\nI0427 14:21:51.175924 2799 log.go:172] (0xc000118b00) (0xc000774000) Stream removed, broadcasting: 3\nI0427 14:21:51.175957 2799 log.go:172] (0xc000118b00) (0xc000014000) Stream removed, broadcasting: 5\nI0427 14:21:51.176145 2799 log.go:172] (0xc000118b00) Go away received\nI0427 14:21:51.176186 2799 log.go:172] (0xc000118b00) (0xc000182000) Stream removed, broadcasting: 7\n" Apr 27 14:21:51.211: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:21:53.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2005" for this suite. 
Apr 27 14:22:03.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:22:03.324: INFO: namespace kubectl-2005 deletion completed in 10.095822249s • [SLOW TEST:15.082 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:22:03.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-638.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-638.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-638.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-638.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-638.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-638.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 27 14:22:09.449: INFO: DNS probes using dns-638/dns-test-ed1cfed4-868d-4f51-9f8a-99ec62638ba2 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:22:09.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-638" for this suite. 
Apr 27 14:22:15.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:22:15.678: INFO: namespace dns-638 deletion completed in 6.155449608s • [SLOW TEST:12.354 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:22:15.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-7710 I0427 14:22:15.718195 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7710, replica count: 1 I0427 14:22:16.768604 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0427 14:22:17.768801 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0427 14:22:18.768992 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady
I0427 14:22:19.769326 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 27 14:22:19.949: INFO: Created: latency-svc-kcfw5
Apr 27 14:22:19.953: INFO: Got endpoints: latency-svc-kcfw5 [83.74757ms]
Apr 27 14:22:20.007: INFO: Created: latency-svc-tfm57
Apr 27 14:22:20.022: INFO: Got endpoints: latency-svc-tfm57 [67.752892ms]
Apr 27 14:22:20.043: INFO: Created: latency-svc-gmh7z
Apr 27 14:22:20.086: INFO: Got endpoints: latency-svc-gmh7z [132.890515ms]
Apr 27 14:22:20.095: INFO: Created: latency-svc-9fgj7
Apr 27 14:22:20.115: INFO: Got endpoints: latency-svc-9fgj7 [160.949274ms]
Apr 27 14:22:20.155: INFO: Created: latency-svc-dhd2k
Apr 27 14:22:20.182: INFO: Got endpoints: latency-svc-dhd2k [227.415431ms]
Apr 27 14:22:20.242: INFO: Created: latency-svc-ddskx
Apr 27 14:22:20.260: INFO: Got endpoints: latency-svc-ddskx [304.30409ms]
Apr 27 14:22:20.314: INFO: Created: latency-svc-dkgmx
Apr 27 14:22:20.339: INFO: Got endpoints: latency-svc-dkgmx [383.407578ms]
Apr 27 14:22:20.386: INFO: Created: latency-svc-bjv44
Apr 27 14:22:20.392: INFO: Got endpoints: latency-svc-bjv44 [436.270475ms]
Apr 27 14:22:20.440: INFO: Created: latency-svc-tjp58
Apr 27 14:22:20.459: INFO: Got endpoints: latency-svc-tjp58 [502.298703ms]
Apr 27 14:22:20.524: INFO: Created: latency-svc-g4tv5
Apr 27 14:22:20.543: INFO: Got endpoints: latency-svc-g4tv5 [586.039781ms]
Apr 27 14:22:20.576: INFO: Created: latency-svc-dt6m4
Apr 27 14:22:20.591: INFO: Got endpoints: latency-svc-dt6m4 [636.042451ms]
Apr 27 14:22:20.617: INFO: Created: latency-svc-8t65m
Apr 27 14:22:20.663: INFO: Got endpoints: latency-svc-8t65m [708.136974ms]
Apr 27 14:22:20.697: INFO: Created: latency-svc-2d7qk
Apr 27 14:22:20.712: INFO: Got endpoints: latency-svc-2d7qk [755.764539ms]
Apr 27 14:22:20.739: INFO: Created: latency-svc-cwjdk
Apr 27 14:22:20.755: INFO: Got endpoints: latency-svc-cwjdk [798.548212ms]
Apr 27 14:22:20.818: INFO: Created: latency-svc-lfjkf
Apr 27 14:22:20.833: INFO: Got endpoints: latency-svc-lfjkf [877.819684ms]
Apr 27 14:22:20.857: INFO: Created: latency-svc-hk8xs
Apr 27 14:22:20.887: INFO: Got endpoints: latency-svc-hk8xs [929.725425ms]
Apr 27 14:22:20.956: INFO: Created: latency-svc-b986q
Apr 27 14:22:20.972: INFO: Got endpoints: latency-svc-b986q [949.91905ms]
Apr 27 14:22:20.998: INFO: Created: latency-svc-sdc4q
Apr 27 14:22:21.014: INFO: Got endpoints: latency-svc-sdc4q [927.66138ms]
Apr 27 14:22:21.050: INFO: Created: latency-svc-ldfd6
Apr 27 14:22:21.099: INFO: Got endpoints: latency-svc-ldfd6 [983.524884ms]
Apr 27 14:22:21.121: INFO: Created: latency-svc-mh5dg
Apr 27 14:22:21.147: INFO: Got endpoints: latency-svc-mh5dg [964.618447ms]
Apr 27 14:22:21.189: INFO: Created: latency-svc-nl6hm
Apr 27 14:22:21.236: INFO: Got endpoints: latency-svc-nl6hm [976.261762ms]
Apr 27 14:22:21.262: INFO: Created: latency-svc-zndjx
Apr 27 14:22:21.280: INFO: Got endpoints: latency-svc-zndjx [941.144753ms]
Apr 27 14:22:21.301: INFO: Created: latency-svc-v9qqw
Apr 27 14:22:21.316: INFO: Got endpoints: latency-svc-v9qqw [923.239695ms]
Apr 27 14:22:21.387: INFO: Created: latency-svc-87tq8
Apr 27 14:22:21.389: INFO: Got endpoints: latency-svc-87tq8 [930.703086ms]
Apr 27 14:22:21.429: INFO: Created: latency-svc-2rvtg
Apr 27 14:22:21.442: INFO: Got endpoints: latency-svc-2rvtg [899.158753ms]
Apr 27 14:22:21.465: INFO: Created: latency-svc-lr8st
Apr 27 14:22:21.479: INFO: Got endpoints: latency-svc-lr8st [887.594789ms]
Apr 27 14:22:21.534: INFO: Created: latency-svc-9hcsq
Apr 27 14:22:21.534: INFO: Got endpoints: latency-svc-9hcsq [871.415529ms]
Apr 27 14:22:21.571: INFO: Created: latency-svc-wp6dd
Apr 27 14:22:21.588: INFO: Got endpoints: latency-svc-wp6dd [876.517364ms]
Apr 27 14:22:21.619: INFO: Created: latency-svc-n2zmz
Apr 27 14:22:21.662: INFO: Got endpoints: latency-svc-n2zmz [906.743678ms]
Apr 27 14:22:21.698: INFO: Created: latency-svc-p7lpd
Apr 27 14:22:21.714: INFO: Got endpoints: latency-svc-p7lpd [881.602649ms]
Apr 27 14:22:21.799: INFO: Created: latency-svc-tsbt4
Apr 27 14:22:21.813: INFO: Got endpoints: latency-svc-tsbt4 [925.5183ms]
Apr 27 14:22:21.835: INFO: Created: latency-svc-mj8k4
Apr 27 14:22:21.847: INFO: Got endpoints: latency-svc-mj8k4 [875.663017ms]
Apr 27 14:22:21.873: INFO: Created: latency-svc-ns4p5
Apr 27 14:22:21.884: INFO: Got endpoints: latency-svc-ns4p5 [869.829955ms]
Apr 27 14:22:21.944: INFO: Created: latency-svc-jd54j
Apr 27 14:22:21.963: INFO: Got endpoints: latency-svc-jd54j [863.847156ms]
Apr 27 14:22:22.012: INFO: Created: latency-svc-j8tf2
Apr 27 14:22:22.040: INFO: Got endpoints: latency-svc-j8tf2 [893.784619ms]
Apr 27 14:22:22.099: INFO: Created: latency-svc-lxsq9
Apr 27 14:22:22.119: INFO: Got endpoints: latency-svc-lxsq9 [883.12864ms]
Apr 27 14:22:22.161: INFO: Created: latency-svc-2w427
Apr 27 14:22:22.218: INFO: Got endpoints: latency-svc-2w427 [938.400933ms]
Apr 27 14:22:22.254: INFO: Created: latency-svc-fgfff
Apr 27 14:22:22.285: INFO: Got endpoints: latency-svc-fgfff [969.254157ms]
Apr 27 14:22:22.368: INFO: Created: latency-svc-jl7lk
Apr 27 14:22:22.384: INFO: Got endpoints: latency-svc-jl7lk [994.353178ms]
Apr 27 14:22:22.419: INFO: Created: latency-svc-x47rg
Apr 27 14:22:22.439: INFO: Got endpoints: latency-svc-x47rg [996.476271ms]
Apr 27 14:22:22.462: INFO: Created: latency-svc-f8kj9
Apr 27 14:22:22.523: INFO: Got endpoints: latency-svc-f8kj9 [1.044628829s]
Apr 27 14:22:22.555: INFO: Created: latency-svc-rdjcr
Apr 27 14:22:22.570: INFO: Got endpoints: latency-svc-rdjcr [1.035964555s]
Apr 27 14:22:22.599: INFO: Created: latency-svc-nn8r8
Apr 27 14:22:22.661: INFO: Got endpoints: latency-svc-nn8r8 [1.072688139s]
Apr 27 14:22:22.689: INFO: Created: latency-svc-lmk9w
Apr 27 14:22:22.704: INFO: Got endpoints: latency-svc-lmk9w [1.041895836s]
Apr 27 14:22:22.725: INFO: Created: latency-svc-7cz42
Apr 27 14:22:22.740: INFO: Got endpoints: latency-svc-7cz42 [1.026051218s]
Apr 27 14:22:22.811: INFO: Created: latency-svc-7ktj6
Apr 27 14:22:22.818: INFO: Got endpoints: latency-svc-7ktj6 [1.005442273s]
Apr 27 14:22:22.843: INFO: Created: latency-svc-qj9mx
Apr 27 14:22:22.855: INFO: Got endpoints: latency-svc-qj9mx [1.007444287s]
Apr 27 14:22:22.881: INFO: Created: latency-svc-8mkzs
Apr 27 14:22:22.898: INFO: Got endpoints: latency-svc-8mkzs [1.013950747s]
Apr 27 14:22:22.967: INFO: Created: latency-svc-8sn29
Apr 27 14:22:22.976: INFO: Got endpoints: latency-svc-8sn29 [1.013159795s]
Apr 27 14:22:23.011: INFO: Created: latency-svc-55vdz
Apr 27 14:22:23.024: INFO: Got endpoints: latency-svc-55vdz [983.366228ms]
Apr 27 14:22:23.047: INFO: Created: latency-svc-29xnk
Apr 27 14:22:23.060: INFO: Got endpoints: latency-svc-29xnk [940.998883ms]
Apr 27 14:22:23.111: INFO: Created: latency-svc-vvxcp
Apr 27 14:22:23.121: INFO: Got endpoints: latency-svc-vvxcp [902.202746ms]
Apr 27 14:22:23.145: INFO: Created: latency-svc-cvg2w
Apr 27 14:22:23.163: INFO: Got endpoints: latency-svc-cvg2w [878.330311ms]
Apr 27 14:22:23.187: INFO: Created: latency-svc-zb7ff
Apr 27 14:22:23.206: INFO: Got endpoints: latency-svc-zb7ff [821.891172ms]
Apr 27 14:22:23.270: INFO: Created: latency-svc-dfbcs
Apr 27 14:22:23.284: INFO: Got endpoints: latency-svc-dfbcs [844.973602ms]
Apr 27 14:22:23.311: INFO: Created: latency-svc-rrxtv
Apr 27 14:22:23.336: INFO: Got endpoints: latency-svc-rrxtv [812.530967ms]
Apr 27 14:22:23.405: INFO: Created: latency-svc-6v4w8
Apr 27 14:22:23.407: INFO: Got endpoints: latency-svc-6v4w8 [836.408354ms]
Apr 27 14:22:23.445: INFO: Created: latency-svc-qps6g
Apr 27 14:22:23.459: INFO: Got endpoints: latency-svc-qps6g [797.99569ms]
Apr 27 14:22:23.479: INFO: Created: latency-svc-4kb2w
Apr 27 14:22:23.496: INFO: Got endpoints: latency-svc-4kb2w [792.106874ms]
Apr 27 14:22:23.548: INFO: Created: latency-svc-h4gdn
Apr 27 14:22:23.556: INFO: Got endpoints: latency-svc-h4gdn [815.615535ms]
Apr 27 14:22:23.582: INFO: Created: latency-svc-nrc4n
Apr 27 14:22:23.598: INFO: Got endpoints: latency-svc-nrc4n [779.940056ms]
Apr 27 14:22:23.624: INFO: Created: latency-svc-sx6wh
Apr 27 14:22:23.641: INFO: Got endpoints: latency-svc-sx6wh [785.754809ms]
Apr 27 14:22:23.685: INFO: Created: latency-svc-p6drt
Apr 27 14:22:23.718: INFO: Got endpoints: latency-svc-p6drt [820.174862ms]
Apr 27 14:22:23.754: INFO: Created: latency-svc-wrb7m
Apr 27 14:22:23.771: INFO: Got endpoints: latency-svc-wrb7m [794.868552ms]
Apr 27 14:22:23.818: INFO: Created: latency-svc-xk25m
Apr 27 14:22:23.821: INFO: Got endpoints: latency-svc-xk25m [797.143815ms]
Apr 27 14:22:23.876: INFO: Created: latency-svc-nqppg
Apr 27 14:22:23.891: INFO: Got endpoints: latency-svc-nqppg [830.855559ms]
Apr 27 14:22:23.957: INFO: Created: latency-svc-mv24m
Apr 27 14:22:23.982: INFO: Got endpoints: latency-svc-mv24m [861.218061ms]
Apr 27 14:22:24.026: INFO: Created: latency-svc-fddw7
Apr 27 14:22:24.042: INFO: Got endpoints: latency-svc-fddw7 [878.728432ms]
Apr 27 14:22:24.117: INFO: Created: latency-svc-h4vzh
Apr 27 14:22:24.120: INFO: Got endpoints: latency-svc-h4vzh [914.268917ms]
Apr 27 14:22:24.156: INFO: Created: latency-svc-c7cpd
Apr 27 14:22:24.180: INFO: Got endpoints: latency-svc-c7cpd [895.836345ms]
Apr 27 14:22:24.284: INFO: Created: latency-svc-w8tcz
Apr 27 14:22:24.295: INFO: Got endpoints: latency-svc-w8tcz [959.324589ms]
Apr 27 14:22:24.324: INFO: Created: latency-svc-9fv9r
Apr 27 14:22:24.338: INFO: Got endpoints: latency-svc-9fv9r [930.793599ms]
Apr 27 14:22:24.372: INFO: Created: latency-svc-gkrw7
Apr 27 14:22:24.439: INFO: Got endpoints: latency-svc-gkrw7 [980.328255ms]
Apr 27 14:22:24.442: INFO: Created: latency-svc-tb7vm
Apr 27 14:22:24.452: INFO: Got endpoints: latency-svc-tb7vm [956.015879ms]
Apr 27 14:22:24.482: INFO: Created: latency-svc-d42gm
Apr 27 14:22:24.494: INFO: Got endpoints: latency-svc-d42gm [938.262957ms]
Apr 27 14:22:24.523: INFO: Created: latency-svc-r4vsm
Apr 27 14:22:24.537: INFO: Got endpoints: latency-svc-r4vsm [938.752794ms]
Apr 27 14:22:24.590: INFO: Created: latency-svc-xfrd5
Apr 27 14:22:24.592: INFO: Got endpoints: latency-svc-xfrd5 [951.875873ms]
Apr 27 14:22:24.626: INFO: Created: latency-svc-hm27b
Apr 27 14:22:24.640: INFO: Got endpoints: latency-svc-hm27b [921.467155ms]
Apr 27 14:22:24.662: INFO: Created: latency-svc-fxnrg
Apr 27 14:22:24.676: INFO: Got endpoints: latency-svc-fxnrg [905.351262ms]
Apr 27 14:22:24.752: INFO: Created: latency-svc-j74wx
Apr 27 14:22:24.755: INFO: Got endpoints: latency-svc-j74wx [933.584182ms]
Apr 27 14:22:24.834: INFO: Created: latency-svc-k4hk9
Apr 27 14:22:24.913: INFO: Got endpoints: latency-svc-k4hk9 [1.02131161s]
Apr 27 14:22:24.916: INFO: Created: latency-svc-v6dkr
Apr 27 14:22:24.941: INFO: Got endpoints: latency-svc-v6dkr [959.480697ms]
Apr 27 14:22:24.980: INFO: Created: latency-svc-j9sjx
Apr 27 14:22:24.989: INFO: Got endpoints: latency-svc-j9sjx [947.204383ms]
Apr 27 14:22:25.051: INFO: Created: latency-svc-6rlf5
Apr 27 14:22:25.055: INFO: Got endpoints: latency-svc-6rlf5 [935.037965ms]
Apr 27 14:22:25.116: INFO: Created: latency-svc-xm6qn
Apr 27 14:22:25.134: INFO: Got endpoints: latency-svc-xm6qn [954.418334ms]
Apr 27 14:22:25.188: INFO: Created: latency-svc-bwlnw
Apr 27 14:22:25.200: INFO: Got endpoints: latency-svc-bwlnw [904.664474ms]
Apr 27 14:22:25.232: INFO: Created: latency-svc-nqq9l
Apr 27 14:22:25.243: INFO: Got endpoints: latency-svc-nqq9l [905.036112ms]
Apr 27 14:22:25.265: INFO: Created: latency-svc-862bl
Apr 27 14:22:25.314: INFO: Got endpoints: latency-svc-862bl [113.662358ms]
Apr 27 14:22:25.344: INFO: Created: latency-svc-mgxgk
Apr 27 14:22:25.357: INFO: Got endpoints: latency-svc-mgxgk [917.787283ms]
Apr 27 14:22:25.382: INFO: Created: latency-svc-jbvvk
Apr 27 14:22:25.400: INFO: Got endpoints: latency-svc-jbvvk [948.003356ms]
Apr 27 14:22:25.452: INFO: Created: latency-svc-n2kxs
Apr 27 14:22:25.461: INFO: Got endpoints: latency-svc-n2kxs [966.187721ms]
Apr 27 14:22:25.486: INFO: Created: latency-svc-k7fnt
Apr 27 14:22:25.497: INFO: Got endpoints: latency-svc-k7fnt [959.453965ms]
Apr 27 14:22:25.532: INFO: Created: latency-svc-l9p64
Apr 27 14:22:25.619: INFO: Created: latency-svc-fxzq9
Apr 27 14:22:25.620: INFO: Got endpoints: latency-svc-l9p64 [1.027506648s]
Apr 27 14:22:25.636: INFO: Got endpoints: latency-svc-fxzq9 [996.740226ms]
Apr 27 14:22:25.665: INFO: Created: latency-svc-gkt5c
Apr 27 14:22:25.688: INFO: Got endpoints: latency-svc-gkt5c [1.011571305s]
Apr 27 14:22:25.775: INFO: Created: latency-svc-hzsv2
Apr 27 14:22:25.780: INFO: Got endpoints: latency-svc-hzsv2 [1.025151637s]
Apr 27 14:22:25.806: INFO: Created: latency-svc-vhh6f
Apr 27 14:22:25.823: INFO: Got endpoints: latency-svc-vhh6f [910.006082ms]
Apr 27 14:22:25.847: INFO: Created: latency-svc-qcl2h
Apr 27 14:22:25.865: INFO: Got endpoints: latency-svc-qcl2h [923.5092ms]
Apr 27 14:22:25.910: INFO: Created: latency-svc-2q9jg
Apr 27 14:22:25.927: INFO: Got endpoints: latency-svc-2q9jg [937.227102ms]
Apr 27 14:22:25.952: INFO: Created: latency-svc-mv7wv
Apr 27 14:22:25.962: INFO: Got endpoints: latency-svc-mv7wv [906.41589ms]
Apr 27 14:22:26.063: INFO: Created: latency-svc-q4lrc
Apr 27 14:22:26.065: INFO: Got endpoints: latency-svc-q4lrc [930.760212ms]
Apr 27 14:22:26.120: INFO: Created: latency-svc-fg8sb
Apr 27 14:22:26.130: INFO: Got endpoints: latency-svc-fg8sb [887.053998ms]
Apr 27 14:22:26.156: INFO: Created: latency-svc-75v4t
Apr 27 14:22:26.200: INFO: Got endpoints: latency-svc-75v4t [886.614245ms]
Apr 27 14:22:26.226: INFO: Created: latency-svc-n2cmq
Apr 27 14:22:26.239: INFO: Got endpoints: latency-svc-n2cmq [881.402367ms]
Apr 27 14:22:26.274: INFO: Created: latency-svc-zrmvz
Apr 27 14:22:26.293: INFO: Got endpoints: latency-svc-zrmvz [893.468709ms]
Apr 27 14:22:26.350: INFO: Created: latency-svc-g4bgc
Apr 27 14:22:26.353: INFO: Got endpoints: latency-svc-g4bgc [892.753038ms]
Apr 27 14:22:26.390: INFO: Created: latency-svc-b8kqt
Apr 27 14:22:26.396: INFO: Got endpoints: latency-svc-b8kqt [899.306162ms]
Apr 27 14:22:26.430: INFO: Created: latency-svc-vclgz
Apr 27 14:22:26.438: INFO: Got endpoints: latency-svc-vclgz [818.348542ms]
Apr 27 14:22:26.494: INFO: Created: latency-svc-4m9ll
Apr 27 14:22:26.497: INFO: Got endpoints: latency-svc-4m9ll [860.700244ms]
Apr 27 14:22:26.546: INFO: Created: latency-svc-cf8vj
Apr 27 14:22:26.559: INFO: Got endpoints: latency-svc-cf8vj [871.36057ms]
Apr 27 14:22:26.589: INFO: Created: latency-svc-twqc2
Apr 27 14:22:26.631: INFO: Got endpoints: latency-svc-twqc2 [851.273489ms]
Apr 27 14:22:26.652: INFO: Created: latency-svc-mntk5
Apr 27 14:22:26.668: INFO: Got endpoints: latency-svc-mntk5 [845.468723ms]
Apr 27 14:22:26.694: INFO: Created: latency-svc-kz42l
Apr 27 14:22:26.711: INFO: Got endpoints: latency-svc-kz42l [845.719219ms]
Apr 27 14:22:26.775: INFO: Created: latency-svc-plhd4
Apr 27 14:22:26.778: INFO: Got endpoints: latency-svc-plhd4 [851.093373ms]
Apr 27 14:22:26.816: INFO: Created: latency-svc-gl2rt
Apr 27 14:22:26.826: INFO: Got endpoints: latency-svc-gl2rt [863.70272ms]
Apr 27 14:22:26.867: INFO: Created: latency-svc-bsf24
Apr 27 14:22:26.925: INFO: Got endpoints: latency-svc-bsf24 [859.404617ms]
Apr 27 14:22:26.940: INFO: Created: latency-svc-6vbcf
Apr 27 14:22:26.958: INFO: Got endpoints: latency-svc-6vbcf [828.004854ms]
Apr 27 14:22:26.996: INFO: Created: latency-svc-ljswg
Apr 27 14:22:27.012: INFO: Got endpoints: latency-svc-ljswg [811.795678ms]
Apr 27 14:22:27.051: INFO: Created: latency-svc-5rtbj
Apr 27 14:22:27.054: INFO: Got endpoints: latency-svc-5rtbj [815.270728ms]
Apr 27 14:22:27.083: INFO: Created: latency-svc-b2fs2
Apr 27 14:22:27.097: INFO: Got endpoints: latency-svc-b2fs2 [803.841735ms]
Apr 27 14:22:27.126: INFO: Created: latency-svc-5bnzg
Apr 27 14:22:27.144: INFO: Got endpoints: latency-svc-5bnzg [790.807355ms]
Apr 27 14:22:27.230: INFO: Created: latency-svc-bznqq
Apr 27 14:22:27.246: INFO: Got endpoints: latency-svc-bznqq [850.256118ms]
Apr 27 14:22:27.278: INFO: Created: latency-svc-ffm9n
Apr 27 14:22:27.295: INFO: Got endpoints: latency-svc-ffm9n [856.336551ms]
Apr 27 14:22:27.317: INFO: Created: latency-svc-89qcq
Apr 27 14:22:27.380: INFO: Got endpoints: latency-svc-89qcq [882.50119ms]
Apr 27 14:22:27.383: INFO: Created: latency-svc-dzdqj
Apr 27 14:22:27.403: INFO: Got endpoints: latency-svc-dzdqj [844.117036ms]
Apr 27 14:22:27.430: INFO: Created: latency-svc-2vprl
Apr 27 14:22:27.446: INFO: Got endpoints: latency-svc-2vprl [814.402018ms]
Apr 27 14:22:27.471: INFO: Created: latency-svc-sjrh6
Apr 27 14:22:27.547: INFO: Got endpoints: latency-svc-sjrh6 [879.284029ms]
Apr 27 14:22:27.554: INFO: Created: latency-svc-9q7qm
Apr 27 14:22:27.560: INFO: Got endpoints: latency-svc-9q7qm [849.354772ms]
Apr 27 14:22:27.606: INFO: Created: latency-svc-sv4cv
Apr 27 14:22:27.622: INFO: Got endpoints: latency-svc-sv4cv [843.828179ms]
Apr 27 14:22:27.643: INFO: Created: latency-svc-n65vq
Apr 27 14:22:27.697: INFO: Got endpoints: latency-svc-n65vq [871.795833ms]
Apr 27 14:22:27.709: INFO: Created: latency-svc-79rp6
Apr 27 14:22:27.723: INFO: Got endpoints: latency-svc-79rp6 [798.636108ms]
Apr 27 14:22:27.746: INFO: Created: latency-svc-8rwzg
Apr 27 14:22:27.779: INFO: Got endpoints: latency-svc-8rwzg [820.871381ms]
Apr 27 14:22:27.843: INFO: Created: latency-svc-dfrj4
Apr 27 14:22:27.845: INFO: Got endpoints: latency-svc-dfrj4 [833.074593ms]
Apr 27 14:22:27.905: INFO: Created: latency-svc-kbns6
Apr 27 14:22:27.917: INFO: Got endpoints: latency-svc-kbns6 [862.839026ms]
Apr 27 14:22:27.937: INFO: Created: latency-svc-8rzwn
Apr 27 14:22:27.985: INFO: Got endpoints: latency-svc-8rzwn [887.306002ms]
Apr 27 14:22:27.999: INFO: Created: latency-svc-xhbrj
Apr 27 14:22:28.014: INFO: Got endpoints: latency-svc-xhbrj [869.818154ms]
Apr 27 14:22:28.043: INFO: Created: latency-svc-rzwsm
Apr 27 14:22:28.062: INFO: Got endpoints: latency-svc-rzwsm [815.423114ms]
Apr 27 14:22:28.123: INFO: Created: latency-svc-5ptb2
Apr 27 14:22:28.134: INFO: Got endpoints: latency-svc-5ptb2 [839.16297ms]
Apr 27 14:22:28.159: INFO: Created: latency-svc-r2dwh
Apr 27 14:22:28.189: INFO: Got endpoints: latency-svc-r2dwh [809.53569ms]
Apr 27 14:22:28.290: INFO: Created: latency-svc-xlv2g
Apr 27 14:22:28.293: INFO: Got endpoints: latency-svc-xlv2g [889.950026ms]
Apr 27 14:22:28.370: INFO: Created: latency-svc-hfggw
Apr 27 14:22:28.381: INFO: Got endpoints: latency-svc-hfggw [935.473714ms]
Apr 27 14:22:28.458: INFO: Created: latency-svc-v2jm6
Apr 27 14:22:28.461: INFO: Got endpoints: latency-svc-v2jm6 [913.237384ms]
Apr 27 14:22:28.529: INFO: Created: latency-svc-8mkc5
Apr 27 14:22:28.544: INFO: Got endpoints: latency-svc-8mkc5 [983.452263ms]
Apr 27 14:22:28.602: INFO: Created: latency-svc-zmvgz
Apr 27 14:22:28.616: INFO: Got endpoints: latency-svc-zmvgz [994.265842ms]
Apr 27 14:22:28.637: INFO: Created: latency-svc-qtcdb
Apr 27 14:22:28.659: INFO: Got endpoints: latency-svc-qtcdb [961.215917ms]
Apr 27 14:22:28.681: INFO: Created: latency-svc-f4v5w
Apr 27 14:22:28.695: INFO: Got endpoints: latency-svc-f4v5w [971.545223ms]
Apr 27 14:22:28.747: INFO: Created: latency-svc-hx6t4
Apr 27 14:22:28.767: INFO: Got endpoints: latency-svc-hx6t4 [988.372777ms]
Apr 27 14:22:28.805: INFO: Created: latency-svc-rpnn5
Apr 27 14:22:28.828: INFO: Got endpoints: latency-svc-rpnn5 [982.300335ms]
Apr 27 14:22:28.877: INFO: Created: latency-svc-bdm5t
Apr 27 14:22:28.888: INFO: Got endpoints: latency-svc-bdm5t [971.210835ms]
Apr 27 14:22:28.921: INFO: Created: latency-svc-zfwdw
Apr 27 14:22:28.936: INFO: Got endpoints: latency-svc-zfwdw [951.740451ms]
Apr 27 14:22:28.969: INFO: Created: latency-svc-xth9j
Apr 27 14:22:29.032: INFO: Got endpoints: latency-svc-xth9j [1.018271734s]
Apr 27 14:22:29.063: INFO: Created: latency-svc-v7m67
Apr 27 14:22:29.081: INFO: Got endpoints: latency-svc-v7m67 [1.01954813s]
Apr 27 14:22:29.105: INFO: Created: latency-svc-f5vpz
Apr 27 14:22:29.123: INFO: Got endpoints: latency-svc-f5vpz [989.30657ms]
Apr 27 14:22:29.173: INFO: Created: latency-svc-pdm5t
Apr 27 14:22:29.190: INFO: Got endpoints: latency-svc-pdm5t [1.000630658s]
Apr 27 14:22:29.227: INFO: Created: latency-svc-bl72l
Apr 27 14:22:29.250: INFO: Got endpoints: latency-svc-bl72l [956.640701ms]
Apr 27 14:22:29.309: INFO: Created: latency-svc-hgrbw
Apr 27 14:22:29.317: INFO: Got endpoints: latency-svc-hgrbw [935.325002ms]
Apr 27 14:22:29.339: INFO: Created: latency-svc-4ltbg
Apr 27 14:22:29.357: INFO: Got endpoints: latency-svc-4ltbg [896.193859ms]
Apr 27 14:22:29.392: INFO: Created: latency-svc-w2wcj
Apr 27 14:22:29.402: INFO: Got endpoints: latency-svc-w2wcj [858.153063ms]
Apr 27 14:22:29.452: INFO: Created: latency-svc-vt2cl
Apr 27 14:22:29.462: INFO: Got endpoints: latency-svc-vt2cl [845.880904ms]
Apr 27 14:22:29.491: INFO: Created: latency-svc-gjtn2
Apr 27 14:22:29.510: INFO: Got endpoints: latency-svc-gjtn2 [851.475748ms]
Apr 27 14:22:29.531: INFO: Created: latency-svc-m8zqn
Apr 27 14:22:29.601: INFO: Got endpoints: latency-svc-m8zqn [906.424725ms]
Apr 27 14:22:29.604: INFO: Created: latency-svc-gc55g
Apr 27 14:22:29.619: INFO: Got endpoints: latency-svc-gc55g [851.61961ms]
Apr 27 14:22:29.647: INFO: Created: latency-svc-sskst
Apr 27 14:22:29.665: INFO: Got endpoints: latency-svc-sskst [836.857368ms]
Apr 27 14:22:29.683: INFO: Created: latency-svc-nmqd6
Apr 27 14:22:29.698: INFO: Got endpoints: latency-svc-nmqd6 [809.536888ms]
Apr 27 14:22:29.746: INFO: Created: latency-svc-tqbzv
Apr 27 14:22:29.764: INFO: Got endpoints: latency-svc-tqbzv [827.793525ms]
Apr 27 14:22:29.812: INFO: Created: latency-svc-wrzp4
Apr 27 14:22:29.824: INFO: Got endpoints: latency-svc-wrzp4 [791.832074ms]
Apr 27 14:22:29.895: INFO: Created: latency-svc-975vd
Apr 27 14:22:29.902: INFO: Got endpoints: latency-svc-975vd [820.916547ms]
Apr 27 14:22:29.923: INFO: Created: latency-svc-gxmgp
Apr 27 14:22:29.939: INFO: Got endpoints: latency-svc-gxmgp [815.423854ms]
Apr 27 14:22:29.962: INFO: Created: latency-svc-c6xrf
Apr 27 14:22:29.981: INFO: Got endpoints: latency-svc-c6xrf [791.431681ms]
Apr 27 14:22:30.034: INFO: Created: latency-svc-xjppl
Apr 27 14:22:30.037: INFO: Got endpoints: latency-svc-xjppl [786.871439ms]
Apr 27 14:22:30.089: INFO: Created: latency-svc-9466p
Apr 27 14:22:30.108: INFO: Got endpoints: latency-svc-9466p [791.663836ms]
Apr 27 14:22:30.132: INFO: Created: latency-svc-6pjfk
Apr 27 14:22:30.182: INFO: Got endpoints: latency-svc-6pjfk [825.173045ms]
Apr 27 14:22:30.199: INFO: Created: latency-svc-sjsrp
Apr 27 14:22:30.218: INFO: Got endpoints: latency-svc-sjsrp [816.153537ms]
Apr 27 14:22:30.275: INFO: Created: latency-svc-rw5nn
Apr 27 14:22:30.326: INFO: Got endpoints: latency-svc-rw5nn [863.947813ms]
Apr 27 14:22:30.347: INFO: Created: latency-svc-p7q6c
Apr 27 14:22:30.366: INFO: Got endpoints: latency-svc-p7q6c [856.033283ms]
Apr 27 14:22:30.464: INFO: Created: latency-svc-jhvj8
Apr 27 14:22:30.467: INFO: Got endpoints: latency-svc-jhvj8 [865.168743ms]
Apr 27 14:22:30.503: INFO: Created: latency-svc-nb8cs
Apr 27 14:22:30.512: INFO: Got endpoints: latency-svc-nb8cs [893.056194ms]
Apr 27 14:22:30.534: INFO: Created: latency-svc-l5dh6
Apr 27 14:22:30.537: INFO: Got endpoints: latency-svc-l5dh6 [872.072019ms]
Apr 27 14:22:30.562: INFO: Created: latency-svc-tc2hn
Apr 27 14:22:30.606: INFO: Got endpoints: latency-svc-tc2hn [908.447043ms]
Apr 27 14:22:30.637: INFO: Created: latency-svc-l8zpz
Apr 27 14:22:30.646: INFO: Got endpoints: latency-svc-l8zpz [881.623846ms]
Apr 27 14:22:30.740: INFO: Created: latency-svc-9ttw7
Apr 27 14:22:30.755: INFO: Got endpoints: latency-svc-9ttw7 [930.509257ms]
Apr 27 14:22:30.791: INFO: Created: latency-svc-lpsqb
Apr 27 14:22:30.828: INFO: Got endpoints: latency-svc-lpsqb [925.603706ms]
Apr 27 14:22:30.883: INFO: Created: latency-svc-8w42j
Apr 27 14:22:30.899: INFO: Got endpoints: latency-svc-8w42j [960.144864ms]
Apr 27 14:22:30.958: INFO: Created: latency-svc-d8v8k
Apr 27 14:22:31.021: INFO: Got endpoints: latency-svc-d8v8k [1.039660558s]
Apr 27 14:22:31.030: INFO: Created: latency-svc-xk876
Apr 27 14:22:31.044: INFO: Got endpoints: latency-svc-xk876 [1.006547623s]
Apr 27 14:22:31.070: INFO: Created: latency-svc-gcv9m
Apr 27 14:22:31.105: INFO: Got endpoints: latency-svc-gcv9m [996.086236ms]
Apr 27 14:22:31.170: INFO: Created: latency-svc-qmvf9
Apr 27 14:22:31.189: INFO: Got endpoints: latency-svc-qmvf9 [1.006945395s]
Apr 27 14:22:31.216: INFO: Created: latency-svc-mfp7r
Apr 27 14:22:31.231: INFO: Got endpoints: latency-svc-mfp7r [1.013135799s]
Apr 27 14:22:31.264: INFO: Created: latency-svc-5rp78
Apr 27 14:22:31.308: INFO: Got endpoints: latency-svc-5rp78 [981.96379ms]
Apr 27 14:22:31.320: INFO: Created: latency-svc-dvjzl
Apr 27 14:22:31.344: INFO: Got endpoints: latency-svc-dvjzl [977.631563ms]
Apr 27 14:22:31.380: INFO: Created: latency-svc-m28zf
Apr 27 14:22:31.394: INFO: Got endpoints: latency-svc-m28zf [927.351094ms]
Apr 27 14:22:31.446: INFO: Created: latency-svc-ft4pz
Apr 27 14:22:31.448: INFO: Got endpoints: latency-svc-ft4pz [936.223987ms]
Apr 27 14:22:31.480: INFO: Created: latency-svc-2ghvm
Apr 27 14:22:31.497: INFO: Got endpoints: latency-svc-2ghvm [959.607277ms]
Apr 27 14:22:31.518: INFO: Created: latency-svc-l9cg7
Apr 27 14:22:31.542: INFO: Got endpoints: latency-svc-l9cg7 [935.400738ms]
Apr 27 14:22:31.601: INFO: Created: latency-svc-t22qk
Apr 27 14:22:31.605: INFO: Got endpoints: latency-svc-t22qk [959.241904ms]
Apr 27 14:22:31.630: INFO: Created: latency-svc-2rfgq
Apr 27 14:22:31.649: INFO: Got endpoints: latency-svc-2rfgq [894.063155ms]
Apr 27 14:22:31.672: INFO: Created: latency-svc-q7555
Apr 27 14:22:31.690: INFO: Got endpoints: latency-svc-q7555 [862.08371ms]
Apr 27 14:22:31.734: INFO: Created: latency-svc-jxt67
Apr 27 14:22:31.736: INFO: Got endpoints: latency-svc-jxt67 [836.751229ms]
Apr 27 14:22:31.764: INFO: Created: latency-svc-vc8b7
Apr 27 14:22:31.781: INFO: Got endpoints: latency-svc-vc8b7 [760.086128ms]
Apr 27 14:22:31.806: INFO: Created: latency-svc-k6jj7
Apr 27 14:22:31.818: INFO: Got endpoints: latency-svc-k6jj7 [774.283285ms]
Apr 27 14:22:31.865: INFO: Created: latency-svc-tlfz4
Apr 27 14:22:31.867: INFO: Got endpoints: latency-svc-tlfz4 [762.626501ms]
Apr 27 14:22:31.867: INFO: Latencies: [67.752892ms 113.662358ms 132.890515ms 160.949274ms 227.415431ms 304.30409ms 383.407578ms 436.270475ms 502.298703ms 586.039781ms 636.042451ms 708.136974ms 755.764539ms 760.086128ms 762.626501ms 774.283285ms 779.940056ms 785.754809ms 786.871439ms 790.807355ms 791.431681ms 791.663836ms 791.832074ms 792.106874ms 794.868552ms 797.143815ms 797.99569ms 798.548212ms 798.636108ms 803.841735ms 809.53569ms 809.536888ms 811.795678ms 812.530967ms 814.402018ms 815.270728ms 815.423114ms 815.423854ms 815.615535ms 816.153537ms 818.348542ms 820.174862ms 820.871381ms 820.916547ms 821.891172ms 825.173045ms 827.793525ms 828.004854ms 830.855559ms 833.074593ms 836.408354ms 836.751229ms 836.857368ms 839.16297ms 843.828179ms 844.117036ms 844.973602ms 845.468723ms 845.719219ms 845.880904ms 849.354772ms 850.256118ms 851.093373ms 851.273489ms 851.475748ms 851.61961ms 856.033283ms 856.336551ms 858.153063ms 859.404617ms 860.700244ms 861.218061ms 862.08371ms 862.839026ms 863.70272ms 863.847156ms 863.947813ms 865.168743ms 869.818154ms 869.829955ms 871.36057ms 871.415529ms 871.795833ms 872.072019ms 875.663017ms 876.517364ms 877.819684ms 878.330311ms 878.728432ms 879.284029ms 881.402367ms 881.602649ms 881.623846ms 882.50119ms 883.12864ms 886.614245ms 887.053998ms 887.306002ms 887.594789ms 889.950026ms 892.753038ms 893.056194ms 893.468709ms 893.784619ms 894.063155ms 895.836345ms 896.193859ms 899.158753ms 899.306162ms 902.202746ms 904.664474ms 905.036112ms 905.351262ms 906.41589ms 906.424725ms 906.743678ms 908.447043ms 910.006082ms 913.237384ms 914.268917ms 917.787283ms 921.467155ms 923.239695ms 923.5092ms 925.5183ms 925.603706ms 927.351094ms 927.66138ms 929.725425ms 930.509257ms 930.703086ms 930.760212ms 930.793599ms 933.584182ms 935.037965ms 935.325002ms 935.400738ms 935.473714ms 936.223987ms 937.227102ms 938.262957ms 938.400933ms 938.752794ms 940.998883ms 941.144753ms 947.204383ms 948.003356ms 949.91905ms 951.740451ms 951.875873ms 954.418334ms 956.015879ms 956.640701ms 959.241904ms 959.324589ms 959.453965ms 959.480697ms 959.607277ms 960.144864ms 961.215917ms 964.618447ms 966.187721ms 969.254157ms 971.210835ms 971.545223ms 976.261762ms 977.631563ms 980.328255ms 981.96379ms 982.300335ms 983.366228ms 983.452263ms 983.524884ms 988.372777ms 989.30657ms 994.265842ms 994.353178ms 996.086236ms 996.476271ms 996.740226ms 1.000630658s 1.005442273s 1.006547623s 1.006945395s 1.007444287s 1.011571305s 1.013135799s 1.013159795s 1.013950747s 1.018271734s 1.01954813s 1.02131161s 1.025151637s 1.026051218s 1.027506648s 1.035964555s 1.039660558s 1.041895836s 1.044628829s 1.072688139s]
Apr 27 14:22:31.868: INFO: 50 %ile: 892.753038ms
Apr 27 14:22:31.868: INFO: 90 %ile: 1.000630658s
Apr 27 14:22:31.868: INFO: 99 %ile: 1.044628829s
Apr 27 14:22:31.868: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:22:31.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-7710" for this suite.
Apr 27 14:23:07.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:23:07.981: INFO: namespace svc-latency-7710 deletion completed in 36.093031628s
• [SLOW TEST:52.303 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:23:07.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 27 14:23:08.063: INFO: Waiting up to 5m0s for pod "downward-api-3e64ce87-53c0-41b1-9f9d-6fa66c1fff6d" in namespace "downward-api-9650" to be "success or failure"
Apr 27 14:23:08.069: INFO: Pod "downward-api-3e64ce87-53c0-41b1-9f9d-6fa66c1fff6d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121061ms
Apr 27 14:23:10.074: INFO: Pod "downward-api-3e64ce87-53c0-41b1-9f9d-6fa66c1fff6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010318321s
Apr 27 14:23:12.077: INFO: Pod "downward-api-3e64ce87-53c0-41b1-9f9d-6fa66c1fff6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014123496s
STEP: Saw pod success
Apr 27 14:23:12.077: INFO: Pod "downward-api-3e64ce87-53c0-41b1-9f9d-6fa66c1fff6d" satisfied condition "success or failure"
Apr 27 14:23:12.080: INFO: Trying to get logs from node iruya-worker2 pod downward-api-3e64ce87-53c0-41b1-9f9d-6fa66c1fff6d container dapi-container: 
STEP: delete the pod
Apr 27 14:23:12.101: INFO: Waiting for pod downward-api-3e64ce87-53c0-41b1-9f9d-6fa66c1fff6d to disappear
Apr 27 14:23:12.123: INFO: Pod downward-api-3e64ce87-53c0-41b1-9f9d-6fa66c1fff6d no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:23:12.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9650" for this suite.
Apr 27 14:23:18.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:23:18.216: INFO: namespace downward-api-9650 deletion completed in 6.08956384s
• [SLOW TEST:10.234 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:23:18.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:23:22.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8892" for this suite.
Apr 27 14:24:00.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:24:00.425: INFO: namespace kubelet-test-8892 deletion completed in 38.096493535s
• [SLOW TEST:42.209 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:24:00.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-ea781e55-1f7b-4a94-8081-7f472c02c5d0
STEP: Creating a pod to test consume secrets
Apr 27 14:24:00.537: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-43ad25a8-f805-4f6c-a725-3d3c2327caec" in namespace "projected-6592" to be "success or failure"
Apr 27 14:24:00.550: INFO: Pod "pod-projected-secrets-43ad25a8-f805-4f6c-a725-3d3c2327caec": Phase="Pending", Reason="", readiness=false. Elapsed: 12.721852ms
Apr 27 14:24:02.555: INFO: Pod "pod-projected-secrets-43ad25a8-f805-4f6c-a725-3d3c2327caec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017734579s
Apr 27 14:24:04.559: INFO: Pod "pod-projected-secrets-43ad25a8-f805-4f6c-a725-3d3c2327caec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022388231s
STEP: Saw pod success
Apr 27 14:24:04.559: INFO: Pod "pod-projected-secrets-43ad25a8-f805-4f6c-a725-3d3c2327caec" satisfied condition "success or failure"
Apr 27 14:24:04.563: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-43ad25a8-f805-4f6c-a725-3d3c2327caec container secret-volume-test: 
STEP: delete the pod
Apr 27 14:24:04.598: INFO: Waiting for pod pod-projected-secrets-43ad25a8-f805-4f6c-a725-3d3c2327caec to disappear
Apr 27 14:24:04.603: INFO: Pod pod-projected-secrets-43ad25a8-f805-4f6c-a725-3d3c2327caec no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:24:04.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6592" for this suite.
Apr 27 14:24:10.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:24:10.706: INFO: namespace projected-6592 deletion completed in 6.09950822s
• [SLOW TEST:10.280 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:24:10.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-e2db46e0-460a-49d1-8d5a-364991a6be3e
STEP: Creating a pod to test consume secrets
Apr 27 14:24:10.791: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-24f9e5b6-790a-4fd9-8f4d-af24a8bac638" in namespace "projected-6980" to be "success or failure"
Apr 27 14:24:10.795: INFO: Pod "pod-projected-secrets-24f9e5b6-790a-4fd9-8f4d-af24a8bac638": Phase="Pending", Reason="", readiness=false. Elapsed: 3.647417ms
Apr 27 14:24:12.799: INFO: Pod "pod-projected-secrets-24f9e5b6-790a-4fd9-8f4d-af24a8bac638": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007501759s
Apr 27 14:24:14.803: INFO: Pod "pod-projected-secrets-24f9e5b6-790a-4fd9-8f4d-af24a8bac638": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011483255s
STEP: Saw pod success
Apr 27 14:24:14.803: INFO: Pod "pod-projected-secrets-24f9e5b6-790a-4fd9-8f4d-af24a8bac638" satisfied condition "success or failure"
Apr 27 14:24:14.805: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-24f9e5b6-790a-4fd9-8f4d-af24a8bac638 container projected-secret-volume-test: 
STEP: delete the pod
Apr 27 14:24:14.839: INFO: Waiting for pod pod-projected-secrets-24f9e5b6-790a-4fd9-8f4d-af24a8bac638 to disappear
Apr 27 14:24:14.855: INFO: Pod pod-projected-secrets-24f9e5b6-790a-4fd9-8f4d-af24a8bac638 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:24:14.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6980" for this suite.
Apr 27 14:24:20.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:24:20.983: INFO: namespace projected-6980 deletion completed in 6.124004219s • [SLOW TEST:10.277 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:24:20.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-c8nkr in namespace proxy-2092 I0427 14:24:21.108308 6 runners.go:180] Created replication controller with name: proxy-service-c8nkr, namespace: proxy-2092, replica count: 1 I0427 14:24:22.158702 6 runners.go:180] proxy-service-c8nkr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0427 14:24:23.158871 6 runners.go:180] proxy-service-c8nkr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0427 14:24:24.159076 6 runners.go:180] 
proxy-service-c8nkr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0427 14:24:25.159269 6 runners.go:180] proxy-service-c8nkr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0427 14:24:26.159491 6 runners.go:180] proxy-service-c8nkr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0427 14:24:27.159711 6 runners.go:180] proxy-service-c8nkr Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 27 14:24:27.163: INFO: setup took 6.127141136s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 27 14:24:27.171: INFO: (0) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname1/proxy/: foo (200; 8.364328ms) Apr 27 14:24:27.171: INFO: (0) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 8.707452ms) Apr 27 14:24:27.172: INFO: (0) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql/proxy/: test (200; 8.699844ms) Apr 27 14:24:27.172: INFO: (0) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 9.607718ms) Apr 27 14:24:27.173: INFO: (0) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 9.969353ms) Apr 27 14:24:27.173: INFO: (0) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname1/proxy/: foo (200; 10.23877ms) Apr 27 14:24:27.173: INFO: (0) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname2/proxy/: bar (200; 10.49283ms) Apr 27 14:24:27.173: INFO: (0) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:1080/proxy/: ... 
(200; 10.464586ms) Apr 27 14:24:27.179: INFO: (0) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 16.606469ms) Apr 27 14:24:27.179: INFO: (0) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:1080/proxy/: test<... (200; 16.491523ms) Apr 27 14:24:27.179: INFO: (0) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname2/proxy/: bar (200; 16.540141ms) Apr 27 14:24:27.182: INFO: (0) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname2/proxy/: tls qux (200; 19.111526ms) Apr 27 14:24:27.182: INFO: (0) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:443/proxy/: test (200; 4.039705ms) Apr 27 14:24:27.188: INFO: (1) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:462/proxy/: tls qux (200; 4.129085ms) Apr 27 14:24:27.189: INFO: (1) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname2/proxy/: bar (200; 4.31214ms) Apr 27 14:24:27.189: INFO: (1) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 4.591383ms) Apr 27 14:24:27.189: INFO: (1) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 4.552897ms) Apr 27 14:24:27.189: INFO: (1) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:443/proxy/: ... (200; 4.6668ms) Apr 27 14:24:27.189: INFO: (1) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:1080/proxy/: test<... 
(200; 4.621204ms) Apr 27 14:24:27.189: INFO: (1) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:460/proxy/: tls baz (200; 4.908545ms) Apr 27 14:24:27.189: INFO: (1) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname1/proxy/: foo (200; 4.883379ms) Apr 27 14:24:27.189: INFO: (1) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname2/proxy/: tls qux (200; 4.989352ms) Apr 27 14:24:27.189: INFO: (1) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 4.930804ms) Apr 27 14:24:27.189: INFO: (1) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname1/proxy/: tls baz (200; 5.079308ms) Apr 27 14:24:27.189: INFO: (1) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname2/proxy/: bar (200; 5.068966ms) Apr 27 14:24:27.190: INFO: (1) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname1/proxy/: foo (200; 5.765105ms) Apr 27 14:24:27.195: INFO: (2) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 4.856048ms) Apr 27 14:24:27.195: INFO: (2) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 4.89749ms) Apr 27 14:24:27.195: INFO: (2) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 4.981641ms) Apr 27 14:24:27.195: INFO: (2) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:1080/proxy/: test<... 
(200; 5.03477ms) Apr 27 14:24:27.195: INFO: (2) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:462/proxy/: tls qux (200; 5.014567ms) Apr 27 14:24:27.195: INFO: (2) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 5.058008ms) Apr 27 14:24:27.195: INFO: (2) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:443/proxy/: test (200; 5.279909ms) Apr 27 14:24:27.196: INFO: (2) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname1/proxy/: foo (200; 6.41552ms) Apr 27 14:24:27.196: INFO: (2) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname1/proxy/: foo (200; 6.508703ms) Apr 27 14:24:27.196: INFO: (2) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:1080/proxy/: ... (200; 6.460842ms) Apr 27 14:24:27.196: INFO: (2) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname2/proxy/: bar (200; 6.43915ms) Apr 27 14:24:27.197: INFO: (2) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname2/proxy/: bar (200; 6.51457ms) Apr 27 14:24:27.197: INFO: (2) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname1/proxy/: tls baz (200; 6.548607ms) Apr 27 14:24:27.201: INFO: (3) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 3.821541ms) Apr 27 14:24:27.201: INFO: (3) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:1080/proxy/: test<... (200; 4.146691ms) Apr 27 14:24:27.258: INFO: (3) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:460/proxy/: tls baz (200; 60.669393ms) Apr 27 14:24:27.258: INFO: (3) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:1080/proxy/: ... 
(200; 60.715527ms) Apr 27 14:24:27.258: INFO: (3) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 60.86012ms) Apr 27 14:24:27.258: INFO: (3) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:462/proxy/: tls qux (200; 61.055511ms) Apr 27 14:24:27.258: INFO: (3) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql/proxy/: test (200; 60.847526ms) Apr 27 14:24:27.258: INFO: (3) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 60.800002ms) Apr 27 14:24:27.258: INFO: (3) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 60.895634ms) Apr 27 14:24:27.258: INFO: (3) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:443/proxy/: ... (200; 6.394589ms) Apr 27 14:24:27.267: INFO: (4) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 6.320663ms) Apr 27 14:24:27.267: INFO: (4) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:443/proxy/: test<... 
(200; 6.24763ms) Apr 27 14:24:27.267: INFO: (4) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 6.339237ms) Apr 27 14:24:27.267: INFO: (4) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:462/proxy/: tls qux (200; 6.347616ms) Apr 27 14:24:27.267: INFO: (4) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql/proxy/: test (200; 6.312604ms) Apr 27 14:24:27.267: INFO: (4) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname2/proxy/: tls qux (200; 6.519467ms) Apr 27 14:24:27.267: INFO: (4) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname2/proxy/: bar (200; 7.237767ms) Apr 27 14:24:27.268: INFO: (4) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname1/proxy/: tls baz (200; 7.554594ms) Apr 27 14:24:27.268: INFO: (4) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname2/proxy/: bar (200; 7.40363ms) Apr 27 14:24:27.268: INFO: (4) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname1/proxy/: foo (200; 7.533508ms) Apr 27 14:24:27.272: INFO: (5) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 3.821217ms) Apr 27 14:24:27.273: INFO: (5) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 5.086308ms) Apr 27 14:24:27.273: INFO: (5) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:460/proxy/: tls baz (200; 5.092129ms) Apr 27 14:24:27.273: INFO: (5) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql/proxy/: test (200; 5.167153ms) Apr 27 14:24:27.273: INFO: (5) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:443/proxy/: test<... (200; 5.546658ms) Apr 27 14:24:27.273: INFO: (5) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:1080/proxy/: ... 
(200; 5.58363ms) Apr 27 14:24:27.273: INFO: (5) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname1/proxy/: foo (200; 5.548824ms) Apr 27 14:24:27.273: INFO: (5) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 5.617836ms) Apr 27 14:24:27.273: INFO: (5) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname1/proxy/: tls baz (200; 5.672433ms) Apr 27 14:24:27.274: INFO: (5) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 5.718454ms) Apr 27 14:24:27.274: INFO: (5) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname1/proxy/: foo (200; 5.751716ms) Apr 27 14:24:27.274: INFO: (5) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname2/proxy/: tls qux (200; 5.772122ms) Apr 27 14:24:27.276: INFO: (6) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:462/proxy/: tls qux (200; 2.170254ms) Apr 27 14:24:27.276: INFO: (6) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:443/proxy/: test<... (200; 4.167912ms) Apr 27 14:24:27.278: INFO: (6) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 4.369076ms) Apr 27 14:24:27.278: INFO: (6) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname1/proxy/: foo (200; 4.527607ms) Apr 27 14:24:27.278: INFO: (6) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:1080/proxy/: ... 
(200; 3.484805ms) Apr 27 14:24:27.278: INFO: (6) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 4.071932ms) Apr 27 14:24:27.278: INFO: (6) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql/proxy/: test (200; 4.367787ms) Apr 27 14:24:27.279: INFO: (6) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname1/proxy/: tls baz (200; 4.86534ms) Apr 27 14:24:27.279: INFO: (6) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:460/proxy/: tls baz (200; 3.418217ms) Apr 27 14:24:27.279: INFO: (6) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname1/proxy/: foo (200; 3.895969ms) Apr 27 14:24:27.279: INFO: (6) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname2/proxy/: tls qux (200; 4.30755ms) Apr 27 14:24:27.279: INFO: (6) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname2/proxy/: bar (200; 4.538839ms) Apr 27 14:24:27.279: INFO: (6) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname2/proxy/: bar (200; 4.445639ms) Apr 27 14:24:27.279: INFO: (6) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 4.926877ms) Apr 27 14:24:27.286: INFO: (7) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname2/proxy/: bar (200; 6.496961ms) Apr 27 14:24:27.286: INFO: (7) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname1/proxy/: tls baz (200; 6.582413ms) Apr 27 14:24:27.286: INFO: (7) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:460/proxy/: tls baz (200; 6.888138ms) Apr 27 14:24:27.286: INFO: (7) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname2/proxy/: bar (200; 6.886487ms) Apr 27 14:24:27.286: INFO: (7) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:462/proxy/: tls qux (200; 7.109254ms) Apr 27 14:24:27.286: INFO: (7) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql/proxy/: test 
(200; 7.090423ms) Apr 27 14:24:27.287: INFO: (7) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname2/proxy/: tls qux (200; 7.271616ms) Apr 27 14:24:27.287: INFO: (7) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 7.394298ms) Apr 27 14:24:27.287: INFO: (7) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:1080/proxy/: test<... (200; 7.509772ms) Apr 27 14:24:27.287: INFO: (7) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 7.59282ms) Apr 27 14:24:27.287: INFO: (7) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname1/proxy/: foo (200; 7.575973ms) Apr 27 14:24:27.287: INFO: (7) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 7.574299ms) Apr 27 14:24:27.287: INFO: (7) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:443/proxy/: ... (200; 7.67291ms) Apr 27 14:24:27.287: INFO: (7) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 7.796043ms) Apr 27 14:24:27.287: INFO: (7) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname1/proxy/: foo (200; 7.924764ms) Apr 27 14:24:27.299: INFO: (8) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:460/proxy/: tls baz (200; 11.732562ms) Apr 27 14:24:27.299: INFO: (8) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 11.716063ms) Apr 27 14:24:27.299: INFO: (8) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:1080/proxy/: ... (200; 11.954125ms) Apr 27 14:24:27.299: INFO: (8) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql/proxy/: test (200; 11.981323ms) Apr 27 14:24:27.299: INFO: (8) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 12.041028ms) Apr 27 14:24:27.300: INFO: (8) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:443/proxy/: test<... 
(200; 12.722114ms) Apr 27 14:24:27.300: INFO: (8) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 12.731442ms) Apr 27 14:24:27.301: INFO: (8) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname2/proxy/: bar (200; 13.527734ms) Apr 27 14:24:27.302: INFO: (8) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname1/proxy/: foo (200; 14.526017ms) Apr 27 14:24:27.302: INFO: (8) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname1/proxy/: tls baz (200; 14.620508ms) Apr 27 14:24:27.302: INFO: (8) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname1/proxy/: foo (200; 14.806473ms) Apr 27 14:24:27.302: INFO: (8) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname2/proxy/: bar (200; 14.929877ms) Apr 27 14:24:27.302: INFO: (8) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname2/proxy/: tls qux (200; 14.818516ms) Apr 27 14:24:27.306: INFO: (9) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 3.52814ms) Apr 27 14:24:27.306: INFO: (9) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:1080/proxy/: test<... (200; 3.51921ms) Apr 27 14:24:27.307: INFO: (9) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:443/proxy/: test (200; 4.241078ms) Apr 27 14:24:27.307: INFO: (9) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 4.320133ms) Apr 27 14:24:27.307: INFO: (9) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 4.857233ms) Apr 27 14:24:27.307: INFO: (9) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:1080/proxy/: ... 
(200; 4.806697ms) Apr 27 14:24:27.307: INFO: (9) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 4.967356ms) Apr 27 14:24:27.307: INFO: (9) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:462/proxy/: tls qux (200; 5.035087ms) Apr 27 14:24:27.308: INFO: (9) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:460/proxy/: tls baz (200; 5.004044ms) Apr 27 14:24:27.308: INFO: (9) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname2/proxy/: bar (200; 5.319227ms) Apr 27 14:24:27.309: INFO: (9) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname2/proxy/: bar (200; 6.192406ms) Apr 27 14:24:27.309: INFO: (9) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname1/proxy/: tls baz (200; 6.254147ms) Apr 27 14:24:27.309: INFO: (9) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname1/proxy/: foo (200; 6.242405ms) Apr 27 14:24:27.309: INFO: (9) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname2/proxy/: tls qux (200; 6.318685ms) Apr 27 14:24:27.309: INFO: (9) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname1/proxy/: foo (200; 6.229099ms) Apr 27 14:24:27.312: INFO: (10) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:460/proxy/: tls baz (200; 2.799954ms) Apr 27 14:24:27.313: INFO: (10) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:462/proxy/: tls qux (200; 4.094742ms) Apr 27 14:24:27.313: INFO: (10) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 4.104355ms) Apr 27 14:24:27.313: INFO: (10) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:1080/proxy/: test<... 
(200; 4.173246ms) Apr 27 14:24:27.313: INFO: (10) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname1/proxy/: foo (200; 4.255326ms) Apr 27 14:24:27.314: INFO: (10) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname2/proxy/: bar (200; 4.627907ms) Apr 27 14:24:27.314: INFO: (10) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:1080/proxy/: ... (200; 4.598581ms) Apr 27 14:24:27.314: INFO: (10) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname1/proxy/: foo (200; 4.62383ms) Apr 27 14:24:27.314: INFO: (10) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname1/proxy/: tls baz (200; 4.840598ms) Apr 27 14:24:27.314: INFO: (10) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname2/proxy/: bar (200; 4.853826ms) Apr 27 14:24:27.314: INFO: (10) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql/proxy/: test (200; 4.827858ms) Apr 27 14:24:27.314: INFO: (10) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:443/proxy/: ... (200; 3.707215ms) Apr 27 14:24:27.318: INFO: (11) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:460/proxy/: tls baz (200; 3.923411ms) Apr 27 14:24:27.318: INFO: (11) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 3.789045ms) Apr 27 14:24:27.318: INFO: (11) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 3.788256ms) Apr 27 14:24:27.318: INFO: (11) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql/proxy/: test (200; 3.884789ms) Apr 27 14:24:27.318: INFO: (11) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 3.817551ms) Apr 27 14:24:27.318: INFO: (11) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:462/proxy/: tls qux (200; 3.970723ms) Apr 27 14:24:27.319: INFO: (11) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:443/proxy/: test<... 
(200; 4.025461ms) Apr 27 14:24:27.319: INFO: (11) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname2/proxy/: bar (200; 4.813848ms) Apr 27 14:24:27.319: INFO: (11) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname1/proxy/: foo (200; 4.610432ms) Apr 27 14:24:27.319: INFO: (11) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname1/proxy/: foo (200; 4.807922ms) Apr 27 14:24:27.319: INFO: (11) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname2/proxy/: tls qux (200; 4.969261ms) Apr 27 14:24:27.319: INFO: (11) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname2/proxy/: bar (200; 4.894669ms) Apr 27 14:24:27.320: INFO: (11) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname1/proxy/: tls baz (200; 5.148389ms) Apr 27 14:24:27.322: INFO: (12) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:443/proxy/: test (200; 4.253724ms) Apr 27 14:24:27.324: INFO: (12) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:460/proxy/: tls baz (200; 4.38598ms) Apr 27 14:24:27.325: INFO: (12) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 5.042544ms) Apr 27 14:24:27.325: INFO: (12) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 5.226424ms) Apr 27 14:24:27.325: INFO: (12) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:1080/proxy/: test<... 
(200; 5.481771ms) Apr 27 14:24:27.325: INFO: (12) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:462/proxy/: tls qux (200; 5.489597ms) Apr 27 14:24:27.327: INFO: (12) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname1/proxy/: foo (200; 7.596994ms) Apr 27 14:24:27.328: INFO: (12) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 7.733025ms) Apr 27 14:24:27.328: INFO: (12) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 7.86635ms) Apr 27 14:24:27.328: INFO: (12) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname2/proxy/: tls qux (200; 7.859482ms) Apr 27 14:24:27.328: INFO: (12) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:1080/proxy/: ... (200; 7.759931ms) Apr 27 14:24:27.328: INFO: (12) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname2/proxy/: bar (200; 7.843417ms) Apr 27 14:24:27.329: INFO: (12) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname1/proxy/: foo (200; 9.170383ms) Apr 27 14:24:27.329: INFO: (12) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname2/proxy/: bar (200; 9.225651ms) Apr 27 14:24:27.329: INFO: (12) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname1/proxy/: tls baz (200; 9.257755ms) Apr 27 14:24:27.336: INFO: (13) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 6.79443ms) Apr 27 14:24:27.336: INFO: (13) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 6.774318ms) Apr 27 14:24:27.336: INFO: (13) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 6.898323ms) Apr 27 14:24:27.336: INFO: (13) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname2/proxy/: tls qux (200; 7.189648ms) Apr 27 14:24:27.336: INFO: (13) 
/api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql/proxy/: test (200; 7.073335ms) Apr 27 14:24:27.337: INFO: (13) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:1080/proxy/: ... (200; 8.17351ms) Apr 27 14:24:27.337: INFO: (13) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:1080/proxy/: test<... (200; 8.091897ms) Apr 27 14:24:27.337: INFO: (13) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:443/proxy/: test<... (200; 3.965477ms) Apr 27 14:24:27.344: INFO: (14) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 5.950156ms) Apr 27 14:24:27.344: INFO: (14) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 5.644331ms) Apr 27 14:24:27.345: INFO: (14) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 5.767733ms) Apr 27 14:24:27.345: INFO: (14) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 5.725353ms) Apr 27 14:24:27.345: INFO: (14) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:443/proxy/: ... 
(200; 5.722742ms) Apr 27 14:24:27.345: INFO: (14) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname1/proxy/: foo (200; 6.204833ms) Apr 27 14:24:27.345: INFO: (14) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql/proxy/: test (200; 6.375908ms) Apr 27 14:24:27.345: INFO: (14) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:460/proxy/: tls baz (200; 6.168656ms) Apr 27 14:24:27.345: INFO: (14) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname2/proxy/: tls qux (200; 6.603015ms) Apr 27 14:24:27.346: INFO: (14) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname2/proxy/: bar (200; 6.91058ms) Apr 27 14:24:27.346: INFO: (14) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname2/proxy/: bar (200; 7.441078ms) Apr 27 14:24:27.346: INFO: (14) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname1/proxy/: foo (200; 7.354272ms) Apr 27 14:24:27.346: INFO: (14) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname1/proxy/: tls baz (200; 7.489169ms) Apr 27 14:24:27.349: INFO: (15) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 3.100539ms) Apr 27 14:24:27.349: INFO: (15) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 3.047371ms) Apr 27 14:24:27.351: INFO: (15) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:1080/proxy/: test<... (200; 4.268805ms) Apr 27 14:24:27.351: INFO: (15) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql/proxy/: test (200; 4.616543ms) Apr 27 14:24:27.351: INFO: (15) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:1080/proxy/: ... 
(200; 4.571477ms) Apr 27 14:24:27.351: INFO: (15) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname1/proxy/: foo (200; 4.676928ms) Apr 27 14:24:27.351: INFO: (15) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname2/proxy/: tls qux (200; 4.81387ms) Apr 27 14:24:27.351: INFO: (15) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname1/proxy/: tls baz (200; 4.998342ms) Apr 27 14:24:27.352: INFO: (15) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:443/proxy/: test<... (200; 3.273623ms) Apr 27 14:24:27.356: INFO: (16) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:1080/proxy/: ... (200; 4.030368ms) Apr 27 14:24:27.356: INFO: (16) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 4.172859ms) Apr 27 14:24:27.356: INFO: (16) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 4.0886ms) Apr 27 14:24:27.356: INFO: (16) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 4.082026ms) Apr 27 14:24:27.356: INFO: (16) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:443/proxy/: test (200; 4.20623ms) Apr 27 14:24:27.356: INFO: (16) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:462/proxy/: tls qux (200; 4.171225ms) Apr 27 14:24:27.394: INFO: (16) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname1/proxy/: foo (200; 42.037198ms) Apr 27 14:24:27.394: INFO: (16) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname2/proxy/: bar (200; 41.967244ms) Apr 27 14:24:27.394: INFO: (16) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname2/proxy/: tls qux (200; 42.309016ms) Apr 27 14:24:27.394: INFO: (16) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname1/proxy/: tls baz (200; 42.179535ms) Apr 27 14:24:27.395: INFO: (16) 
/api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname1/proxy/: foo (200; 42.894198ms) Apr 27 14:24:27.396: INFO: (16) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname2/proxy/: bar (200; 43.813779ms) Apr 27 14:24:27.399: INFO: (17) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 3.07182ms) Apr 27 14:24:27.401: INFO: (17) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 4.254206ms) Apr 27 14:24:27.401: INFO: (17) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:462/proxy/: tls qux (200; 4.42991ms) Apr 27 14:24:27.402: INFO: (17) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:443/proxy/: ... (200; 5.152922ms) Apr 27 14:24:27.403: INFO: (17) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 5.708835ms) Apr 27 14:24:27.403: INFO: (17) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:460/proxy/: tls baz (200; 6.395916ms) Apr 27 14:24:27.404: INFO: (17) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname1/proxy/: foo (200; 7.138734ms) Apr 27 14:24:27.404: INFO: (17) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname1/proxy/: foo (200; 7.661317ms) Apr 27 14:24:27.404: INFO: (17) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname1/proxy/: tls baz (200; 7.555855ms) Apr 27 14:24:27.404: INFO: (17) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql/proxy/: test (200; 7.220099ms) Apr 27 14:24:27.404: INFO: (17) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname2/proxy/: tls qux (200; 7.969739ms) Apr 27 14:24:27.404: INFO: (17) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 7.339895ms) Apr 27 14:24:27.404: INFO: (17) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname2/proxy/: bar (200; 7.295196ms) Apr 27 14:24:27.404: 
INFO: (17) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname2/proxy/: bar (200; 7.45698ms) Apr 27 14:24:27.404: INFO: (17) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:1080/proxy/: test<... (200; 7.389692ms) Apr 27 14:24:27.409: INFO: (18) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 3.714505ms) Apr 27 14:24:27.409: INFO: (18) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:1080/proxy/: test<... (200; 3.998275ms) Apr 27 14:24:27.409: INFO: (18) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname2/proxy/: bar (200; 4.118216ms) Apr 27 14:24:27.409: INFO: (18) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname2/proxy/: bar (200; 4.134355ms) Apr 27 14:24:27.409: INFO: (18) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:462/proxy/: tls qux (200; 4.217663ms) Apr 27 14:24:27.409: INFO: (18) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 4.445766ms) Apr 27 14:24:27.410: INFO: (18) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname1/proxy/: foo (200; 5.048467ms) Apr 27 14:24:27.410: INFO: (18) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:1080/proxy/: ... (200; 4.785482ms) Apr 27 14:24:27.410: INFO: (18) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 4.966049ms) Apr 27 14:24:27.410: INFO: (18) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql/proxy/: test (200; 4.923988ms) Apr 27 14:24:27.410: INFO: (18) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:460/proxy/: tls baz (200; 5.439854ms) Apr 27 14:24:27.410: INFO: (18) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 5.300757ms) Apr 27 14:24:27.411: INFO: (18) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:443/proxy/: test<... 
(200; 3.444875ms) Apr 27 14:24:27.415: INFO: (19) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:462/proxy/: tls qux (200; 3.622374ms) Apr 27 14:24:27.415: INFO: (19) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql/proxy/: test (200; 3.638708ms) Apr 27 14:24:27.415: INFO: (19) /api/v1/namespaces/proxy-2092/pods/proxy-service-c8nkr-wvfql:162/proxy/: bar (200; 3.658228ms) Apr 27 14:24:27.415: INFO: (19) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:160/proxy/: foo (200; 3.679105ms) Apr 27 14:24:27.415: INFO: (19) /api/v1/namespaces/proxy-2092/services/proxy-service-c8nkr:portname1/proxy/: foo (200; 3.732522ms) Apr 27 14:24:27.415: INFO: (19) /api/v1/namespaces/proxy-2092/pods/http:proxy-service-c8nkr-wvfql:1080/proxy/: ... (200; 4.252923ms) Apr 27 14:24:27.415: INFO: (19) /api/v1/namespaces/proxy-2092/services/https:proxy-service-c8nkr:tlsportname2/proxy/: tls qux (200; 4.332447ms) Apr 27 14:24:27.415: INFO: (19) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:460/proxy/: tls baz (200; 4.225266ms) Apr 27 14:24:27.415: INFO: (19) /api/v1/namespaces/proxy-2092/services/http:proxy-service-c8nkr:portname2/proxy/: bar (200; 4.177207ms) Apr 27 14:24:27.415: INFO: (19) /api/v1/namespaces/proxy-2092/pods/https:proxy-service-c8nkr-wvfql:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 27 14:24:38.126: INFO: Creating deployment "test-recreate-deployment" Apr 27 14:24:38.138: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 27 
14:24:38.150: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 27 14:24:40.158: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 27 14:24:40.160: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723594278, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723594278, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723594278, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723594278, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 27 14:24:42.164: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 27 14:24:42.171: INFO: Updating deployment test-recreate-deployment Apr 27 14:24:42.171: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 27 14:24:42.556: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-4018,SelfLink:/apis/apps/v1/namespaces/deployment-4018/deployments/test-recreate-deployment,UID:0db41f2c-35a4-4b0e-8047-78e744882582,ResourceVersion:7731701,Generation:2,CreationTimestamp:2020-04-27 14:24:38 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-04-27 14:24:42 +0000 UTC 2020-04-27 14:24:42 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-04-27 14:24:42 +0000 UTC 2020-04-27 14:24:38 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Apr 27 14:24:42.560: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-4018,SelfLink:/apis/apps/v1/namespaces/deployment-4018/replicasets/test-recreate-deployment-5c8c9cc69d,UID:e045e2a9-b9e5-4470-a9df-a80424ac30ba,ResourceVersion:7731700,Generation:1,CreationTimestamp:2020-04-27 14:24:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 0db41f2c-35a4-4b0e-8047-78e744882582 0xc001f0ca67 0xc001f0ca68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 27 14:24:42.560: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 27 14:24:42.560: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-4018,SelfLink:/apis/apps/v1/namespaces/deployment-4018/replicasets/test-recreate-deployment-6df85df6b9,UID:24becff3-9436-4d27-ae6d-5b17cef4531c,ResourceVersion:7731690,Generation:2,CreationTimestamp:2020-04-27 14:24:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 0db41f2c-35a4-4b0e-8047-78e744882582 0xc001f0cb37 0xc001f0cb38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 27 14:24:42.564: INFO: Pod "test-recreate-deployment-5c8c9cc69d-tkdng" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-tkdng,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-4018,SelfLink:/api/v1/namespaces/deployment-4018/pods/test-recreate-deployment-5c8c9cc69d-tkdng,UID:0fb37e68-ac4e-4218-b25e-eb667e4fe81e,ResourceVersion:7731702,Generation:0,CreationTimestamp:2020-04-27 14:24:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d e045e2a9-b9e5-4470-a9df-a80424ac30ba 0xc0029f40e7 0xc0029f40e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f59bg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f59bg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f59bg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029f41d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029f41f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:24:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:24:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:24:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:24:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-27 14:24:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:24:42.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4018" for this suite. 
Apr 27 14:24:48.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:24:48.693: INFO: namespace deployment-4018 deletion completed in 6.126255281s • [SLOW TEST:10.616 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:24:48.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:25:14.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6976" for this suite. Apr 27 14:25:21.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:25:21.086: INFO: namespace namespaces-6976 deletion completed in 6.099674266s STEP: Destroying namespace "nsdeletetest-7382" for this suite. Apr 27 14:25:21.089: INFO: Namespace nsdeletetest-7382 was already deleted STEP: Destroying namespace "nsdeletetest-9832" for this suite. Apr 27 14:25:27.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:25:27.236: INFO: namespace nsdeletetest-9832 deletion completed in 6.147193861s • [SLOW TEST:38.542 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:25:27.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default 
service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-ca8964ab-bca4-4330-bd30-23350548a45e STEP: Creating a pod to test consume secrets Apr 27 14:25:27.409: INFO: Waiting up to 5m0s for pod "pod-secrets-f8529725-c444-4b58-bbdf-702969ffacb9" in namespace "secrets-1744" to be "success or failure" Apr 27 14:25:27.415: INFO: Pod "pod-secrets-f8529725-c444-4b58-bbdf-702969ffacb9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016357ms Apr 27 14:25:29.426: INFO: Pod "pod-secrets-f8529725-c444-4b58-bbdf-702969ffacb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017257139s Apr 27 14:25:31.432: INFO: Pod "pod-secrets-f8529725-c444-4b58-bbdf-702969ffacb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023239192s STEP: Saw pod success Apr 27 14:25:31.432: INFO: Pod "pod-secrets-f8529725-c444-4b58-bbdf-702969ffacb9" satisfied condition "success or failure" Apr 27 14:25:31.435: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-f8529725-c444-4b58-bbdf-702969ffacb9 container secret-volume-test: STEP: delete the pod Apr 27 14:25:31.505: INFO: Waiting for pod pod-secrets-f8529725-c444-4b58-bbdf-702969ffacb9 to disappear Apr 27 14:25:31.522: INFO: Pod pod-secrets-f8529725-c444-4b58-bbdf-702969ffacb9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:25:31.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1744" for this suite. 
Apr 27 14:25:37.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:25:37.615: INFO: namespace secrets-1744 deletion completed in 6.088312968s • [SLOW TEST:10.378 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:25:37.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 27 14:25:37.649: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:25:43.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "init-container-1308" for this suite. Apr 27 14:25:49.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:25:49.608: INFO: namespace init-container-1308 deletion completed in 6.123185497s • [SLOW TEST:11.993 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:25:49.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5058.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5058.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5058.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5058.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod 
STEP: looking for the results for each expected name from probers Apr 27 14:25:55.734: INFO: DNS probes using dns-test-da0897f5-a53f-4b33-8daa-72dc242e8cdd succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5058.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5058.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5058.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5058.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 27 14:26:01.811: INFO: File wheezy_udp@dns-test-service-3.dns-5058.svc.cluster.local from pod dns-5058/dns-test-0ec07013-7c28-49bd-b5e4-af7f96307789 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 27 14:26:01.815: INFO: File jessie_udp@dns-test-service-3.dns-5058.svc.cluster.local from pod dns-5058/dns-test-0ec07013-7c28-49bd-b5e4-af7f96307789 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 27 14:26:01.815: INFO: Lookups using dns-5058/dns-test-0ec07013-7c28-49bd-b5e4-af7f96307789 failed for: [wheezy_udp@dns-test-service-3.dns-5058.svc.cluster.local jessie_udp@dns-test-service-3.dns-5058.svc.cluster.local] Apr 27 14:26:06.823: INFO: File wheezy_udp@dns-test-service-3.dns-5058.svc.cluster.local from pod dns-5058/dns-test-0ec07013-7c28-49bd-b5e4-af7f96307789 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 27 14:26:06.826: INFO: File jessie_udp@dns-test-service-3.dns-5058.svc.cluster.local from pod dns-5058/dns-test-0ec07013-7c28-49bd-b5e4-af7f96307789 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 27 14:26:06.826: INFO: Lookups using dns-5058/dns-test-0ec07013-7c28-49bd-b5e4-af7f96307789 failed for: [wheezy_udp@dns-test-service-3.dns-5058.svc.cluster.local jessie_udp@dns-test-service-3.dns-5058.svc.cluster.local] Apr 27 14:26:11.819: INFO: File wheezy_udp@dns-test-service-3.dns-5058.svc.cluster.local from pod dns-5058/dns-test-0ec07013-7c28-49bd-b5e4-af7f96307789 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 27 14:26:11.822: INFO: File jessie_udp@dns-test-service-3.dns-5058.svc.cluster.local from pod dns-5058/dns-test-0ec07013-7c28-49bd-b5e4-af7f96307789 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 27 14:26:11.822: INFO: Lookups using dns-5058/dns-test-0ec07013-7c28-49bd-b5e4-af7f96307789 failed for: [wheezy_udp@dns-test-service-3.dns-5058.svc.cluster.local jessie_udp@dns-test-service-3.dns-5058.svc.cluster.local] Apr 27 14:26:16.822: INFO: File wheezy_udp@dns-test-service-3.dns-5058.svc.cluster.local from pod dns-5058/dns-test-0ec07013-7c28-49bd-b5e4-af7f96307789 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 27 14:26:16.825: INFO: File jessie_udp@dns-test-service-3.dns-5058.svc.cluster.local from pod dns-5058/dns-test-0ec07013-7c28-49bd-b5e4-af7f96307789 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 27 14:26:16.826: INFO: Lookups using dns-5058/dns-test-0ec07013-7c28-49bd-b5e4-af7f96307789 failed for: [wheezy_udp@dns-test-service-3.dns-5058.svc.cluster.local jessie_udp@dns-test-service-3.dns-5058.svc.cluster.local] Apr 27 14:26:21.820: INFO: File wheezy_udp@dns-test-service-3.dns-5058.svc.cluster.local from pod dns-5058/dns-test-0ec07013-7c28-49bd-b5e4-af7f96307789 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 27 14:26:21.823: INFO: File jessie_udp@dns-test-service-3.dns-5058.svc.cluster.local from pod dns-5058/dns-test-0ec07013-7c28-49bd-b5e4-af7f96307789 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 27 14:26:21.823: INFO: Lookups using dns-5058/dns-test-0ec07013-7c28-49bd-b5e4-af7f96307789 failed for: [wheezy_udp@dns-test-service-3.dns-5058.svc.cluster.local jessie_udp@dns-test-service-3.dns-5058.svc.cluster.local]
Apr 27 14:26:26.822: INFO: DNS probes using dns-test-0ec07013-7c28-49bd-b5e4-af7f96307789 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5058.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5058.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5058.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5058.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 27 14:26:33.392: INFO: DNS probes using dns-test-71cea43b-3c24-40c1-959c-005d06f579a7 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:26:33.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5058" for this suite.
Apr 27 14:26:40.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:26:40.110: INFO: namespace dns-5058 deletion completed in 6.117680144s
• [SLOW TEST:50.501 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:26:40.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 27 14:27:02.212: INFO: Container started at 2020-04-27 14:26:42 +0000 UTC, pod became ready at 2020-04-27 14:27:01 +0000 UTC
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:27:02.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7056" for this suite.
Apr 27 14:27:24.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:27:24.329: INFO: namespace container-probe-7056 deletion completed in 22.113174851s
• [SLOW TEST:44.218 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:27:24.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-zw6v
STEP: Creating a pod to test atomic-volume-subpath
Apr 27 14:27:24.425: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-zw6v" in namespace "subpath-5428" to be "success or failure"
Apr 27 14:27:24.450: INFO: Pod "pod-subpath-test-configmap-zw6v": Phase="Pending", Reason="", readiness=false. Elapsed: 25.032898ms
Apr 27 14:27:26.475: INFO: Pod "pod-subpath-test-configmap-zw6v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050139138s
Apr 27 14:27:28.479: INFO: Pod "pod-subpath-test-configmap-zw6v": Phase="Running", Reason="", readiness=true. Elapsed: 4.054164421s
Apr 27 14:27:30.482: INFO: Pod "pod-subpath-test-configmap-zw6v": Phase="Running", Reason="", readiness=true. Elapsed: 6.057696856s
Apr 27 14:27:32.486: INFO: Pod "pod-subpath-test-configmap-zw6v": Phase="Running", Reason="", readiness=true. Elapsed: 8.061601681s
Apr 27 14:27:34.510: INFO: Pod "pod-subpath-test-configmap-zw6v": Phase="Running", Reason="", readiness=true. Elapsed: 10.085776274s
Apr 27 14:27:36.522: INFO: Pod "pod-subpath-test-configmap-zw6v": Phase="Running", Reason="", readiness=true. Elapsed: 12.097618477s
Apr 27 14:27:38.526: INFO: Pod "pod-subpath-test-configmap-zw6v": Phase="Running", Reason="", readiness=true. Elapsed: 14.101830268s
Apr 27 14:27:40.533: INFO: Pod "pod-subpath-test-configmap-zw6v": Phase="Running", Reason="", readiness=true. Elapsed: 16.107952089s
Apr 27 14:27:42.537: INFO: Pod "pod-subpath-test-configmap-zw6v": Phase="Running", Reason="", readiness=true. Elapsed: 18.112172919s
Apr 27 14:27:44.541: INFO: Pod "pod-subpath-test-configmap-zw6v": Phase="Running", Reason="", readiness=true. Elapsed: 20.116528147s
Apr 27 14:27:46.545: INFO: Pod "pod-subpath-test-configmap-zw6v": Phase="Running", Reason="", readiness=true. Elapsed: 22.120233145s
Apr 27 14:27:48.549: INFO: Pod "pod-subpath-test-configmap-zw6v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.124253866s
STEP: Saw pod success
Apr 27 14:27:48.549: INFO: Pod "pod-subpath-test-configmap-zw6v" satisfied condition "success or failure"
Apr 27 14:27:48.551: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-zw6v container test-container-subpath-configmap-zw6v:
STEP: delete the pod
Apr 27 14:27:48.576: INFO: Waiting for pod pod-subpath-test-configmap-zw6v to disappear
Apr 27 14:27:48.580: INFO: Pod pod-subpath-test-configmap-zw6v no longer exists
STEP: Deleting pod pod-subpath-test-configmap-zw6v
Apr 27 14:27:48.580: INFO: Deleting pod "pod-subpath-test-configmap-zw6v" in namespace "subpath-5428"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:27:48.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5428" for this suite.
Apr 27 14:27:54.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:27:54.722: INFO: namespace subpath-5428 deletion completed in 6.136377804s
• [SLOW TEST:30.392 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:27:54.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 27 14:27:54.831: INFO: Waiting up to 5m0s for pod "pod-8f71ce51-b39f-45f9-ba20-f81fc886eaee" in namespace "emptydir-40" to be "success or failure"
Apr 27 14:27:54.838: INFO: Pod "pod-8f71ce51-b39f-45f9-ba20-f81fc886eaee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.889815ms
Apr 27 14:27:56.842: INFO: Pod "pod-8f71ce51-b39f-45f9-ba20-f81fc886eaee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010468101s
Apr 27 14:27:58.846: INFO: Pod "pod-8f71ce51-b39f-45f9-ba20-f81fc886eaee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014831025s
STEP: Saw pod success
Apr 27 14:27:58.846: INFO: Pod "pod-8f71ce51-b39f-45f9-ba20-f81fc886eaee" satisfied condition "success or failure"
Apr 27 14:27:58.849: INFO: Trying to get logs from node iruya-worker pod pod-8f71ce51-b39f-45f9-ba20-f81fc886eaee container test-container:
STEP: delete the pod
Apr 27 14:27:58.913: INFO: Waiting for pod pod-8f71ce51-b39f-45f9-ba20-f81fc886eaee to disappear
Apr 27 14:27:58.926: INFO: Pod pod-8f71ce51-b39f-45f9-ba20-f81fc886eaee no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:27:58.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-40" for this suite.
Apr 27 14:28:04.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:28:05.016: INFO: namespace emptydir-40 deletion completed in 6.08656129s
• [SLOW TEST:10.292 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:28:05.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Apr 27 14:28:05.054: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 27 14:28:05.078: INFO: Waiting for terminating namespaces to be deleted...
Apr 27 14:28:05.082: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Apr 27 14:28:05.086: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 27 14:28:05.087: INFO: Container kube-proxy ready: true, restart count 0
Apr 27 14:28:05.087: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 27 14:28:05.087: INFO: Container kindnet-cni ready: true, restart count 0
Apr 27 14:28:05.087: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Apr 27 14:28:05.107: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
Apr 27 14:28:05.107: INFO: Container kube-proxy ready: true, restart count 0
Apr 27 14:28:05.107: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
Apr 27 14:28:05.107: INFO: Container kindnet-cni ready: true, restart count 0
Apr 27 14:28:05.107: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
Apr 27 14:28:05.107: INFO: Container coredns ready: true, restart count 0
Apr 27 14:28:05.107: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
Apr 27 14:28:05.107: INFO: Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-8b54c35d-7dce-44fb-ade1-3beee0771220 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-8b54c35d-7dce-44fb-ade1-3beee0771220 off the node iruya-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-8b54c35d-7dce-44fb-ade1-3beee0771220
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:28:13.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8678" for this suite.
Apr 27 14:28:41.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:28:41.390: INFO: namespace sched-pred-8678 deletion completed in 28.085507948s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:36.371 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:28:41.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 27 14:28:41.434: INFO: Waiting up to 5m0s for pod "pod-72be4b5d-8602-4c4c-a82c-561dcf0cf22d" in namespace "emptydir-6857" to be "success or failure"
Apr 27 14:28:41.456: INFO: Pod "pod-72be4b5d-8602-4c4c-a82c-561dcf0cf22d": Phase="Pending", Reason="", readiness=false. Elapsed: 22.159333ms
Apr 27 14:28:43.460: INFO: Pod "pod-72be4b5d-8602-4c4c-a82c-561dcf0cf22d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026366361s
Apr 27 14:28:45.465: INFO: Pod "pod-72be4b5d-8602-4c4c-a82c-561dcf0cf22d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031308221s
STEP: Saw pod success
Apr 27 14:28:45.465: INFO: Pod "pod-72be4b5d-8602-4c4c-a82c-561dcf0cf22d" satisfied condition "success or failure"
Apr 27 14:28:45.468: INFO: Trying to get logs from node iruya-worker2 pod pod-72be4b5d-8602-4c4c-a82c-561dcf0cf22d container test-container:
STEP: delete the pod
Apr 27 14:28:45.506: INFO: Waiting for pod pod-72be4b5d-8602-4c4c-a82c-561dcf0cf22d to disappear
Apr 27 14:28:45.520: INFO: Pod pod-72be4b5d-8602-4c4c-a82c-561dcf0cf22d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:28:45.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6857" for this suite.
Apr 27 14:28:51.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:28:51.658: INFO: namespace emptydir-6857 deletion completed in 6.116543727s
• [SLOW TEST:10.268 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:28:51.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3433.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3433.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3433.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3433.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3433.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3433.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3433.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3433.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3433.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3433.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3433.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 63.225.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.225.63_udp@PTR;check="$$(dig +tcp +noall +answer +search 63.225.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.225.63_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3433.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3433.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3433.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3433.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3433.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3433.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3433.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3433.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3433.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3433.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3433.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 63.225.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.225.63_udp@PTR;check="$$(dig +tcp +noall +answer +search 63.225.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.225.63_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 27 14:28:57.818: INFO: Unable to read wheezy_udp@dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:28:57.821: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:28:57.823: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:28:57.827: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:28:57.848: INFO: Unable to read jessie_udp@dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:28:57.851: INFO: Unable to read jessie_tcp@dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:28:57.854: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:28:57.857: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:28:57.874: INFO: Lookups using dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85 failed for: [wheezy_udp@dns-test-service.dns-3433.svc.cluster.local wheezy_tcp@dns-test-service.dns-3433.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local jessie_udp@dns-test-service.dns-3433.svc.cluster.local jessie_tcp@dns-test-service.dns-3433.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local]
Apr 27 14:29:02.878: INFO: Unable to read wheezy_udp@dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:02.882: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:02.886: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:02.889: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:02.909: INFO: Unable to read jessie_udp@dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:02.911: INFO: Unable to read jessie_tcp@dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:02.914: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:02.917: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:02.934: INFO: Lookups using dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85 failed for: [wheezy_udp@dns-test-service.dns-3433.svc.cluster.local wheezy_tcp@dns-test-service.dns-3433.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local jessie_udp@dns-test-service.dns-3433.svc.cluster.local jessie_tcp@dns-test-service.dns-3433.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local]
Apr 27 14:29:07.879: INFO: Unable to read wheezy_udp@dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:07.884: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:07.887: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:07.891: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:07.911: INFO: Unable to read jessie_udp@dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:07.913: INFO: Unable to read jessie_tcp@dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:07.916: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:07.919: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:07.937: INFO: Lookups using dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85 failed for: [wheezy_udp@dns-test-service.dns-3433.svc.cluster.local wheezy_tcp@dns-test-service.dns-3433.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local jessie_udp@dns-test-service.dns-3433.svc.cluster.local jessie_tcp@dns-test-service.dns-3433.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local]
Apr 27 14:29:12.903: INFO: Unable to read wheezy_udp@dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:12.906: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:12.909: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:12.912: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:12.933: INFO: Unable to read jessie_udp@dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:12.935: INFO: Unable to read jessie_tcp@dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:12.938: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:12.941: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:12.979: INFO: Lookups using dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85 failed for: [wheezy_udp@dns-test-service.dns-3433.svc.cluster.local wheezy_tcp@dns-test-service.dns-3433.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local jessie_udp@dns-test-service.dns-3433.svc.cluster.local jessie_tcp@dns-test-service.dns-3433.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local]
Apr 27 14:29:17.879: INFO: Unable to read wheezy_udp@dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:17.883: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:17.887: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:17.890: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:17.913: INFO: Unable to read jessie_udp@dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:17.916: INFO: Unable to read jessie_tcp@dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:17.920: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:17.923: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85)
Apr 27 14:29:17.943: INFO: Lookups using dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85 failed for: [wheezy_udp@dns-test-service.dns-3433.svc.cluster.local wheezy_tcp@dns-test-service.dns-3433.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local
wheezy_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local jessie_udp@dns-test-service.dns-3433.svc.cluster.local jessie_tcp@dns-test-service.dns-3433.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local] Apr 27 14:29:22.879: INFO: Unable to read wheezy_udp@dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85) Apr 27 14:29:22.883: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85) Apr 27 14:29:22.887: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85) Apr 27 14:29:22.891: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85) Apr 27 14:29:22.924: INFO: Unable to read jessie_udp@dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85) Apr 27 14:29:22.927: INFO: Unable to read jessie_tcp@dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85) Apr 27 14:29:22.930: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85) Apr 27 14:29:22.933: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local from pod dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85: the server could not find the requested resource (get pods dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85) Apr 27 14:29:22.972: INFO: Lookups using dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85 failed for: [wheezy_udp@dns-test-service.dns-3433.svc.cluster.local wheezy_tcp@dns-test-service.dns-3433.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local jessie_udp@dns-test-service.dns-3433.svc.cluster.local jessie_tcp@dns-test-service.dns-3433.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3433.svc.cluster.local] Apr 27 14:29:27.940: INFO: DNS probes using dns-3433/dns-test-8721b51a-f99e-41e3-82ef-9ef4bc14aa85 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:29:28.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3433" for this suite. 
Apr 27 14:29:34.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:29:34.707: INFO: namespace dns-3433 deletion completed in 6.084269894s • [SLOW TEST:43.049 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:29:34.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Apr 27 14:29:34.820: INFO: Pod name pod-release: Found 0 pods out of 1 Apr 27 14:29:39.824: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:29:40.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3243" for this suite. 
Apr 27 14:29:46.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:29:47.021: INFO: namespace replication-controller-3243 deletion completed in 6.165426826s • [SLOW TEST:12.313 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:29:47.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 27 14:29:51.411: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-622f3c29-b2f0-400c-aa34-236b9ebfbcde,GenerateName:,Namespace:events-6333,SelfLink:/api/v1/namespaces/events-6333/pods/send-events-622f3c29-b2f0-400c-aa34-236b9ebfbcde,UID:c801f374-874d-46bd-8746-470c63f6ba4d,ResourceVersion:7732789,Generation:0,CreationTimestamp:2020-04-27 14:29:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
foo,time: 151155678,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9857v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9857v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-9857v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001db6480} {node.kubernetes.io/unreachable Exists NoExecute 0xc001db64a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:29:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:29:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:29:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:29:47 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.174,StartTime:2020-04-27 14:29:47 +0000 
UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-04-27 14:29:49 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://4e8747ff0193ec2df81f8b2030f9196f54ac64dc96f25e6f90ce5f9c464a1799}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Apr 27 14:29:53.416: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 27 14:29:55.421: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:29:55.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6333" for this suite. Apr 27 14:30:33.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:30:33.586: INFO: namespace events-6333 deletion completed in 38.096144886s • [SLOW TEST:46.564 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client 
Apr 27 14:30:33.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-cb20601c-fc48-472f-bd3a-b05f8041ba72 STEP: Creating secret with name s-test-opt-upd-00ac54bd-e769-4687-90d1-3d24bc6b11f0 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-cb20601c-fc48-472f-bd3a-b05f8041ba72 STEP: Updating secret s-test-opt-upd-00ac54bd-e769-4687-90d1-3d24bc6b11f0 STEP: Creating secret with name s-test-opt-create-f7dd2b8d-2c9c-4d4d-beec-98cbad53c512 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:31:48.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3649" for this suite. 
Apr 27 14:32:10.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:32:10.231: INFO: namespace projected-3649 deletion completed in 22.091946359s • [SLOW TEST:96.644 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:32:10.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 27 14:32:14.425: INFO: Waiting up to 5m0s for pod "client-envvars-c2913bd7-faac-40d7-a3d4-b71a44adec0f" in namespace "pods-7841" to be "success or failure" Apr 27 14:32:14.470: INFO: Pod "client-envvars-c2913bd7-faac-40d7-a3d4-b71a44adec0f": Phase="Pending", Reason="", readiness=false. Elapsed: 45.290603ms Apr 27 14:32:16.474: INFO: Pod "client-envvars-c2913bd7-faac-40d7-a3d4-b71a44adec0f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.049096043s Apr 27 14:32:18.478: INFO: Pod "client-envvars-c2913bd7-faac-40d7-a3d4-b71a44adec0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053333221s STEP: Saw pod success Apr 27 14:32:18.479: INFO: Pod "client-envvars-c2913bd7-faac-40d7-a3d4-b71a44adec0f" satisfied condition "success or failure" Apr 27 14:32:18.481: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-c2913bd7-faac-40d7-a3d4-b71a44adec0f container env3cont: STEP: delete the pod Apr 27 14:32:18.504: INFO: Waiting for pod client-envvars-c2913bd7-faac-40d7-a3d4-b71a44adec0f to disappear Apr 27 14:32:18.509: INFO: Pod client-envvars-c2913bd7-faac-40d7-a3d4-b71a44adec0f no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:32:18.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7841" for this suite. Apr 27 14:32:56.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:32:56.600: INFO: namespace pods-7841 deletion completed in 38.087133306s • [SLOW TEST:46.368 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:32:56.600: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-3f3aee44-556a-4f16-8099-e8985cedb4db in namespace container-probe-8648 Apr 27 14:33:00.713: INFO: Started pod busybox-3f3aee44-556a-4f16-8099-e8985cedb4db in namespace container-probe-8648 STEP: checking the pod's current state and verifying that restartCount is present Apr 27 14:33:00.716: INFO: Initial restart count of pod busybox-3f3aee44-556a-4f16-8099-e8985cedb4db is 0 Apr 27 14:33:50.989: INFO: Restart count of pod container-probe-8648/busybox-3f3aee44-556a-4f16-8099-e8985cedb4db is now 1 (50.272761526s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:33:51.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8648" for this suite. 
Apr 27 14:33:57.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:33:57.142: INFO: namespace container-probe-8648 deletion completed in 6.133466324s • [SLOW TEST:60.542 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:33:57.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-d698ea53-e5c0-454a-a095-ab17d14e1c8b STEP: Creating a pod to test consume configMaps Apr 27 14:33:57.953: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1ea72062-9343-4bf7-b14c-73363ee36cdb" in namespace "projected-5973" to be "success or failure" Apr 27 14:33:57.985: INFO: Pod "pod-projected-configmaps-1ea72062-9343-4bf7-b14c-73363ee36cdb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 31.95667ms Apr 27 14:33:59.988: INFO: Pod "pod-projected-configmaps-1ea72062-9343-4bf7-b14c-73363ee36cdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035790409s Apr 27 14:34:01.992: INFO: Pod "pod-projected-configmaps-1ea72062-9343-4bf7-b14c-73363ee36cdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039925073s STEP: Saw pod success Apr 27 14:34:01.993: INFO: Pod "pod-projected-configmaps-1ea72062-9343-4bf7-b14c-73363ee36cdb" satisfied condition "success or failure" Apr 27 14:34:01.996: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-1ea72062-9343-4bf7-b14c-73363ee36cdb container projected-configmap-volume-test: STEP: delete the pod Apr 27 14:34:02.162: INFO: Waiting for pod pod-projected-configmaps-1ea72062-9343-4bf7-b14c-73363ee36cdb to disappear Apr 27 14:34:02.282: INFO: Pod pod-projected-configmaps-1ea72062-9343-4bf7-b14c-73363ee36cdb no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:34:02.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5973" for this suite. 
Apr 27 14:34:08.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:34:08.393: INFO: namespace projected-5973 deletion completed in 6.106649042s • [SLOW TEST:11.250 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:34:08.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 27 14:34:08.503: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 5.33625ms) Apr 27 14:34:08.507: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.352722ms) Apr 27 14:34:08.510: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.821653ms) Apr 27 14:34:08.512: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.791108ms) Apr 27 14:34:08.516: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.776665ms) Apr 27 14:34:08.520: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.255595ms) Apr 27 14:34:08.522: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.592999ms) Apr 27 14:34:08.525: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.817835ms) Apr 27 14:34:08.528: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.157321ms) Apr 27 14:34:08.531: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.122874ms) Apr 27 14:34:08.534: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.686392ms) Apr 27 14:34:08.537: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.675432ms) Apr 27 14:34:08.540: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.943699ms) Apr 27 14:34:08.543: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.442455ms) Apr 27 14:34:08.546: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.913952ms) Apr 27 14:34:08.549: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.996804ms) Apr 27 14:34:08.552: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.813238ms) Apr 27 14:34:08.555: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.18548ms) Apr 27 14:34:08.558: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.063549ms) Apr 27 14:34:08.562: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.475477ms)
[AfterEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:34:08.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8782" for this suite.
Apr 27 14:34:14.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:34:14.656: INFO: namespace proxy-8782 deletion completed in 6.090806548s
• [SLOW TEST:6.263 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:34:14.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 27 14:34:17.782: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:34:17.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3205" for this suite.
Apr 27 14:34:23.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:34:24.035: INFO: namespace container-runtime-3205 deletion completed in 6.08980969s
• [SLOW TEST:9.378 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:34:24.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 27 14:34:24.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:34:28.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-560" for this suite.
Apr 27 14:35:06.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:35:06.350: INFO: namespace pods-560 deletion completed in 38.092963076s
• [SLOW TEST:42.314 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:35:06.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Apr 27 14:35:06.425: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:35:12.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7466" for this suite.
Apr 27 14:35:35.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:35:35.107: INFO: namespace init-container-7466 deletion completed in 22.100925281s
• [SLOW TEST:28.757 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:35:35.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 27 14:35:35.150: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7de6709b-d4f7-4e1a-a309-2acc65ecf9e3" in namespace "downward-api-6195" to be "success or failure"
Apr 27 14:35:35.200: INFO: Pod "downwardapi-volume-7de6709b-d4f7-4e1a-a309-2acc65ecf9e3": Phase="Pending", Reason="", readiness=false. Elapsed: 49.912541ms
Apr 27 14:35:37.204: INFO: Pod "downwardapi-volume-7de6709b-d4f7-4e1a-a309-2acc65ecf9e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054339853s
Apr 27 14:35:39.208: INFO: Pod "downwardapi-volume-7de6709b-d4f7-4e1a-a309-2acc65ecf9e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058420696s
STEP: Saw pod success
Apr 27 14:35:39.208: INFO: Pod "downwardapi-volume-7de6709b-d4f7-4e1a-a309-2acc65ecf9e3" satisfied condition "success or failure"
Apr 27 14:35:39.211: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-7de6709b-d4f7-4e1a-a309-2acc65ecf9e3 container client-container:
STEP: delete the pod
Apr 27 14:35:39.249: INFO: Waiting for pod downwardapi-volume-7de6709b-d4f7-4e1a-a309-2acc65ecf9e3 to disappear
Apr 27 14:35:39.271: INFO: Pod downwardapi-volume-7de6709b-d4f7-4e1a-a309-2acc65ecf9e3 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:35:39.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6195" for this suite.
Apr 27 14:35:45.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:35:45.359: INFO: namespace downward-api-6195 deletion completed in 6.084478401s
• [SLOW TEST:10.252 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:35:45.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Apr 27 14:35:45.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6052'
Apr 27 14:35:48.314: INFO: stderr: ""
Apr 27 14:35:48.314: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Apr 27 14:35:49.319: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 14:35:49.319: INFO: Found 0 / 1
Apr 27 14:35:50.318: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 14:35:50.318: INFO: Found 0 / 1
Apr 27 14:35:51.319: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 14:35:51.319: INFO: Found 0 / 1
Apr 27 14:35:52.319: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 14:35:52.320: INFO: Found 1 / 1
Apr 27 14:35:52.320: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Apr 27 14:35:52.323: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 14:35:52.323: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 27 14:35:52.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-ns8zg --namespace=kubectl-6052 -p {"metadata":{"annotations":{"x":"y"}}}'
Apr 27 14:35:52.419: INFO: stderr: ""
Apr 27 14:35:52.419: INFO: stdout: "pod/redis-master-ns8zg patched\n"
STEP: checking annotations
Apr 27 14:35:52.425: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 14:35:52.425: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:35:52.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6052" for this suite.
Apr 27 14:36:14.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:36:14.523: INFO: namespace kubectl-6052 deletion completed in 22.094259245s
• [SLOW TEST:29.163 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl patch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:36:14.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-be08196b-34f3-408f-b038-ec8486f3b4e2 in namespace container-probe-9365
Apr 27 14:36:18.662: INFO: Started pod liveness-be08196b-34f3-408f-b038-ec8486f3b4e2 in namespace container-probe-9365
STEP: checking the pod's current state and verifying that restartCount is present
Apr 27 14:36:18.665: INFO: Initial restart count of pod liveness-be08196b-34f3-408f-b038-ec8486f3b4e2 is 0
Apr 27 14:36:34.700: INFO: Restart count of pod container-probe-9365/liveness-be08196b-34f3-408f-b038-ec8486f3b4e2 is now 1 (16.034938583s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:36:34.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9365" for this suite.
Apr 27 14:36:40.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:36:41.005: INFO: namespace container-probe-9365 deletion completed in 6.260604357s
• [SLOW TEST:26.482 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:36:41.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 27 14:36:41.118: INFO: Waiting up to 5m0s for pod "pod-99862f42-d81a-4230-848f-ae211a5abd0e" in namespace "emptydir-3309" to be "success or failure"
Apr 27 14:36:41.146: INFO: Pod "pod-99862f42-d81a-4230-848f-ae211a5abd0e": Phase="Pending", Reason="", readiness=false. Elapsed: 28.253945ms
Apr 27 14:36:43.249: INFO: Pod "pod-99862f42-d81a-4230-848f-ae211a5abd0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130835458s
Apr 27 14:36:45.253: INFO: Pod "pod-99862f42-d81a-4230-848f-ae211a5abd0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.134810368s
STEP: Saw pod success
Apr 27 14:36:45.253: INFO: Pod "pod-99862f42-d81a-4230-848f-ae211a5abd0e" satisfied condition "success or failure"
Apr 27 14:36:45.256: INFO: Trying to get logs from node iruya-worker pod pod-99862f42-d81a-4230-848f-ae211a5abd0e container test-container:
STEP: delete the pod
Apr 27 14:36:45.292: INFO: Waiting for pod pod-99862f42-d81a-4230-848f-ae211a5abd0e to disappear
Apr 27 14:36:45.345: INFO: Pod pod-99862f42-d81a-4230-848f-ae211a5abd0e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:36:45.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3309" for this suite.
Apr 27 14:36:51.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:36:51.437: INFO: namespace emptydir-3309 deletion completed in 6.089189013s
• [SLOW TEST:10.432 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:36:51.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:36:57.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2220" for this suite.
Apr 27 14:37:03.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:37:03.251: INFO: namespace watch-2220 deletion completed in 6.206374312s
• [SLOW TEST:11.813 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:37:03.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 27 14:37:06.343: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:37:06.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2232" for this suite.
Apr 27 14:37:12.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:37:12.616: INFO: namespace container-runtime-2232 deletion completed in 6.106433345s
• [SLOW TEST:9.365 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:37:12.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 27 14:37:20.763: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 27 14:37:20.774: INFO: Pod pod-with-poststart-http-hook still exists
Apr 27 14:37:22.775: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 27 14:37:22.779: INFO: Pod pod-with-poststart-http-hook still exists
Apr 27 14:37:24.775: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 27 14:37:24.779: INFO: Pod pod-with-poststart-http-hook still exists
Apr 27 14:37:26.775: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 27 14:37:26.779: INFO: Pod pod-with-poststart-http-hook still exists
Apr 27 14:37:28.775: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 27 14:37:28.778: INFO: Pod pod-with-poststart-http-hook still exists
Apr 27 14:37:30.775: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 27 14:37:30.779: INFO: Pod pod-with-poststart-http-hook still exists
Apr 27 14:37:32.775: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 27 14:37:32.778: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:37:32.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5586" for this suite.
Apr 27 14:37:54.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:37:54.891: INFO: namespace container-lifecycle-hook-5586 deletion completed in 22.108391968s
• [SLOW TEST:42.274 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:37:54.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:37:55.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2839" for this suite.
Apr 27 14:38:01.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:38:01.177: INFO: namespace kubelet-test-2839 deletion completed in 6.089990191s
• [SLOW TEST:6.286 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:38:01.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 27 14:38:01.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:38:05.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2724" for this suite.
Apr 27 14:38:43.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:38:43.394: INFO: namespace pods-2724 deletion completed in 38.123178997s
• [SLOW TEST:42.217 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:38:43.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 27 14:38:43.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-9135'
Apr 27 14:38:43.572: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 27 14:38:43.572: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Apr 27 14:38:43.584: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-7mbfb]
Apr 27 14:38:43.584: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-7mbfb" in namespace "kubectl-9135" to be "running and ready"
Apr 27 14:38:43.610: INFO: Pod "e2e-test-nginx-rc-7mbfb": Phase="Pending", Reason="", readiness=false. Elapsed: 25.695918ms
Apr 27 14:38:45.651: INFO: Pod "e2e-test-nginx-rc-7mbfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06707538s
Apr 27 14:38:47.656: INFO: Pod "e2e-test-nginx-rc-7mbfb": Phase="Running", Reason="", readiness=true. Elapsed: 4.071490824s
Apr 27 14:38:47.656: INFO: Pod "e2e-test-nginx-rc-7mbfb" satisfied condition "running and ready"
Apr 27 14:38:47.656: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-7mbfb]
Apr 27 14:38:47.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-9135'
Apr 27 14:38:47.780: INFO: stderr: ""
Apr 27 14:38:47.780: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Apr 27 14:38:47.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-9135'
Apr 27 14:38:47.888: INFO: stderr: ""
Apr 27 14:38:47.888: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:38:47.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9135" for this suite.
Apr 27 14:38:53.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:38:53.988: INFO: namespace kubectl-9135 deletion completed in 6.096977251s
• [SLOW TEST:10.594 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:38:53.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Apr 27 14:38:54.043: INFO: Waiting up to 5m0s for pod "client-containers-28958255-8cef-4c3f-84c2-4a60ec9bd85d" in namespace "containers-2223" to be "success or failure"
Apr 27 14:38:54.046: INFO: Pod "client-containers-28958255-8cef-4c3f-84c2-4a60ec9bd85d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.843438ms
Apr 27 14:38:56.059: INFO: Pod "client-containers-28958255-8cef-4c3f-84c2-4a60ec9bd85d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016253243s
Apr 27 14:38:58.063: INFO: Pod "client-containers-28958255-8cef-4c3f-84c2-4a60ec9bd85d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020144098s
Apr 27 14:39:00.071: INFO: Pod "client-containers-28958255-8cef-4c3f-84c2-4a60ec9bd85d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027900596s
STEP: Saw pod success
Apr 27 14:39:00.071: INFO: Pod "client-containers-28958255-8cef-4c3f-84c2-4a60ec9bd85d" satisfied condition "success or failure"
Apr 27 14:39:00.073: INFO: Trying to get logs from node iruya-worker2 pod client-containers-28958255-8cef-4c3f-84c2-4a60ec9bd85d container test-container:
STEP: delete the pod
Apr 27 14:39:00.111: INFO: Waiting for pod client-containers-28958255-8cef-4c3f-84c2-4a60ec9bd85d to disappear
Apr 27 14:39:00.123: INFO: Pod client-containers-28958255-8cef-4c3f-84c2-4a60ec9bd85d no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:39:00.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2223" for this suite.
Apr 27 14:39:06.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:39:06.210: INFO: namespace containers-2223 deletion completed in 6.083748591s
• [SLOW TEST:12.222 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:39:06.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8014
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 27 14:39:06.262: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 27 14:39:30.386: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.184 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8014 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 27 14:39:30.386: INFO: >>> kubeConfig: /root/.kube/config
I0427 14:39:30.428230 6 log.go:172] (0xc003523080) (0xc0023428c0) Create stream
I0427 14:39:30.428286 6 log.go:172] (0xc003523080) (0xc0023428c0) Stream added, broadcasting: 1
I0427 14:39:30.431736 6 log.go:172] (0xc003523080) Reply frame received for 1
I0427 14:39:30.431779 6 log.go:172] (0xc003523080) (0xc002342960) Create stream
I0427 14:39:30.431797 6 log.go:172] (0xc003523080) (0xc002342960) Stream added, broadcasting: 3
I0427 14:39:30.432918 6 log.go:172] (0xc003523080) Reply frame received for 3
I0427 14:39:30.432949 6 log.go:172] (0xc003523080) (0xc002342a00) Create stream
I0427 14:39:30.432960 6 log.go:172] (0xc003523080) (0xc002342a00) Stream added, broadcasting: 5
I0427 14:39:30.434216 6 log.go:172] (0xc003523080) Reply frame received for 5
I0427 14:39:31.522416 6 log.go:172] (0xc003523080) Data frame received for 3
I0427 14:39:31.522539 6 log.go:172] (0xc002342960) (3) Data frame handling
I0427 14:39:31.522568 6 log.go:172] (0xc002342960) (3) Data frame sent
I0427 14:39:31.522580 6
log.go:172] (0xc003523080) Data frame received for 3 I0427 14:39:31.522604 6 log.go:172] (0xc002342960) (3) Data frame handling I0427 14:39:31.522649 6 log.go:172] (0xc003523080) Data frame received for 5 I0427 14:39:31.522706 6 log.go:172] (0xc002342a00) (5) Data frame handling I0427 14:39:31.525792 6 log.go:172] (0xc003523080) Data frame received for 1 I0427 14:39:31.525828 6 log.go:172] (0xc0023428c0) (1) Data frame handling I0427 14:39:31.525846 6 log.go:172] (0xc0023428c0) (1) Data frame sent I0427 14:39:31.525868 6 log.go:172] (0xc003523080) (0xc0023428c0) Stream removed, broadcasting: 1 I0427 14:39:31.525897 6 log.go:172] (0xc003523080) Go away received I0427 14:39:31.526125 6 log.go:172] (0xc003523080) (0xc0023428c0) Stream removed, broadcasting: 1 I0427 14:39:31.526153 6 log.go:172] (0xc003523080) (0xc002342960) Stream removed, broadcasting: 3 I0427 14:39:31.526171 6 log.go:172] (0xc003523080) (0xc002342a00) Stream removed, broadcasting: 5 Apr 27 14:39:31.526: INFO: Found all expected endpoints: [netserver-0] Apr 27 14:39:31.530: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.144 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8014 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 27 14:39:31.530: INFO: >>> kubeConfig: /root/.kube/config I0427 14:39:31.562276 6 log.go:172] (0xc002b12630) (0xc002f7a320) Create stream I0427 14:39:31.562308 6 log.go:172] (0xc002b12630) (0xc002f7a320) Stream added, broadcasting: 1 I0427 14:39:31.564476 6 log.go:172] (0xc002b12630) Reply frame received for 1 I0427 14:39:31.564507 6 log.go:172] (0xc002b12630) (0xc002f7a3c0) Create stream I0427 14:39:31.564517 6 log.go:172] (0xc002b12630) (0xc002f7a3c0) Stream added, broadcasting: 3 I0427 14:39:31.565341 6 log.go:172] (0xc002b12630) Reply frame received for 3 I0427 14:39:31.565381 6 log.go:172] (0xc002b12630) (0xc001a5ee60) Create stream I0427 14:39:31.565398 6 
log.go:172] (0xc002b12630) (0xc001a5ee60) Stream added, broadcasting: 5 I0427 14:39:31.566278 6 log.go:172] (0xc002b12630) Reply frame received for 5 I0427 14:39:32.642875 6 log.go:172] (0xc002b12630) Data frame received for 3 I0427 14:39:32.642921 6 log.go:172] (0xc002f7a3c0) (3) Data frame handling I0427 14:39:32.642973 6 log.go:172] (0xc002f7a3c0) (3) Data frame sent I0427 14:39:32.643006 6 log.go:172] (0xc002b12630) Data frame received for 3 I0427 14:39:32.643021 6 log.go:172] (0xc002f7a3c0) (3) Data frame handling I0427 14:39:32.643458 6 log.go:172] (0xc002b12630) Data frame received for 5 I0427 14:39:32.643489 6 log.go:172] (0xc001a5ee60) (5) Data frame handling I0427 14:39:32.645427 6 log.go:172] (0xc002b12630) Data frame received for 1 I0427 14:39:32.645468 6 log.go:172] (0xc002f7a320) (1) Data frame handling I0427 14:39:32.645495 6 log.go:172] (0xc002f7a320) (1) Data frame sent I0427 14:39:32.645516 6 log.go:172] (0xc002b12630) (0xc002f7a320) Stream removed, broadcasting: 1 I0427 14:39:32.645610 6 log.go:172] (0xc002b12630) (0xc002f7a320) Stream removed, broadcasting: 1 I0427 14:39:32.645633 6 log.go:172] (0xc002b12630) (0xc002f7a3c0) Stream removed, broadcasting: 3 I0427 14:39:32.645645 6 log.go:172] (0xc002b12630) (0xc001a5ee60) Stream removed, broadcasting: 5 Apr 27 14:39:32.645: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:39:32.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0427 14:39:32.646055 6 log.go:172] (0xc002b12630) Go away received STEP: Destroying namespace "pod-network-test-8014" for this suite. 
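The node-to-pod UDP checks above work by exec'ing into the `host-test-container-pod` and piping a hostname query through `nc`, as shown in the logged `ExecWithOptions` command. A minimal sketch of how that probe command is assembled for a given endpoint (the helper name is illustrative, not the framework's):

```python
def udp_probe_command(pod_ip: str, port: int) -> list:
    """Build the shell command seen in the ExecWithOptions log entries:
    send the string 'hostName' over UDP and filter out blank lines."""
    shell = f"echo hostName | nc -w 1 -u {pod_ip} {port} | grep -v '^\\s*$'"
    return ["/bin/sh", "-c", shell]
```

Building the command for `10.244.2.184:8081` reproduces exactly the command string recorded in the log above.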
Apr 27 14:39:48.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:39:48.760: INFO: namespace pod-network-test-8014 deletion completed in 16.109131944s • [SLOW TEST:42.548 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:39:48.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 27 14:39:48.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine 
--labels=run=e2e-test-nginx-pod --namespace=kubectl-4863' Apr 27 14:39:48.959: INFO: stderr: "" Apr 27 14:39:48.959: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Apr 27 14:39:54.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-4863 -o json' Apr 27 14:39:54.112: INFO: stderr: "" Apr 27 14:39:54.112: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-27T14:39:48Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-4863\",\n \"resourceVersion\": \"7734678\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-4863/pods/e2e-test-nginx-pod\",\n \"uid\": \"a0a0b1d1-54ed-45eb-a845-b3cb2107344f\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-fwxzl\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n 
\"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-fwxzl\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-fwxzl\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-27T14:39:49Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-27T14:39:52Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-27T14:39:52Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-27T14:39:48Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://f883713094c2debf8ed5be81589d850ff3f18c0ad577eb643fe338bf52b092d2\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-27T14:39:51Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.6\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.185\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-27T14:39:49Z\"\n }\n}\n" STEP: replace the image in the pod Apr 27 14:39:54.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4863' Apr 27 14:39:54.365: INFO: stderr: "" Apr 27 14:39:54.365: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Apr 27 14:39:54.398: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4863' Apr 27 14:40:02.181: INFO: stderr: "" Apr 27 14:40:02.181: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:40:02.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4863" for this suite. Apr 27 14:40:08.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:40:08.270: INFO: namespace kubectl-4863 deletion completed in 6.086425019s • [SLOW TEST:19.510 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:40:08.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 27 14:40:08.370: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 27 14:40:13.375: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 27 14:40:13.375: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 27 14:40:13.401: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-3791,SelfLink:/apis/apps/v1/namespaces/deployment-3791/deployments/test-cleanup-deployment,UID:79c3119c-a8ec-4e88-af38-7edc1ce9ce66,ResourceVersion:7734758,Generation:1,CreationTimestamp:2020-04-27 14:40:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Apr 27 14:40:13.407: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-3791,SelfLink:/apis/apps/v1/namespaces/deployment-3791/replicasets/test-cleanup-deployment-55bbcbc84c,UID:c7634659-e99b-4403-9555-71713d730da8,ResourceVersion:7734760,Generation:1,CreationTimestamp:2020-04-27 14:40:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 
55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 79c3119c-a8ec-4e88-af38-7edc1ce9ce66 0xc0027fb967 0xc0027fb968}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 27 14:40:13.407: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 27 14:40:13.408: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-3791,SelfLink:/apis/apps/v1/namespaces/deployment-3791/replicasets/test-cleanup-controller,UID:fa14940b-5bf0-4664-a91f-3a2d8fd703e4,ResourceVersion:7734759,Generation:1,CreationTimestamp:2020-04-27 14:40:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 79c3119c-a8ec-4e88-af38-7edc1ce9ce66 0xc0027fb87f 0xc0027fb890}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 27 14:40:13.428: INFO: Pod "test-cleanup-controller-mk6f7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-mk6f7,GenerateName:test-cleanup-controller-,Namespace:deployment-3791,SelfLink:/api/v1/namespaces/deployment-3791/pods/test-cleanup-controller-mk6f7,UID:48fd6ad7-e106-4a2b-bb83-52101516b7ce,ResourceVersion:7734752,Generation:0,CreationTimestamp:2020-04-27 14:40:08 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller fa14940b-5bf0-4664-a91f-3a2d8fd703e4 0xc003195437 0xc003195438}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slg7x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slg7x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-slg7x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031954b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031954d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:40:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:40:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:40:11 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:40:08 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.146,StartTime:2020-04-27 14:40:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-27 14:40:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://06927a5038168a024133a8d747bb2b22b0f4679088db68744cc6187e47caac85}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:40:13.428: INFO: Pod "test-cleanup-deployment-55bbcbc84c-sp697" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-sp697,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-3791,SelfLink:/api/v1/namespaces/deployment-3791/pods/test-cleanup-deployment-55bbcbc84c-sp697,UID:ce981528-42f2-4851-98de-8cb168079302,ResourceVersion:7734765,Generation:0,CreationTimestamp:2020-04-27 14:40:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c c7634659-e99b-4403-9555-71713d730da8 0xc0031955c7 0xc0031955c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slg7x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slg7x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-slg7x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003195640} {node.kubernetes.io/unreachable Exists NoExecute 0xc003195660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:40:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:40:13.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3791" for this suite. 
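The deployment dump above shows the test creating "test-cleanup-deployment" with RevisionHistoryLimit:*0, so every superseded ReplicaSet (here, "test-cleanup-controller") becomes eligible for pruning. The pruning rule can be sketched independently of the controller code; the function and tuple shape below are illustrative, not the controller's actual API:

```python
def old_replicasets_to_delete(replicasets, history_limit):
    """Given old (non-current) ReplicaSets as (name, revision) pairs,
    keep only the `history_limit` most recent revisions and return
    the names that would be deleted, oldest first."""
    by_revision = sorted(replicasets, key=lambda rs: rs[1])
    excess = len(by_revision) - history_limit
    return [name for name, _ in by_revision[:max(excess, 0)]]
```

With a limit of 0, as in this test, the list returned is every old ReplicaSet, which is what "Waiting for deployment test-cleanup-deployment history to be cleaned up" verifies.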
Apr 27 14:40:19.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:40:19.593: INFO: namespace deployment-3791 deletion completed in 6.106789313s
• [SLOW TEST:11.322 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:40:19.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-4282
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4282 to expose endpoints map[]
Apr 27 14:40:19.681: INFO: Get endpoints failed (14.171828ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Apr 27 14:40:20.685: INFO: successfully validated that service multi-endpoint-test in namespace services-4282 exposes endpoints map[] (1.017938259s elapsed)
STEP: Creating pod pod1 in namespace services-4282
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4282 to expose endpoints map[pod1:[100]]
Apr 27 14:40:23.725: INFO: successfully validated that service multi-endpoint-test in namespace services-4282 exposes endpoints map[pod1:[100]] (3.032550367s elapsed)
STEP: Creating pod pod2 in namespace services-4282
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4282 to expose endpoints map[pod1:[100] pod2:[101]]
Apr 27 14:40:27.839: INFO: successfully validated that service multi-endpoint-test in namespace services-4282 exposes endpoints map[pod1:[100] pod2:[101]] (4.110348224s elapsed)
STEP: Deleting pod pod1 in namespace services-4282
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4282 to expose endpoints map[pod2:[101]]
Apr 27 14:40:28.918: INFO: successfully validated that service multi-endpoint-test in namespace services-4282 exposes endpoints map[pod2:[101]] (1.074003765s elapsed)
STEP: Deleting pod pod2 in namespace services-4282
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4282 to expose endpoints map[]
Apr 27 14:40:29.938: INFO: successfully validated that service multi-endpoint-test in namespace services-4282 exposes endpoints map[] (1.015055398s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:40:30.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4282" for this suite.
Apr 27 14:40:52.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:40:52.148: INFO: namespace services-4282 deletion completed in 22.125914099s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:32.555 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:40:52.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-432a410c-147b-4907-ad24-9f0a9529ff92
STEP: Creating a pod to test consume secrets
Apr 27 14:40:52.285: INFO: Waiting up to 5m0s for pod "pod-secrets-5363bfc3-da3a-44d2-92e3-73d34728ee37" in namespace "secrets-9659" to be "success or failure"
Apr 27 14:40:52.293: INFO: Pod "pod-secrets-5363bfc3-da3a-44d2-92e3-73d34728ee37": Phase="Pending", Reason="", readiness=false. Elapsed: 7.903676ms
Apr 27 14:40:54.297: INFO: Pod "pod-secrets-5363bfc3-da3a-44d2-92e3-73d34728ee37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011890167s
Apr 27 14:40:56.301: INFO: Pod "pod-secrets-5363bfc3-da3a-44d2-92e3-73d34728ee37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016069482s
STEP: Saw pod success
Apr 27 14:40:56.302: INFO: Pod "pod-secrets-5363bfc3-da3a-44d2-92e3-73d34728ee37" satisfied condition "success or failure"
Apr 27 14:40:56.304: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-5363bfc3-da3a-44d2-92e3-73d34728ee37 container secret-volume-test:
STEP: delete the pod
Apr 27 14:40:56.368: INFO: Waiting for pod pod-secrets-5363bfc3-da3a-44d2-92e3-73d34728ee37 to disappear
Apr 27 14:40:56.383: INFO: Pod pod-secrets-5363bfc3-da3a-44d2-92e3-73d34728ee37 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:40:56.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9659" for this suite.
Apr 27 14:41:02.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:41:02.495: INFO: namespace secrets-9659 deletion completed in 6.108343325s
• [SLOW TEST:10.346 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:41:02.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-55aedb68-03df-4270-b4f2-0a82f30c639b
STEP: Creating a pod to test consume secrets
Apr 27 14:41:02.571: INFO: Waiting up to 5m0s for pod "pod-secrets-1e0b8027-cb4c-4939-a901-366b0d8fbbd2" in namespace "secrets-1584" to be "success or failure"
Apr 27 14:41:02.574: INFO: Pod "pod-secrets-1e0b8027-cb4c-4939-a901-366b0d8fbbd2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.417861ms
Apr 27 14:41:04.581: INFO: Pod "pod-secrets-1e0b8027-cb4c-4939-a901-366b0d8fbbd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010447361s
Apr 27 14:41:06.599: INFO: Pod "pod-secrets-1e0b8027-cb4c-4939-a901-366b0d8fbbd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02812475s
STEP: Saw pod success
Apr 27 14:41:06.599: INFO: Pod "pod-secrets-1e0b8027-cb4c-4939-a901-366b0d8fbbd2" satisfied condition "success or failure"
Apr 27 14:41:06.602: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-1e0b8027-cb4c-4939-a901-366b0d8fbbd2 container secret-volume-test:
STEP: delete the pod
Apr 27 14:41:06.635: INFO: Waiting for pod pod-secrets-1e0b8027-cb4c-4939-a901-366b0d8fbbd2 to disappear
Apr 27 14:41:06.646: INFO: Pod pod-secrets-1e0b8027-cb4c-4939-a901-366b0d8fbbd2 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:41:06.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1584" for this suite.
Apr 27 14:41:12.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:41:12.735: INFO: namespace secrets-1584 deletion completed in 6.085717112s
• [SLOW TEST:10.241 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:41:12.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 27 14:41:12.813: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Apr 27 14:41:12.851: INFO: Number of nodes with available pods: 0
Apr 27 14:41:12.851: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Apr 27 14:41:12.881: INFO: Number of nodes with available pods: 0
Apr 27 14:41:12.881: INFO: Node iruya-worker is running more than one daemon pod
Apr 27 14:41:13.885: INFO: Number of nodes with available pods: 0
Apr 27 14:41:13.885: INFO: Node iruya-worker is running more than one daemon pod
Apr 27 14:41:14.886: INFO: Number of nodes with available pods: 0
Apr 27 14:41:14.886: INFO: Node iruya-worker is running more than one daemon pod
Apr 27 14:41:15.886: INFO: Number of nodes with available pods: 0
Apr 27 14:41:15.886: INFO: Node iruya-worker is running more than one daemon pod
Apr 27 14:41:16.885: INFO: Number of nodes with available pods: 1
Apr 27 14:41:16.886: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Apr 27 14:41:16.931: INFO: Number of nodes with available pods: 1
Apr 27 14:41:16.931: INFO: Number of running nodes: 0, number of available pods: 1
Apr 27 14:41:17.935: INFO: Number of nodes with available pods: 0
Apr 27 14:41:17.935: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Apr 27 14:41:17.995: INFO: Number of nodes with available pods: 0
Apr 27 14:41:17.995: INFO: Node iruya-worker is running more than one daemon pod
Apr 27 14:41:19.000: INFO: Number of nodes with available pods: 0
Apr 27 14:41:19.000: INFO: Node iruya-worker is running more than one daemon pod
Apr 27 14:41:20.000: INFO: Number of nodes with available pods: 0
Apr 27 14:41:20.000: INFO: Node iruya-worker is running more than one daemon pod
Apr 27 14:41:21.000: INFO: Number of nodes with available pods: 0
Apr 27 14:41:21.000: INFO: Node iruya-worker is running more than one daemon pod
Apr 27 14:41:21.999: INFO: Number of nodes with available pods: 0
Apr 27 14:41:21.999: INFO: Node iruya-worker is running more than one daemon pod
Apr 27 14:41:22.999: INFO: Number of nodes with available pods: 0
Apr 27 14:41:22.999: INFO: Node iruya-worker is running more than one daemon pod
Apr 27 14:41:24.000: INFO: Number of nodes with available pods: 1
Apr 27 14:41:24.000: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1083, will wait for the garbage collector to delete the pods
Apr 27 14:41:24.065: INFO: Deleting DaemonSet.extensions daemon-set took: 5.78468ms
Apr 27 14:41:24.366: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.248715ms
Apr 27 14:41:32.269: INFO: Number of nodes with available pods: 0
Apr 27 14:41:32.269: INFO: Number of running nodes: 0, number of available pods: 0
Apr 27 14:41:32.271: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1083/daemonsets","resourceVersion":"7735109"},"items":null}
Apr 27 14:41:32.274: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1083/pods","resourceVersion":"7735109"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:41:32.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1083" for this suite.
Apr 27 14:41:38.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:41:38.432: INFO: namespace daemonsets-1083 deletion completed in 6.115266021s
• [SLOW TEST:25.696 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:41:38.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1212.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1212.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 27 14:41:44.571: INFO: DNS probes using dns-1212/dns-test-2443316d-f27f-46b4-b9be-f51f16e6c796 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:41:44.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1212" for this suite.
Apr 27 14:41:50.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:41:50.726: INFO: namespace dns-1212 deletion completed in 6.106443098s
• [SLOW TEST:12.294 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:41:50.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-cd7d10e9-3634-4700-b454-93c4c18a8431 in namespace container-probe-8876
Apr 27 14:41:54.874: INFO: Started pod test-webserver-cd7d10e9-3634-4700-b454-93c4c18a8431 in namespace container-probe-8876
STEP: checking the pod's current state and verifying that restartCount is present
Apr 27 14:41:54.876: INFO: Initial restart count of pod test-webserver-cd7d10e9-3634-4700-b454-93c4c18a8431 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:45:55.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8876" for this suite.
Apr 27 14:46:02.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:46:02.098: INFO: namespace container-probe-8876 deletion completed in 6.275361323s
• [SLOW TEST:251.371 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:46:02.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Apr 27 14:46:02.175: INFO: namespace kubectl-3683
Apr 27 14:46:02.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3683'
Apr 27 14:46:05.112: INFO: stderr: ""
Apr 27 14:46:05.112: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Apr 27 14:46:06.116: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 14:46:06.116: INFO: Found 0 / 1
Apr 27 14:46:07.247: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 14:46:07.248: INFO: Found 0 / 1
Apr 27 14:46:08.116: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 14:46:08.116: INFO: Found 0 / 1
Apr 27 14:46:09.116: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 14:46:09.116: INFO: Found 1 / 1
Apr 27 14:46:09.116: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 27 14:46:09.121: INFO: Selector matched 1 pods for map[app:redis]
Apr 27 14:46:09.121: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 27 14:46:09.121: INFO: wait on redis-master startup in kubectl-3683
Apr 27 14:46:09.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-bnrd5 redis-master --namespace=kubectl-3683'
Apr 27 14:46:09.233: INFO: stderr: ""
Apr 27 14:46:09.233: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 27 Apr 14:46:08.065 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Apr 14:46:08.065 # Server started, Redis version 3.2.12\n1:M 27 Apr 14:46:08.065 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 27 Apr 14:46:08.065 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Apr 27 14:46:09.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3683'
Apr 27 14:46:09.400: INFO: stderr: ""
Apr 27 14:46:09.400: INFO: stdout: "service/rm2 exposed\n"
Apr 27 14:46:09.408: INFO: Service rm2 in namespace kubectl-3683 found.
STEP: exposing service
Apr 27 14:46:11.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3683'
Apr 27 14:46:11.548: INFO: stderr: ""
Apr 27 14:46:11.548: INFO: stdout: "service/rm3 exposed\n"
Apr 27 14:46:11.559: INFO: Service rm3 in namespace kubectl-3683 found.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:46:13.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3683" for this suite.
Apr 27 14:46:35.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:46:35.663: INFO: namespace kubectl-3683 deletion completed in 22.092909718s
• [SLOW TEST:33.565 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:46:35.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-5ca5bd15-8b91-4f1d-939a-2d210d2a6606
STEP: Creating a pod to test consume configMaps
Apr 27 14:46:35.767: INFO: Waiting up to 5m0s for pod "pod-configmaps-4cbb0252-b755-4c25-87d2-5360ccffbb24" in namespace "configmap-2333" to be "success or failure"
Apr 27 14:46:35.788: INFO: Pod "pod-configmaps-4cbb0252-b755-4c25-87d2-5360ccffbb24": Phase="Pending", Reason="", readiness=false. Elapsed: 20.202772ms
Apr 27 14:46:37.790: INFO: Pod "pod-configmaps-4cbb0252-b755-4c25-87d2-5360ccffbb24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02271539s
Apr 27 14:46:39.794: INFO: Pod "pod-configmaps-4cbb0252-b755-4c25-87d2-5360ccffbb24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026516993s
STEP: Saw pod success
Apr 27 14:46:39.794: INFO: Pod "pod-configmaps-4cbb0252-b755-4c25-87d2-5360ccffbb24" satisfied condition "success or failure"
Apr 27 14:46:39.797: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-4cbb0252-b755-4c25-87d2-5360ccffbb24 container configmap-volume-test:
STEP: delete the pod
Apr 27 14:46:39.821: INFO: Waiting for pod pod-configmaps-4cbb0252-b755-4c25-87d2-5360ccffbb24 to disappear
Apr 27 14:46:39.826: INFO: Pod pod-configmaps-4cbb0252-b755-4c25-87d2-5360ccffbb24 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:46:39.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2333" for this suite.
Apr 27 14:46:45.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:46:45.945: INFO: namespace configmap-2333 deletion completed in 6.115001171s
• [SLOW TEST:10.281 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:46:45.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-6951
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 27 14:46:46.001: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 27 14:47:06.162: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.152:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6951 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 27 14:47:06.162: INFO: >>> kubeConfig: /root/.kube/config
I0427 14:47:06.193878 6 log.go:172] (0xc00245b600) (0xc0014b4960) Create stream
I0427 14:47:06.193916 6 log.go:172] (0xc00245b600) (0xc0014b4960) Stream added, broadcasting: 1
I0427 14:47:06.196661 6 log.go:172] (0xc00245b600) Reply frame received for 1
I0427 14:47:06.196711 6 log.go:172] (0xc00245b600) (0xc0014b4aa0) Create stream
I0427 14:47:06.196728 6 log.go:172] (0xc00245b600) (0xc0014b4aa0) Stream added, broadcasting: 3
I0427 14:47:06.198112 6 log.go:172] (0xc00245b600) Reply frame received for 3
I0427 14:47:06.198165 6 log.go:172] (0xc00245b600) (0xc0014b4b40) Create stream
I0427 14:47:06.198185 6 log.go:172] (0xc00245b600) (0xc0014b4b40) Stream added, broadcasting: 5
I0427 14:47:06.199250 6 log.go:172] (0xc00245b600) Reply frame received for 5
I0427 14:47:06.290973 6 log.go:172] (0xc00245b600) Data frame received for 5
I0427 14:47:06.291036 6 log.go:172] (0xc00245b600) Data frame received for 3
I0427 14:47:06.291093 6 log.go:172] (0xc0014b4aa0) (3) Data frame handling
I0427 14:47:06.291108 6 log.go:172] (0xc0014b4aa0) (3) Data frame sent
I0427 14:47:06.291115 6 log.go:172] (0xc00245b600) Data frame received for 3
I0427 14:47:06.291128 6 log.go:172] (0xc0014b4aa0) (3) Data frame handling
I0427 14:47:06.291138 6 log.go:172] (0xc0014b4b40) (5) Data frame handling
I0427 14:47:06.292933 6 log.go:172] (0xc00245b600) Data frame received for 1
I0427 14:47:06.292949 6 log.go:172] (0xc0014b4960) (1) Data frame handling
I0427 14:47:06.292959 6 log.go:172] (0xc0014b4960) (1) Data frame sent
I0427 14:47:06.292971 6 log.go:172] (0xc00245b600) (0xc0014b4960) Stream removed, broadcasting: 1
I0427 14:47:06.292994 6 log.go:172] (0xc00245b600) Go away received
I0427 14:47:06.293257 6 log.go:172] (0xc00245b600) (0xc0014b4960) Stream removed, broadcasting: 1
I0427 14:47:06.293286 6 log.go:172] (0xc00245b600) (0xc0014b4aa0) Stream removed, broadcasting: 3
I0427 14:47:06.293301 6 log.go:172] (0xc00245b600) (0xc0014b4b40) Stream removed, broadcasting: 5
Apr 27 14:47:06.293: INFO: Found all expected endpoints: [netserver-0]
Apr 27 14:47:06.296: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.192:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6951 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 27 14:47:06.296: INFO: >>> kubeConfig: /root/.kube/config
I0427 14:47:06.326308 6 log.go:172] (0xc00087cc60) (0xc000313680) Create stream
I0427 14:47:06.326348 6 log.go:172] (0xc00087cc60) (0xc000313680) Stream added, broadcasting: 1
I0427 14:47:06.328486 6 log.go:172] (0xc00087cc60) Reply frame received for 1
I0427 14:47:06.328535 6 log.go:172] (0xc00087cc60) (0xc000313720) Create stream
I0427 14:47:06.328545 6 log.go:172] (0xc00087cc60) (0xc000313720) Stream added, broadcasting: 3
I0427 14:47:06.329806 6 log.go:172] (0xc00087cc60) Reply frame received for 3
I0427 14:47:06.329863 6 log.go:172] (0xc00087cc60) (0xc001c16000) Create stream
I0427 14:47:06.329881 6 log.go:172] (0xc00087cc60) (0xc001c16000) Stream added, broadcasting: 5
I0427 14:47:06.330819 6 log.go:172] (0xc00087cc60) Reply frame received for 5
I0427 14:47:06.392365 6 log.go:172] (0xc00087cc60) Data frame received for 5
I0427 14:47:06.392405 6 log.go:172] (0xc001c16000) (5) Data frame handling
I0427 14:47:06.392432 6 log.go:172] (0xc00087cc60) Data frame received for 3
I0427 14:47:06.392446 6 log.go:172] (0xc000313720) (3) Data frame handling
I0427 14:47:06.392464 6 log.go:172] (0xc000313720) (3) Data frame sent
I0427 14:47:06.392474 6 log.go:172] (0xc00087cc60) Data frame received for 3
I0427 14:47:06.392484 6 log.go:172] (0xc000313720) (3) Data frame handling
I0427 14:47:06.394061 6 log.go:172] (0xc00087cc60) Data frame received for 1
I0427 14:47:06.394112 6 log.go:172] (0xc000313680) (1) Data frame handling
I0427 14:47:06.394143 6 log.go:172] (0xc000313680) (1) Data frame sent
I0427 14:47:06.394169 6 log.go:172] (0xc00087cc60) (0xc000313680) Stream removed, broadcasting: 1
I0427 14:47:06.394187 6 log.go:172] (0xc00087cc60) Go away received
I0427 14:47:06.394343 6 log.go:172] (0xc00087cc60) (0xc000313680) Stream removed, broadcasting: 1
I0427 14:47:06.394365 6 log.go:172] (0xc00087cc60) (0xc000313720) Stream removed, broadcasting: 3
I0427 14:47:06.394373 6 log.go:172] (0xc00087cc60) (0xc001c16000) Stream removed, broadcasting: 5
Apr 27 14:47:06.394: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:47:06.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6951" for this suite.
Apr 27 14:47:30.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:47:30.538: INFO: namespace pod-network-test-6951 deletion completed in 24.140546213s
• [SLOW TEST:44.593 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:47:30.538: INFO: >>>
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Apr 27 14:47:34.685: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 27 14:47:44.782: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:47:44.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9428" for this suite. 
Apr 27 14:47:50.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:47:50.900: INFO: namespace pods-9428 deletion completed in 6.10987773s • [SLOW TEST:20.362 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:47:50.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 27 14:47:50.940: INFO: Waiting up to 5m0s for pod "pod-173b1151-3f3d-4418-8222-bf5c194438a6" in namespace "emptydir-3954" to be "success or failure" Apr 27 14:47:50.962: INFO: Pod "pod-173b1151-3f3d-4418-8222-bf5c194438a6": Phase="Pending", Reason="", readiness=false. Elapsed: 22.14096ms Apr 27 14:47:52.967: INFO: Pod "pod-173b1151-3f3d-4418-8222-bf5c194438a6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.026653543s Apr 27 14:47:54.971: INFO: Pod "pod-173b1151-3f3d-4418-8222-bf5c194438a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030733551s STEP: Saw pod success Apr 27 14:47:54.971: INFO: Pod "pod-173b1151-3f3d-4418-8222-bf5c194438a6" satisfied condition "success or failure" Apr 27 14:47:54.974: INFO: Trying to get logs from node iruya-worker2 pod pod-173b1151-3f3d-4418-8222-bf5c194438a6 container test-container: STEP: delete the pod Apr 27 14:47:55.012: INFO: Waiting for pod pod-173b1151-3f3d-4418-8222-bf5c194438a6 to disappear Apr 27 14:47:55.029: INFO: Pod pod-173b1151-3f3d-4418-8222-bf5c194438a6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:47:55.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3954" for this suite. Apr 27 14:48:01.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:48:01.149: INFO: namespace emptydir-3954 deletion completed in 6.116914548s • [SLOW TEST:10.249 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Apr 27 14:48:01.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-29e409ce-529c-41d8-b3db-40495b66d8ef STEP: Creating a pod to test consume configMaps Apr 27 14:48:01.222: INFO: Waiting up to 5m0s for pod "pod-configmaps-6423ff69-23bd-4a73-9baa-d8276846e683" in namespace "configmap-184" to be "success or failure" Apr 27 14:48:01.239: INFO: Pod "pod-configmaps-6423ff69-23bd-4a73-9baa-d8276846e683": Phase="Pending", Reason="", readiness=false. Elapsed: 16.800048ms Apr 27 14:48:03.243: INFO: Pod "pod-configmaps-6423ff69-23bd-4a73-9baa-d8276846e683": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020667642s Apr 27 14:48:05.248: INFO: Pod "pod-configmaps-6423ff69-23bd-4a73-9baa-d8276846e683": Phase="Running", Reason="", readiness=true. Elapsed: 4.025648762s Apr 27 14:48:07.252: INFO: Pod "pod-configmaps-6423ff69-23bd-4a73-9baa-d8276846e683": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.029293613s STEP: Saw pod success Apr 27 14:48:07.252: INFO: Pod "pod-configmaps-6423ff69-23bd-4a73-9baa-d8276846e683" satisfied condition "success or failure" Apr 27 14:48:07.255: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-6423ff69-23bd-4a73-9baa-d8276846e683 container configmap-volume-test: STEP: delete the pod Apr 27 14:48:07.362: INFO: Waiting for pod pod-configmaps-6423ff69-23bd-4a73-9baa-d8276846e683 to disappear Apr 27 14:48:07.373: INFO: Pod pod-configmaps-6423ff69-23bd-4a73-9baa-d8276846e683 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:48:07.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-184" for this suite. Apr 27 14:48:13.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:48:13.476: INFO: namespace configmap-184 deletion completed in 6.100457597s • [SLOW TEST:12.325 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:48:13.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service 
account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-e245385c-120e-42a3-aa41-172ac1108fbf STEP: Creating a pod to test consume configMaps Apr 27 14:48:13.579: INFO: Waiting up to 5m0s for pod "pod-configmaps-b0e541c4-e7c4-4780-a57a-76199bf13bca" in namespace "configmap-1652" to be "success or failure" Apr 27 14:48:13.596: INFO: Pod "pod-configmaps-b0e541c4-e7c4-4780-a57a-76199bf13bca": Phase="Pending", Reason="", readiness=false. Elapsed: 16.867055ms Apr 27 14:48:15.600: INFO: Pod "pod-configmaps-b0e541c4-e7c4-4780-a57a-76199bf13bca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02051674s Apr 27 14:48:17.604: INFO: Pod "pod-configmaps-b0e541c4-e7c4-4780-a57a-76199bf13bca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024755911s STEP: Saw pod success Apr 27 14:48:17.604: INFO: Pod "pod-configmaps-b0e541c4-e7c4-4780-a57a-76199bf13bca" satisfied condition "success or failure" Apr 27 14:48:17.607: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-b0e541c4-e7c4-4780-a57a-76199bf13bca container configmap-volume-test: STEP: delete the pod Apr 27 14:48:17.649: INFO: Waiting for pod pod-configmaps-b0e541c4-e7c4-4780-a57a-76199bf13bca to disappear Apr 27 14:48:17.660: INFO: Pod pod-configmaps-b0e541c4-e7c4-4780-a57a-76199bf13bca no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:48:17.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1652" for this suite. 
Apr 27 14:48:23.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:48:23.754: INFO: namespace configmap-1652 deletion completed in 6.090696434s • [SLOW TEST:10.278 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:48:23.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0427 14:49:03.924667 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 27 14:49:03.924: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:49:03.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1961" for this suite. 
Apr 27 14:49:13.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:49:14.032: INFO: namespace gc-1961 deletion completed in 10.10470722s • [SLOW TEST:50.278 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:49:14.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 27 14:49:14.119: INFO: Waiting up to 5m0s for pod "pod-a3a0cb4d-a291-429c-bd68-2058e26c2bd5" in namespace "emptydir-2873" to be "success or failure" Apr 27 14:49:14.182: INFO: Pod "pod-a3a0cb4d-a291-429c-bd68-2058e26c2bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 62.975ms Apr 27 14:49:16.187: INFO: Pod "pod-a3a0cb4d-a291-429c-bd68-2058e26c2bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067577385s Apr 27 14:49:18.192: INFO: Pod "pod-a3a0cb4d-a291-429c-bd68-2058e26c2bd5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.072945128s STEP: Saw pod success Apr 27 14:49:18.192: INFO: Pod "pod-a3a0cb4d-a291-429c-bd68-2058e26c2bd5" satisfied condition "success or failure" Apr 27 14:49:18.195: INFO: Trying to get logs from node iruya-worker2 pod pod-a3a0cb4d-a291-429c-bd68-2058e26c2bd5 container test-container: STEP: delete the pod Apr 27 14:49:18.213: INFO: Waiting for pod pod-a3a0cb4d-a291-429c-bd68-2058e26c2bd5 to disappear Apr 27 14:49:18.218: INFO: Pod pod-a3a0cb4d-a291-429c-bd68-2058e26c2bd5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:49:18.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2873" for this suite. Apr 27 14:49:24.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:49:24.327: INFO: namespace emptydir-2873 deletion completed in 6.107242043s • [SLOW TEST:10.294 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:49:24.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Apr 27 14:49:24.378: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:49:24.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7445" for this suite. Apr 27 14:49:30.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:49:30.571: INFO: namespace kubectl-7445 deletion completed in 6.101832433s • [SLOW TEST:6.243 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:49:30.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 27 14:49:30.652: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5eb9aa63-6d39-4bc9-98e8-faea60de47c1" in namespace "downward-api-970" to be "success or failure" Apr 27 14:49:30.677: INFO: Pod "downwardapi-volume-5eb9aa63-6d39-4bc9-98e8-faea60de47c1": Phase="Pending", Reason="", readiness=false. Elapsed: 24.918724ms Apr 27 14:49:32.682: INFO: Pod "downwardapi-volume-5eb9aa63-6d39-4bc9-98e8-faea60de47c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029409266s Apr 27 14:49:34.686: INFO: Pod "downwardapi-volume-5eb9aa63-6d39-4bc9-98e8-faea60de47c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033990659s STEP: Saw pod success Apr 27 14:49:34.686: INFO: Pod "downwardapi-volume-5eb9aa63-6d39-4bc9-98e8-faea60de47c1" satisfied condition "success or failure" Apr 27 14:49:34.690: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-5eb9aa63-6d39-4bc9-98e8-faea60de47c1 container client-container: STEP: delete the pod Apr 27 14:49:34.752: INFO: Waiting for pod downwardapi-volume-5eb9aa63-6d39-4bc9-98e8-faea60de47c1 to disappear Apr 27 14:49:34.762: INFO: Pod downwardapi-volume-5eb9aa63-6d39-4bc9-98e8-faea60de47c1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:49:34.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-970" for this suite. 
Apr 27 14:49:40.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:49:40.857: INFO: namespace downward-api-970 deletion completed in 6.092954174s • [SLOW TEST:10.286 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:49:40.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 27 14:49:40.950: INFO: Waiting up to 5m0s for pod "pod-ab33a14f-880f-4509-b047-d33f0afcb24f" in namespace "emptydir-7328" to be "success or failure" Apr 27 14:49:40.974: INFO: Pod "pod-ab33a14f-880f-4509-b047-d33f0afcb24f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.840581ms Apr 27 14:49:42.978: INFO: Pod "pod-ab33a14f-880f-4509-b047-d33f0afcb24f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027889064s Apr 27 14:49:44.990: INFO: Pod "pod-ab33a14f-880f-4509-b047-d33f0afcb24f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.040446687s STEP: Saw pod success Apr 27 14:49:44.990: INFO: Pod "pod-ab33a14f-880f-4509-b047-d33f0afcb24f" satisfied condition "success or failure" Apr 27 14:49:44.992: INFO: Trying to get logs from node iruya-worker2 pod pod-ab33a14f-880f-4509-b047-d33f0afcb24f container test-container: STEP: delete the pod Apr 27 14:49:45.037: INFO: Waiting for pod pod-ab33a14f-880f-4509-b047-d33f0afcb24f to disappear Apr 27 14:49:45.050: INFO: Pod pod-ab33a14f-880f-4509-b047-d33f0afcb24f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 27 14:49:45.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7328" for this suite. Apr 27 14:49:51.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 27 14:49:51.148: INFO: namespace emptydir-7328 deletion completed in 6.094989387s • [SLOW TEST:10.290 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 27 14:49:51.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 27 14:49:51.189: INFO: Creating deployment "nginx-deployment" Apr 27 14:49:51.194: INFO: Waiting for observed generation 1 Apr 27 14:49:53.251: INFO: Waiting for all required pods to come up Apr 27 14:49:53.255: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 27 14:50:01.266: INFO: Waiting for deployment "nginx-deployment" to complete Apr 27 14:50:01.271: INFO: Updating deployment "nginx-deployment" with a non-existent image Apr 27 14:50:01.276: INFO: Updating deployment nginx-deployment Apr 27 14:50:01.276: INFO: Waiting for observed generation 2 Apr 27 14:50:03.334: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 27 14:50:03.337: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 27 14:50:03.339: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Apr 27 14:50:03.346: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 27 14:50:03.346: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 27 14:50:03.348: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Apr 27 14:50:03.409: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Apr 27 14:50:03.409: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Apr 27 14:50:03.415: INFO: Updating deployment nginx-deployment Apr 27 14:50:03.415: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Apr 27 14:50:03.594: INFO: Verifying that first rollout's 
replicaset has .spec.replicas = 20 Apr 27 14:50:03.616: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 27 14:50:03.896: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-6509,SelfLink:/apis/apps/v1/namespaces/deployment-6509/deployments/nginx-deployment,UID:9e5b612d-9b0d-4fbd-aa21-bcbcb0e6e8d0,ResourceVersion:7736872,Generation:3,CreationTimestamp:2020-04-27 14:49:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-04-27 14:50:01 +0000 UTC 2020-04-27 14:49:51 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-04-27 14:50:03 +0000 UTC 2020-04-27 14:50:03 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Apr 27 14:50:03.945: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-6509,SelfLink:/apis/apps/v1/namespaces/deployment-6509/replicasets/nginx-deployment-55fb7cb77f,UID:d04c7307-9ca5-479c-8c15-de070f64e1e6,ResourceVersion:7736917,Generation:3,CreationTimestamp:2020-04-27 14:50:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 9e5b612d-9b0d-4fbd-aa21-bcbcb0e6e8d0 0xc003279837 0xc003279838}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 27 14:50:03.945: INFO: All old ReplicaSets of Deployment "nginx-deployment": Apr 27 14:50:03.945: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-6509,SelfLink:/apis/apps/v1/namespaces/deployment-6509/replicasets/nginx-deployment-7b8c6f4498,UID:78aca99d-62d9-4e58-b47c-7179befe8074,ResourceVersion:7736919,Generation:3,CreationTimestamp:2020-04-27 14:49:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 9e5b612d-9b0d-4fbd-aa21-bcbcb0e6e8d0 0xc003279907 0xc003279908}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Apr 27 14:50:04.030: INFO: Pod "nginx-deployment-55fb7cb77f-2rknr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2rknr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-55fb7cb77f-2rknr,UID:b9903d21-8ee4-4bca-bcaa-46e241eb7eb4,ResourceVersion:7736851,Generation:0,CreationTimestamp:2020-04-27 14:50:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d04c7307-9ca5-479c-8c15-de070f64e1e6 0xc002c24647 0xc002c24648}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002c24720} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c24740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:01 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-27 14:50:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.030: INFO: Pod "nginx-deployment-55fb7cb77f-44pzl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-44pzl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-55fb7cb77f-44pzl,UID:bf481b44-2e8a-4230-906c-f8cdd7ba0702,ResourceVersion:7736925,Generation:0,CreationTimestamp:2020-04-27 14:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d04c7307-9ca5-479c-8c15-de070f64e1e6 0xc002c24880 0xc002c24881}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c24900} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c24920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-27 14:50:03 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.030: INFO: Pod "nginx-deployment-55fb7cb77f-8kdln" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8kdln,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-55fb7cb77f-8kdln,UID:08283ab2-254f-4779-b8fa-ae18585b7dba,ResourceVersion:7736904,Generation:0,CreationTimestamp:2020-04-27 14:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d04c7307-9ca5-479c-8c15-de070f64e1e6 0xc002c24c90 0xc002c24c91}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c24e10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c24e60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.030: INFO: Pod "nginx-deployment-55fb7cb77f-9h87n" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9h87n,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-55fb7cb77f-9h87n,UID:5d82daea-cff5-4d59-983a-93764692f9f6,ResourceVersion:7736827,Generation:0,CreationTimestamp:2020-04-27 14:50:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d04c7307-9ca5-479c-8c15-de070f64e1e6 0xc002c25057 0xc002c25058}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c250d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c250f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:01 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-27 14:50:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.030: INFO: Pod "nginx-deployment-55fb7cb77f-hcfdg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hcfdg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-55fb7cb77f-hcfdg,UID:922f62b7-877e-4914-a3a1-6d2457d4d6f1,ResourceVersion:7736881,Generation:0,CreationTimestamp:2020-04-27 14:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d04c7307-9ca5-479c-8c15-de070f64e1e6 0xc002c252f0 0xc002c252f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c25440} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c25460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.030: INFO: Pod "nginx-deployment-55fb7cb77f-hgkfc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hgkfc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-55fb7cb77f-hgkfc,UID:0f458994-1b2a-4bc3-ba23-5af385e30bdc,ResourceVersion:7736849,Generation:0,CreationTimestamp:2020-04-27 14:50:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d04c7307-9ca5-479c-8c15-de070f64e1e6 0xc002c25567 0xc002c25568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c25710} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c25740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:01 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-27 14:50:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.031: INFO: Pod "nginx-deployment-55fb7cb77f-k9d42" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-k9d42,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-55fb7cb77f-k9d42,UID:deadceff-9c55-4cce-b222-fbaa7c1b7724,ResourceVersion:7736840,Generation:0,CreationTimestamp:2020-04-27 14:50:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d04c7307-9ca5-479c-8c15-de070f64e1e6 0xc002c258e0 0xc002c258e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c259c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c25a40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:01 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-27 14:50:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.031: INFO: Pod "nginx-deployment-55fb7cb77f-kqdr2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kqdr2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-55fb7cb77f-kqdr2,UID:0d669799-3c7d-436f-9c47-57aef61a832e,ResourceVersion:7736902,Generation:0,CreationTimestamp:2020-04-27 14:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d04c7307-9ca5-479c-8c15-de070f64e1e6 0xc002c25c40 0xc002c25c41}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002c25d80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c25dc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.031: INFO: Pod "nginx-deployment-55fb7cb77f-nmbrn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nmbrn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-55fb7cb77f-nmbrn,UID:68f1e7b8-b854-4053-b64c-226b94a997ed,ResourceVersion:7736918,Generation:0,CreationTimestamp:2020-04-27 14:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d04c7307-9ca5-479c-8c15-de070f64e1e6 0xc002c25ee7 0xc002c25ee8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c25fb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c25fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.031: INFO: Pod "nginx-deployment-55fb7cb77f-nz9zn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nz9zn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-55fb7cb77f-nz9zn,UID:718312ae-a76e-4bc7-9219-71a27f364103,ResourceVersion:7736890,Generation:0,CreationTimestamp:2020-04-27 14:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d04c7307-9ca5-479c-8c15-de070f64e1e6 0xc00329c067 0xc00329c068}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00329c0e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00329c100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.031: INFO: Pod "nginx-deployment-55fb7cb77f-p7nwm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-p7nwm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-55fb7cb77f-p7nwm,UID:ffb97478-4471-4140-8598-a74aaec8b4a7,ResourceVersion:7736845,Generation:0,CreationTimestamp:2020-04-27 14:50:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d04c7307-9ca5-479c-8c15-de070f64e1e6 0xc00329c187 0xc00329c188}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc00329c200} {node.kubernetes.io/unreachable Exists NoExecute 0xc00329c220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:01 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-27 14:50:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.031: INFO: Pod "nginx-deployment-55fb7cb77f-xkwd7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xkwd7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-55fb7cb77f-xkwd7,UID:2020f0f0-31ef-4db4-b44a-ed902a1e0aae,ResourceVersion:7736906,Generation:0,CreationTimestamp:2020-04-27 14:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d04c7307-9ca5-479c-8c15-de070f64e1e6 0xc00329c2f0 0xc00329c2f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00329c370} {node.kubernetes.io/unreachable Exists NoExecute 0xc00329c390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.031: INFO: Pod "nginx-deployment-55fb7cb77f-xr4b6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xr4b6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-55fb7cb77f-xr4b6,UID:cc398b99-5c64-4930-b4b1-7c1495cd4200,ResourceVersion:7736905,Generation:0,CreationTimestamp:2020-04-27 14:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d04c7307-9ca5-479c-8c15-de070f64e1e6 0xc00329c417 0xc00329c418}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc00329c490} {node.kubernetes.io/unreachable Exists NoExecute 0xc00329c4c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.031: INFO: Pod "nginx-deployment-7b8c6f4498-2l9tm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2l9tm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-7b8c6f4498-2l9tm,UID:1a669ef9-6e6a-450d-9c37-03b673b7ac58,ResourceVersion:7736770,Generation:0,CreationTimestamp:2020-04-27 14:49:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78aca99d-62d9-4e58-b47c-7179befe8074 0xc00329c547 0xc00329c548}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00329c5c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00329c5e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.203,StartTime:2020-04-27 14:49:51 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-27 14:49:58 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://47771d42d3b0a55d3d36de175e26cbe136388e8ade8038229bc5f9bdaa13605e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.032: INFO: Pod "nginx-deployment-7b8c6f4498-45lhw" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-45lhw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-7b8c6f4498-45lhw,UID:9d74a457-e61b-48a2-a334-830a9d5a973f,ResourceVersion:7736789,Generation:0,CreationTimestamp:2020-04-27 14:49:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78aca99d-62d9-4e58-b47c-7179befe8074 0xc00329c6b7 0xc00329c6b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00329c730} {node.kubernetes.io/unreachable Exists NoExecute 0xc00329c750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:00 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.204,StartTime:2020-04-27 14:49:51 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-27 14:49:59 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5c09f8c2720012c7da2d58a6e194351a4d2abe1d5e28619d795b85f83e49e56f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.032: INFO: Pod "nginx-deployment-7b8c6f4498-4kxzm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4kxzm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-7b8c6f4498-4kxzm,UID:2eaf7396-2750-442a-be8b-6ceb6c33a127,ResourceVersion:7736910,Generation:0,CreationTimestamp:2020-04-27 14:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78aca99d-62d9-4e58-b47c-7179befe8074 0xc00329c827 0xc00329c828}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00329c8a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00329c8c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.032: INFO: Pod "nginx-deployment-7b8c6f4498-5sdxf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5sdxf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-7b8c6f4498-5sdxf,UID:b8965177-1ce1-4994-94c6-82fa09c6ede0,ResourceVersion:7736927,Generation:0,CreationTimestamp:2020-04-27 14:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78aca99d-62d9-4e58-b47c-7179befe8074 0xc00329c947 0xc00329c948}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00329c9c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00329c9e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-27 14:50:03 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.032: INFO: Pod "nginx-deployment-7b8c6f4498-772kv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-772kv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-7b8c6f4498-772kv,UID:21221d89-8eac-42df-b832-6ca5b6d9e177,ResourceVersion:7736912,Generation:0,CreationTimestamp:2020-04-27 14:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78aca99d-62d9-4e58-b47c-7179befe8074 0xc00329caa7 0xc00329caa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00329cb20} {node.kubernetes.io/unreachable Exists NoExecute 0xc00329cb40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.032: INFO: Pod "nginx-deployment-7b8c6f4498-8ld25" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8ld25,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-7b8c6f4498-8ld25,UID:302d3f01-ffe3-4768-b0e2-930c1d9ab484,ResourceVersion:7736900,Generation:0,CreationTimestamp:2020-04-27 14:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78aca99d-62d9-4e58-b47c-7179befe8074 0xc00329cbd7 0xc00329cbd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00329cc50} {node.kubernetes.io/unreachable Exists NoExecute 0xc00329cc70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.032: INFO: Pod "nginx-deployment-7b8c6f4498-8qfqf" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8qfqf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-7b8c6f4498-8qfqf,UID:94e6f528-ea19-4e76-9d9d-a83459fea09b,ResourceVersion:7736743,Generation:0,CreationTimestamp:2020-04-27 14:49:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78aca99d-62d9-4e58-b47c-7179befe8074 0xc00329ccf7 0xc00329ccf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00329cd70} {node.kubernetes.io/unreachable Exists NoExecute 0xc00329cd90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.201,StartTime:2020-04-27 14:49:51 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-27 14:49:54 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3441c9fd31c80a6705ddea3a7209d87c4d06df21b2c6af74067017cd4f69351a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.032: INFO: Pod "nginx-deployment-7b8c6f4498-b6zkg" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-b6zkg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-7b8c6f4498-b6zkg,UID:d586c268-a057-400e-8228-363f40485d91,ResourceVersion:7736916,Generation:0,CreationTimestamp:2020-04-27 14:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78aca99d-62d9-4e58-b47c-7179befe8074 0xc00329ce87 0xc00329ce88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00329cf00} {node.kubernetes.io/unreachable Exists NoExecute 0xc00329cf20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-27 14:50:03 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.032: INFO: Pod "nginx-deployment-7b8c6f4498-b8q2x" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-b8q2x,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-7b8c6f4498-b8q2x,UID:f9393f3d-2f0e-4dfa-8d4a-662c86242e0f,ResourceVersion:7736891,Generation:0,CreationTimestamp:2020-04-27 14:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78aca99d-62d9-4e58-b47c-7179befe8074 0xc00329cfe7 0xc00329cfe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00329d060} {node.kubernetes.io/unreachable Exists NoExecute 0xc00329d080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.033: INFO: Pod "nginx-deployment-7b8c6f4498-gvcwk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gvcwk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-7b8c6f4498-gvcwk,UID:aa3ef122-fba8-4394-88b1-30b1fc6b2667,ResourceVersion:7736907,Generation:0,CreationTimestamp:2020-04-27 14:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78aca99d-62d9-4e58-b47c-7179befe8074 0xc00329d107 0xc00329d108}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00329d180} {node.kubernetes.io/unreachable Exists NoExecute 0xc00329d1a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.033: INFO: Pod "nginx-deployment-7b8c6f4498-hrclt" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hrclt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-7b8c6f4498-hrclt,UID:9301864e-14ba-491f-b359-b34f2ba810cb,ResourceVersion:7736903,Generation:0,CreationTimestamp:2020-04-27 14:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78aca99d-62d9-4e58-b47c-7179befe8074 0xc00329d227 0xc00329d228}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00329d2a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00329d2c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.033: INFO: Pod "nginx-deployment-7b8c6f4498-jxz95" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jxz95,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-7b8c6f4498-jxz95,UID:17520470-8a54-4f54-8b9a-ab09477d0d2b,ResourceVersion:7736759,Generation:0,CreationTimestamp:2020-04-27 14:49:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78aca99d-62d9-4e58-b47c-7179befe8074 0xc00329d347 0xc00329d348}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00329d3c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00329d3e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.202,StartTime:2020-04-27 14:49:51 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-04-27 14:49:57 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a15d33b48c1219a709af938e3b6c260f856360da8ca5476b754acf21662c0b20}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.033: INFO: Pod "nginx-deployment-7b8c6f4498-ls8lt" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ls8lt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-7b8c6f4498-ls8lt,UID:73e4027c-e13d-495e-868d-e3d64573193a,ResourceVersion:7736760,Generation:0,CreationTimestamp:2020-04-27 14:49:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78aca99d-62d9-4e58-b47c-7179befe8074 0xc00329d4b7 0xc00329d4b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00329d530} {node.kubernetes.io/unreachable Exists NoExecute 0xc00329d550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.163,StartTime:2020-04-27 14:49:51 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-27 14:49:57 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://30aa41ebdfc3c8f748cfec8fc9698b2de3801b1ca8f310cb4fcd79702e495576}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.033: INFO: Pod "nginx-deployment-7b8c6f4498-q5qvs" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q5qvs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-7b8c6f4498-q5qvs,UID:6d0dbd59-af0c-4048-90e0-bf5eefa943ed,ResourceVersion:7736772,Generation:0,CreationTimestamp:2020-04-27 14:49:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78aca99d-62d9-4e58-b47c-7179befe8074 0xc00329d627 0xc00329d628}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00329d6a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00329d6c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.164,StartTime:2020-04-27 14:49:51 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-27 14:49:58 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8ed84e19f565ef4ea27a9ddd2f4806fad9bb6d1dfbeab88e5d4094f4998d20d1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.033: INFO: Pod "nginx-deployment-7b8c6f4498-rswb5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rswb5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-7b8c6f4498-rswb5,UID:bedfb0dd-988c-4d3b-a9de-ae7e20abcf25,ResourceVersion:7736894,Generation:0,CreationTimestamp:2020-04-27 14:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78aca99d-62d9-4e58-b47c-7179befe8074 0xc00329d797 0xc00329d798}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00329d810} {node.kubernetes.io/unreachable Exists NoExecute 0xc00329d830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.033: INFO: Pod "nginx-deployment-7b8c6f4498-t2xv7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-t2xv7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-7b8c6f4498-t2xv7,UID:f8d9ba90-d429-4c03-9548-7f7d3249d94f,ResourceVersion:7736798,Generation:0,CreationTimestamp:2020-04-27 14:49:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78aca99d-62d9-4e58-b47c-7179befe8074 0xc00329d8b7 0xc00329d8b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00329d930} {node.kubernetes.io/unreachable Exists NoExecute 0xc00329d950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.167,StartTime:2020-04-27 14:49:51 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-04-27 14:50:00 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5eb72b3ab970cc6a1fe346fb099de606b5432e473b9f2fbf936abe86663956cf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.033: INFO: Pod "nginx-deployment-7b8c6f4498-vltjb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vltjb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-7b8c6f4498-vltjb,UID:225ae97e-74df-41c8-9f89-0fd6d31552ef,ResourceVersion:7736877,Generation:0,CreationTimestamp:2020-04-27 14:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78aca99d-62d9-4e58-b47c-7179befe8074 0xc00329da27 0xc00329da28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00329dac0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00329dae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 27 14:50:04.034: INFO: Pod "nginx-deployment-7b8c6f4498-vsbd2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vsbd2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-7b8c6f4498-vsbd2,UID:e6c2503a-72f4-47b0-9714-689795b9dd8d,ResourceVersion:7736792,Generation:0,CreationTimestamp:2020-04-27 14:49:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78aca99d-62d9-4e58-b47c-7179befe8074 0xc00329db77 0xc00329db78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00329dbf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00329dc10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:00 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:49:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.205,StartTime:2020-04-27 14:49:51 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-04-27 14:50:00 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0f8ae43cfb1b644d44021bce201b098ea7bbfe6729841a882b2d8d4c935b7c60}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 27 14:50:04.034: INFO: Pod "nginx-deployment-7b8c6f4498-zmk5b" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zmk5b,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-7b8c6f4498-zmk5b,UID:7779dc64-daf7-445c-83a0-2c8049f6a7dc,ResourceVersion:7736895,Generation:0,CreationTimestamp:2020-04-27 14:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78aca99d-62d9-4e58-b47c-7179befe8074 0xc00329dce7 0xc00329dce8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00329dd60} {node.kubernetes.io/unreachable Exists NoExecute 0xc00329dd80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 27 14:50:04.034: INFO: Pod "nginx-deployment-7b8c6f4498-zwq69" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zwq69,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6509,SelfLink:/api/v1/namespaces/deployment-6509/pods/nginx-deployment-7b8c6f4498-zwq69,UID:1f0b91c5-1b3b-44eb-a41f-b76b75fa3e6d,ResourceVersion:7736908,Generation:0,CreationTimestamp:2020-04-27 14:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78aca99d-62d9-4e58-b47c-7179befe8074 0xc00329de07 0xc00329de08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rw8c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rw8c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rw8c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00329de80} {node.kubernetes.io/unreachable Exists NoExecute 0xc00329dea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-27 14:50:03 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:50:04.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6509" for this suite.
Apr 27 14:50:21.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:50:21.863: INFO: namespace deployment-6509 deletion completed in 17.704576625s

• [SLOW TEST:30.714 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:50:21.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 27 14:50:26.310: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:50:26.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9514" for this suite.
Apr 27 14:50:32.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:50:32.435: INFO: namespace container-runtime-9514 deletion completed in 6.088142434s

• [SLOW TEST:10.572 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 27 14:50:32.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 27 14:50:32.497: INFO: Waiting up to 5m0s for pod "downwardapi-volume-48a9e50f-4aad-492a-a207-06caaebbf6a5" in namespace "downward-api-7813" to be "success or failure"
Apr 27 14:50:32.501: INFO: Pod "downwardapi-volume-48a9e50f-4aad-492a-a207-06caaebbf6a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046407ms
Apr 27 14:50:34.505: INFO: Pod "downwardapi-volume-48a9e50f-4aad-492a-a207-06caaebbf6a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008045994s
Apr 27 14:50:36.510: INFO: Pod "downwardapi-volume-48a9e50f-4aad-492a-a207-06caaebbf6a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012601773s
STEP: Saw pod success
Apr 27 14:50:36.510: INFO: Pod "downwardapi-volume-48a9e50f-4aad-492a-a207-06caaebbf6a5" satisfied condition "success or failure"
Apr 27 14:50:36.513: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-48a9e50f-4aad-492a-a207-06caaebbf6a5 container client-container: 
STEP: delete the pod
Apr 27 14:50:36.539: INFO: Waiting for pod downwardapi-volume-48a9e50f-4aad-492a-a207-06caaebbf6a5 to disappear
Apr 27 14:50:36.543: INFO: Pod downwardapi-volume-48a9e50f-4aad-492a-a207-06caaebbf6a5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 27 14:50:36.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7813" for this suite.
Apr 27 14:50:42.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 27 14:50:42.645: INFO: namespace downward-api-7813 deletion completed in 6.099427664s

• [SLOW TEST:10.210 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
Apr 27 14:50:42.646: INFO: Running AfterSuite actions on all nodes
Apr 27 14:50:42.646: INFO: Running AfterSuite actions on node 1
Apr 27 14:50:42.646: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 6898.379 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS
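When auditing a run like this one, it can be useful to cross-check the final Ginkgo summary: the spec counts should be internally consistent (Passed + Failed + Pending + Skipped equals the total, and the "Ran" count equals Passed + Failed; here 215 + 0 + 0 + 4197 = 4412). A minimal sketch of such a check, assuming only the two summary-line formats seen above (the `parse_summary` helper and its regexes are our own illustration, not part of the e2e tooling):

```python
import re

def parse_summary(log: str) -> dict:
    """Parse the 'Ran N of M Specs' and 'X Passed | ...' Ginkgo summary lines."""
    ran = re.search(r"Ran (\d+) of (\d+) Specs in ([\d.]+) seconds", log)
    tally = re.search(
        r"(\d+) Passed \| (\d+) Failed \| (\d+) Pending \| (\d+) Skipped", log
    )
    if not ran or not tally:
        raise ValueError("summary lines not found")
    return {
        "ran": int(ran.group(1)),
        "total": int(ran.group(2)),
        "seconds": float(ran.group(3)),
        "passed": int(tally.group(1)),
        "failed": int(tally.group(2)),
        "pending": int(tally.group(3)),
        "skipped": int(tally.group(4)),
    }

# The summary lines from this run, verbatim.
log = (
    "Ran 215 of 4412 Specs in 6898.379 seconds\n"
    "SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped\n"
)
s = parse_summary(log)
# Consistency checks: 215 + 0 + 0 + 4197 == 4412, and ran == passed + failed.
assert s["passed"] + s["failed"] + s["pending"] + s["skipped"] == s["total"]
assert s["ran"] == s["passed"] + s["failed"]
```

The same check applies to a failed run, since the tally line keeps its shape with a `FAIL!` prefix instead of `SUCCESS!`.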