I0316 12:55:39.546617 6 e2e.go:243] Starting e2e run "f7b00008-4236-4f7c-ae18-1b0ba274c8a3" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1584363338 - Will randomize all specs
Will run 215 of 4412 specs

Mar 16 12:55:39.731: INFO: >>> kubeConfig: /root/.kube/config
Mar 16 12:55:39.735: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 16 12:55:39.756: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 16 12:55:39.789: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 16 12:55:39.789: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 16 12:55:39.789: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 16 12:55:39.800: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 16 12:55:39.800: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 16 12:55:39.800: INFO: e2e test version: v1.15.10
Mar 16 12:55:39.802: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 12:55:39.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
Mar 16 12:55:39.872: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Mar 16 12:55:39.877: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Mar 16 12:55:48.933: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 12:55:48.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9484" for this suite.
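The test above exercises the basic pod lifecycle: create, observe through a watch, then delete with a grace period. A minimal way to replay the same flow by hand with kubectl, assuming a v1.15-era client; the pod name and image are illustrative, not taken from the test:

    # start a bare pod (the run-pod/v1 generator creates a Pod, not a controller)
    kubectl run pod-submit-remove --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine

    # in a second terminal, watch for the ADDED/MODIFIED/DELETED events the test asserts on
    kubectl get pods --watch

    # delete gracefully; the kubelet observes the termination notice before the object disappears
    kubectl delete pod pod-submit-remove --grace-period=30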
Mar 16 12:55:54.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 12:55:55.036: INFO: namespace pods-9484 deletion completed in 6.096241658s

• [SLOW TEST:15.235 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 12:55:55.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 12:56:55.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-202" for this suite.
Mar 16 12:57:17.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 12:57:17.213: INFO: namespace container-probe-202 deletion completed in 22.089469577s

• [SLOW TEST:82.176 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 12:57:17.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Mar 16 12:57:17.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6334'
Mar 16 12:57:19.996: INFO: stderr: ""
Mar 16 12:57:19.996: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Mar 16 12:57:21.000: INFO: Selector matched 1 pods for map[app:redis]
Mar 16 12:57:21.000: INFO: Found 0 / 1
Mar 16 12:57:22.000: INFO: Selector matched 1 pods for map[app:redis]
Mar 16 12:57:22.000: INFO: Found 0 / 1
Mar 16 12:57:23.000: INFO: Selector matched 1 pods for map[app:redis]
Mar 16 12:57:23.000: INFO: Found 0 / 1
Mar 16 12:57:24.000: INFO: Selector matched 1 pods for map[app:redis]
Mar 16 12:57:24.000: INFO: Found 1 / 1
Mar 16 12:57:24.000: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Mar 16 12:57:24.003: INFO: Selector matched 1 pods for map[app:redis]
Mar 16 12:57:24.003: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Mar 16 12:57:24.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-zprqd --namespace=kubectl-6334 -p {"metadata":{"annotations":{"x":"y"}}}'
Mar 16 12:57:24.128: INFO: stderr: ""
Mar 16 12:57:24.128: INFO: stdout: "pod/redis-master-zprqd patched\n"
STEP: checking annotations
Mar 16 12:57:24.146: INFO: Selector matched 1 pods for map[app:redis]
Mar 16 12:57:24.146: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 12:57:24.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6334" for this suite.
Mar 16 12:57:46.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 12:57:46.272: INFO: namespace kubectl-6334 deletion completed in 22.122295311s

• [SLOW TEST:29.059 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 12:57:46.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 12:57:50.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4140" for this suite.
Mar 16 12:58:28.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 12:58:28.490: INFO: namespace kubelet-test-4140 deletion completed in 38.092394646s

• [SLOW TEST:42.217 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 12:58:28.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Mar 16 12:58:33.124: INFO: Successfully updated pod "annotationupdatec4f85baf-dce7-41a6-afcc-513c6fecffd5"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 12:58:35.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8179" for this suite.
Mar 16 12:58:57.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 12:58:57.229: INFO: namespace projected-8179 deletion completed in 22.086621033s

• [SLOW TEST:28.739 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 12:58:57.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Mar 16 12:59:02.336: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-570 pod-service-account-66127a4d-7b54-43cc-a3e7-417bf04e5d98 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Mar 16 12:59:02.542: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-570 pod-service-account-66127a4d-7b54-43cc-a3e7-417bf04e5d98 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Mar 16 12:59:02.745: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-570 pod-service-account-66127a4d-7b54-43cc-a3e7-417bf04e5d98 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 12:59:02.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-570" for this suite.
Mar 16 12:59:08.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 12:59:09.053: INFO: namespace svcaccounts-570 deletion completed in 6.099305735s

• [SLOW TEST:11.823 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 12:59:09.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 12:59:42.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3633" for this suite.
Mar 16 12:59:48.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 12:59:48.852: INFO: namespace container-runtime-3633 deletion completed in 6.087661989s

• [SLOW TEST:39.798 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 12:59:48.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 16 12:59:48.910: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4c79d572-60a5-4c02-97c3-c57d458913be" in namespace "projected-8332" to be "success or failure"
Mar 16 12:59:48.932: INFO: Pod "downwardapi-volume-4c79d572-60a5-4c02-97c3-c57d458913be": Phase="Pending", Reason="", readiness=false. Elapsed: 21.894592ms
Mar 16 12:59:51.116: INFO: Pod "downwardapi-volume-4c79d572-60a5-4c02-97c3-c57d458913be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205489356s
Mar 16 12:59:53.120: INFO: Pod "downwardapi-volume-4c79d572-60a5-4c02-97c3-c57d458913be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.209391415s
STEP: Saw pod success
Mar 16 12:59:53.120: INFO: Pod "downwardapi-volume-4c79d572-60a5-4c02-97c3-c57d458913be" satisfied condition "success or failure"
Mar 16 12:59:53.123: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4c79d572-60a5-4c02-97c3-c57d458913be container client-container:
STEP: delete the pod
Mar 16 12:59:53.185: INFO: Waiting for pod downwardapi-volume-4c79d572-60a5-4c02-97c3-c57d458913be to disappear
Mar 16 12:59:53.189: INFO: Pod downwardapi-volume-4c79d572-60a5-4c02-97c3-c57d458913be no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 12:59:53.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8332" for this suite.
Mar 16 12:59:59.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 12:59:59.302: INFO: namespace projected-8332 deletion completed in 6.109551018s

• [SLOW TEST:10.450 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 12:59:59.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-3443, will wait for the garbage collector to delete the pods
Mar 16 13:00:03.444: INFO: Deleting Job.batch foo took: 6.565137ms
Mar 16 13:00:03.744: INFO: Terminating Job.batch foo pods took: 300.26345ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 13:00:42.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3443" for this suite.
Mar 16 13:00:48.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 13:00:48.349: INFO: namespace job-3443 deletion completed in 6.0978692s

• [SLOW TEST:49.047 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 13:00:48.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 16 13:00:48.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-4978'
Mar 16 13:00:49.009: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 16 13:00:49.009: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Mar 16 13:00:49.156: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-ptddp]
Mar 16 13:00:49.156: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-ptddp" in namespace "kubectl-4978" to be "running and ready"
Mar 16 13:00:49.160: INFO: Pod "e2e-test-nginx-rc-ptddp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101792ms
Mar 16 13:00:51.163: INFO: Pod "e2e-test-nginx-rc-ptddp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007707282s
Mar 16 13:00:53.168: INFO: Pod "e2e-test-nginx-rc-ptddp": Phase="Running", Reason="", readiness=true. Elapsed: 4.011788827s
Mar 16 13:00:53.168: INFO: Pod "e2e-test-nginx-rc-ptddp" satisfied condition "running and ready"
Mar 16 13:00:53.168: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-ptddp]
Mar 16 13:00:53.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-4978'
Mar 16 13:00:53.276: INFO: stderr: ""
Mar 16 13:00:53.276: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Mar 16 13:00:53.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-4978'
Mar 16 13:00:53.454: INFO: stderr: ""
Mar 16 13:00:53.454: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 13:00:53.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4978" for this suite.
Mar 16 13:00:59.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 13:00:59.845: INFO: namespace kubectl-4978 deletion completed in 6.33672212s

• [SLOW TEST:11.496 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 13:00:59.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-419a1e62-4d86-4394-813a-e01942ecfa27
STEP: Creating a pod to test consume secrets
Mar 16 13:00:59.923: INFO: Waiting up to 5m0s for pod "pod-secrets-bcfa9e06-6f0a-4cab-9008-f9b9f7f465e9" in namespace "secrets-8562" to be "success or failure"
Mar 16 13:00:59.927: INFO: Pod "pod-secrets-bcfa9e06-6f0a-4cab-9008-f9b9f7f465e9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.546763ms
Mar 16 13:01:01.930: INFO: Pod "pod-secrets-bcfa9e06-6f0a-4cab-9008-f9b9f7f465e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006921528s
Mar 16 13:01:03.935: INFO: Pod "pod-secrets-bcfa9e06-6f0a-4cab-9008-f9b9f7f465e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011037783s
STEP: Saw pod success
Mar 16 13:01:03.935: INFO: Pod "pod-secrets-bcfa9e06-6f0a-4cab-9008-f9b9f7f465e9" satisfied condition "success or failure"
Mar 16 13:01:03.938: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-bcfa9e06-6f0a-4cab-9008-f9b9f7f465e9 container secret-volume-test:
STEP: delete the pod
Mar 16 13:01:03.992: INFO: Waiting for pod pod-secrets-bcfa9e06-6f0a-4cab-9008-f9b9f7f465e9 to disappear
Mar 16 13:01:03.998: INFO: Pod pod-secrets-bcfa9e06-6f0a-4cab-9008-f9b9f7f465e9 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 13:01:03.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8562" for this suite.
Mar 16 13:01:10.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 13:01:10.098: INFO: namespace secrets-8562 deletion completed in 6.096009751s

• [SLOW TEST:10.253 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 13:01:10.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-be775b43-9d37-4589-81e4-8cf791e01af0 in namespace container-probe-9925
Mar 16 13:01:14.188: INFO: Started pod liveness-be775b43-9d37-4589-81e4-8cf791e01af0 in namespace container-probe-9925
STEP: checking the pod's current state and verifying that restartCount is present
Mar 16 13:01:14.191: INFO: Initial restart count of pod liveness-be775b43-9d37-4589-81e4-8cf791e01af0 is 0
Mar 16 13:01:32.232: INFO: Restart count of pod container-probe-9925/liveness-be775b43-9d37-4589-81e4-8cf791e01af0 is now 1 (18.041332342s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 13:01:32.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9925" for this suite.
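The restart the log records (restartCount going from 0 to 1 after about 18 seconds) is the kubelet reacting to a failing HTTP liveness probe. A sketch of such a pod, assuming the image the upstream docs use for this scenario, whose /healthz endpoint starts returning 500 after ten seconds:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-http
    spec:
      containers:
      - name: liveness
        image: k8s.gcr.io/liveness
        args: ["/server"]
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 3
    EOF

    # once the endpoint starts failing, the restart count climbs
    kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}'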
Mar 16 13:01:38.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 13:01:38.361: INFO: namespace container-probe-9925 deletion completed in 6.098057644s

• [SLOW TEST:28.263 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 13:01:38.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Mar 16 13:01:38.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 13:01:42.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-894" for this suite.
Mar 16 13:02:23.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 13:02:23.098: INFO: namespace pods-894 deletion completed in 40.539042743s

• [SLOW TEST:44.736 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 13:02:23.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 16 13:02:23.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8665'
Mar 16 13:02:23.638: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 16 13:02:23.638: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Mar 16 13:02:25.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8665'
Mar 16 13:02:25.787: INFO: stderr: ""
Mar 16 13:02:25.787: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 13:02:25.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8665" for this suite.
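Both deprecation warnings in this log come from the old generator machinery: in v1.15 a bare `kubectl run` defaults to --generator=deployment/apps.v1, while --generator=run/v1 (used in the earlier test) produces a ReplicationController, and both print the warning seen above. The same commands reproduced outside the test harness; `mypod` and `mydep` are illustrative names:

    # creates deployment.apps/e2e-test-nginx-deployment via the deprecated default generator
    kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine

    # the non-deprecated spellings the warning points to
    kubectl run mypod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
    kubectl create deployment mydep --image=docker.io/library/nginx:1.14-alpine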
Mar 16 13:02:47.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 13:02:47.887: INFO: namespace kubectl-8665 deletion completed in 22.096951685s

• [SLOW TEST:24.789 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 13:02:47.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Mar 16 13:02:55.997: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 16 13:02:56.005: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 16 13:02:58.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 16 13:02:58.010: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 16 13:03:00.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 16 13:03:00.010: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 16 13:03:02.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 16 13:03:02.010: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 16 13:03:04.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 16 13:03:04.010: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 16 13:03:06.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 16 13:03:06.010: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 16 13:03:08.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 16 13:03:08.010: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 16 13:03:10.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 16 13:03:10.010: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 16 13:03:12.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 16 13:03:12.010: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 16 13:03:14.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 16 13:03:14.009: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 16 13:03:16.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 16 13:03:16.009: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 16 13:03:18.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 16 13:03:18.009: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 16 13:03:20.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 16 13:03:20.010: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 16 13:03:22.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 16 13:03:22.016: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 13:03:22.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6572" for this suite.
Mar 16 13:03:44.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 13:03:44.189: INFO: namespace container-lifecycle-hook-6572 deletion completed in 22.159505017s

• [SLOW TEST:56.300 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 13:03:44.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-3efcd73d-6aa6-47f8-ab5e-de8355f721dd
STEP: Creating a pod to test consume secrets
Mar 16 13:03:44.304: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6db779f3-6124-4a0a-b1bb-43ade8fdb1e8" in namespace "projected-7028" to be "success or failure"
Mar 16 13:03:44.307: INFO: Pod "pod-projected-secrets-6db779f3-6124-4a0a-b1bb-43ade8fdb1e8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.252474ms
Mar 16 13:03:46.311: INFO: Pod "pod-projected-secrets-6db779f3-6124-4a0a-b1bb-43ade8fdb1e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006877417s
Mar 16 13:03:48.316: INFO: Pod "pod-projected-secrets-6db779f3-6124-4a0a-b1bb-43ade8fdb1e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011780332s
STEP: Saw pod success
Mar 16 13:03:48.316: INFO: Pod "pod-projected-secrets-6db779f3-6124-4a0a-b1bb-43ade8fdb1e8" satisfied condition "success or failure"
Mar 16 13:03:48.319: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-6db779f3-6124-4a0a-b1bb-43ade8fdb1e8 container projected-secret-volume-test:
STEP: delete the pod
Mar 16 13:03:48.350: INFO: Waiting for pod pod-projected-secrets-6db779f3-6124-4a0a-b1bb-43ade8fdb1e8 to disappear
Mar 16 13:03:48.355: INFO: Pod pod-projected-secrets-6db779f3-6124-4a0a-b1bb-43ade8fdb1e8 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 13:03:48.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7028" for this suite.
Mar 16 13:03:54.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 13:03:54.441: INFO: namespace projected-7028 deletion completed in 6.082693536s

• [SLOW TEST:10.252 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 13:03:54.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Mar 16 13:03:54.490: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Mar 16 13:03:56.537: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 13:03:57.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2588" for this suite.
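The sequence the RC test walks through (a quota of two pods, an RC asking for more, a ReplicaFailure condition that clears once the RC is scaled down) can be replayed with kubectl. The manifest name below is hypothetical, and the jsonpath expression is one way to read the condition:

    kubectl create quota condition-test --hard=pods=2
    kubectl create -f rc-condition-test.yaml   # hypothetical RC manifest with replicas: 3
    kubectl get rc condition-test -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].status}'
    kubectl scale rc condition-test --replicas=2   # the failure condition is then removed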
Mar 16 13:04:03.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 13:04:03.661: INFO: namespace replication-controller-2588 deletion completed in 6.101531286s

• [SLOW TEST:9.220 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 13:04:03.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 16 13:04:03.765: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e0797706-4b9d-4ba7-93b1-369bd4a423f4" in namespace "downward-api-872" to be "success or failure"
Mar 16 13:04:03.782: INFO: Pod "downwardapi-volume-e0797706-4b9d-4ba7-93b1-369bd4a423f4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.925188ms
Mar 16 13:04:05.786: INFO: Pod "downwardapi-volume-e0797706-4b9d-4ba7-93b1-369bd4a423f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021607809s
Mar 16 13:04:07.792: INFO: Pod "downwardapi-volume-e0797706-4b9d-4ba7-93b1-369bd4a423f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026983906s
STEP: Saw pod success
Mar 16 13:04:07.792: INFO: Pod "downwardapi-volume-e0797706-4b9d-4ba7-93b1-369bd4a423f4" satisfied condition "success or failure"
Mar 16 13:04:07.796: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-e0797706-4b9d-4ba7-93b1-369bd4a423f4 container client-container:
STEP: delete the pod
Mar 16 13:04:07.839: INFO: Waiting for pod downwardapi-volume-e0797706-4b9d-4ba7-93b1-369bd4a423f4 to disappear
Mar 16 13:04:07.853: INFO: Pod downwardapi-volume-e0797706-4b9d-4ba7-93b1-369bd4a423f4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 13:04:07.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-872" for this suite.
Mar 16 13:04:13.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 13:04:13.943: INFO: namespace downward-api-872 deletion completed in 6.086838838s

• [SLOW TEST:10.281 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 13:04:13.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 16 13:04:14.051: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e98e74c9-b80c-41a8-8a0f-52fdc6a5e673" in namespace "downward-api-1451" to be "success or failure"
Mar 16 13:04:14.069: INFO: Pod "downwardapi-volume-e98e74c9-b80c-41a8-8a0f-52fdc6a5e673": Phase="Pending", Reason="", readiness=false. Elapsed: 18.016102ms
Mar 16 13:04:16.074: INFO: Pod "downwardapi-volume-e98e74c9-b80c-41a8-8a0f-52fdc6a5e673": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022619692s
Mar 16 13:04:18.078: INFO: Pod "downwardapi-volume-e98e74c9-b80c-41a8-8a0f-52fdc6a5e673": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026715997s
STEP: Saw pod success
Mar 16 13:04:18.078: INFO: Pod "downwardapi-volume-e98e74c9-b80c-41a8-8a0f-52fdc6a5e673" satisfied condition "success or failure"
Mar 16 13:04:18.081: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e98e74c9-b80c-41a8-8a0f-52fdc6a5e673 container client-container:
STEP: delete the pod
Mar 16 13:04:18.101: INFO: Waiting for pod downwardapi-volume-e98e74c9-b80c-41a8-8a0f-52fdc6a5e673 to disappear
Mar 16 13:04:18.129: INFO: Pod downwardapi-volume-e98e74c9-b80c-41a8-8a0f-52fdc6a5e673 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 13:04:18.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1451" for this suite.
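Both downward API volume tests follow the same shape: a volume item with a resourceFieldRef, and a container that cats the projected file. A minimal sketch for the memory-limit case, with illustrative names and sizes; when no limit is set, as in the earlier projected test, the file falls back to the node's allocatable memory:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-memory
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
        resources:
          limits:
            memory: "64Mi"
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
    EOF

    kubectl logs downward-memory   # prints the limit in bytes, e.g. 67108864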
Mar 16 13:04:24.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 13:04:24.314: INFO: namespace downward-api-1451 deletion completed in 6.181953825s

• [SLOW TEST:10.371 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 13:04:24.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Mar 16 13:04:24.372: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Mar 16 13:04:24.402: INFO: Pod name sample-pod: Found 0 pods out of 1
Mar 16 13:04:29.407: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Mar 16 13:04:29.407: INFO: Creating deployment "test-rolling-update-deployment"
Mar 16 13:04:29.411: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Mar 16 13:04:29.417: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Mar 16 13:04:31.424: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Mar 16 13:04:31.427: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719960669, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719960669, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719960669, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719960669, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 16 13:04:33.431: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Mar 16 13:04:33.439: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-816,SelfLink:/apis/apps/v1/namespaces/deployment-816/deployments/test-rolling-update-deployment,UID:cac157c3-3694-4bf2-9a9b-2bac3a2b0ae2,ResourceVersion:155131,Generation:1,CreationTimestamp:2020-03-16 13:04:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-16 13:04:29 +0000 UTC 2020-03-16 13:04:29 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-16 13:04:32 +0000 UTC 2020-03-16 13:04:29 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
Mar 16 13:04:33.442: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-816,SelfLink:/apis/apps/v1/namespaces/deployment-816/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:fa1c0ee7-4a28-4286-98a0-5274c23e46c1,ResourceVersion:155120,Generation:1,CreationTimestamp:2020-03-16 13:04:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment cac157c3-3694-4bf2-9a9b-2bac3a2b0ae2 0xc002d6dbb7 0xc002d6dbb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 16 13:04:33.442: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 16 13:04:33.442: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-816,SelfLink:/apis/apps/v1/namespaces/deployment-816/replicasets/test-rolling-update-controller,UID:8be1991b-632d-48e9-ae6f-d4889d8897f3,ResourceVersion:155129,Generation:2,CreationTimestamp:2020-03-16 13:04:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 
2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment cac157c3-3694-4bf2-9a9b-2bac3a2b0ae2 0xc002d6dae7 0xc002d6dae8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 16 13:04:33.444: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-xlmhv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-xlmhv,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-816,SelfLink:/api/v1/namespaces/deployment-816/pods/test-rolling-update-deployment-79f6b9d75c-xlmhv,UID:0e3d334c-636f-40ea-bbf6-74b1c5d97296,ResourceVersion:155119,Generation:0,CreationTimestamp:2020-03-16 13:04:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c fa1c0ee7-4a28-4286-98a0-5274c23e46c1 0xc002eb2487 0xc002eb2488}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s4tfc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s4tfc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-s4tfc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002eb2500} {node.kubernetes.io/unreachable Exists NoExecute 0xc002eb2520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:04:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:04:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:04:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.138,StartTime:2020-03-16 13:04:29 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-16 13:04:31 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://e38be1be3776e1972577f86aca0f156f1ec2aad9d41f7588142fce125edf11dc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:04:33.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-816" for this suite. 
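For reference, a minimal sketch of a Deployment using the RollingUpdate strategy this test exercises. All names are illustrative (not taken from the test source); the 25% values are the apps/v1 defaults that appear in the dump above.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-update-demo          # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%            # default: at most a quarter of desired pods may be down
      maxSurge: 25%                  # default: at most a quarter extra pods may run during the update
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0

Changing the pod template (for example with kubectl set image) creates a new ReplicaSet and scales the old one to zero, which is the old/new ReplicaSet pair the log above verifies.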
Mar 16 13:04:39.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:04:39.574: INFO: namespace deployment-816 deletion completed in 6.12760384s • [SLOW TEST:15.260 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:04:39.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Mar 16 13:04:39.611: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 16 13:04:39.617: INFO: Waiting for terminating namespaces to be deleted... Mar 16 13:04:39.619: INFO: Logging pods the kubelet thinks are on node iruya-worker before test Mar 16 13:04:39.623: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded) Mar 16 13:04:39.623: INFO: Container kube-proxy ready: true, restart count 0 Mar 16 13:04:39.623: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded) Mar 16 13:04:39.623: INFO: Container kindnet-cni ready: true, restart count 0 Mar 16 13:04:39.623: INFO: Logging pods the kubelet thinks are on node iruya-worker2 before test Mar 16 13:04:39.628: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container status recorded) Mar 16 13:04:39.628: INFO: Container coredns ready: true, restart count 0 Mar 16 13:04:39.628: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container status recorded) Mar 16 13:04:39.628: INFO: Container coredns ready: true, restart count 0 Mar 16 13:04:39.628: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container status recorded) Mar 16 13:04:39.628: INFO: Container kube-proxy ready: true, restart count 0 Mar 16 13:04:39.628: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container status recorded) Mar 16 13:04:39.628: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fcca8de20f3566], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
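The FailedScheduling event above is the expected outcome for a pod whose nodeSelector matches no node label. A minimal sketch that reproduces it (the label key/value are illustrative; the test generates a random, deliberately unmatchable label):

apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod               # matches the pod name in the event above
spec:
  nodeSelector:
    disktype: no-such-label          # no node carries this label, so 0/3 nodes qualify
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1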
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:04:40.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5352" for this suite. Mar 16 13:04:46.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:04:46.832: INFO: namespace sched-pred-5352 deletion completed in 6.091095781s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.257 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:04:46.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Mar 16 13:04:46.899: INFO: Waiting up to 5m0s for pod "var-expansion-c8c316d8-56de-45e2-9712-874b7548bdf2" in namespace "var-expansion-8978" to be "success or failure" Mar 16 13:04:46.902: INFO: Pod "var-expansion-c8c316d8-56de-45e2-9712-874b7548bdf2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.44509ms Mar 16 13:04:48.906: INFO: Pod "var-expansion-c8c316d8-56de-45e2-9712-874b7548bdf2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006898353s Mar 16 13:04:50.910: INFO: Pod "var-expansion-c8c316d8-56de-45e2-9712-874b7548bdf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010738702s STEP: Saw pod success Mar 16 13:04:50.910: INFO: Pod "var-expansion-c8c316d8-56de-45e2-9712-874b7548bdf2" satisfied condition "success or failure" Mar 16 13:04:50.913: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-c8c316d8-56de-45e2-9712-874b7548bdf2 container dapi-container: STEP: delete the pod Mar 16 13:04:50.942: INFO: Waiting for pod var-expansion-c8c316d8-56de-45e2-9712-874b7548bdf2 to disappear Mar 16 13:04:50.953: INFO: Pod var-expansion-c8c316d8-56de-45e2-9712-874b7548bdf2 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:04:50.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8978" for this suite. 
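A minimal sketch of the kind of pod this test creates, with illustrative names and values: the kubelet expands $(VAR) references in command and args from the container's environment before starting the process.

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo           # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: TEST_VAR
      value: test-value
    # $(TEST_VAR) is substituted by Kubernetes, not by the shell
    command: ["sh", "-c", "echo $(TEST_VAR)"]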
Mar 16 13:04:56.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:04:57.051: INFO: namespace var-expansion-8978 deletion completed in 6.093330006s • [SLOW TEST:10.218 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:04:57.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 16 13:04:57.136: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0b241cb3-b395-4543-b3e7-245482b881f9" in namespace "downward-api-4078" to be "success or failure" Mar 16 13:04:57.139: INFO: Pod "downwardapi-volume-0b241cb3-b395-4543-b3e7-245482b881f9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.096639ms Mar 16 13:04:59.143: INFO: Pod "downwardapi-volume-0b241cb3-b395-4543-b3e7-245482b881f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007497843s Mar 16 13:05:01.147: INFO: Pod "downwardapi-volume-0b241cb3-b395-4543-b3e7-245482b881f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011544914s STEP: Saw pod success Mar 16 13:05:01.147: INFO: Pod "downwardapi-volume-0b241cb3-b395-4543-b3e7-245482b881f9" satisfied condition "success or failure" Mar 16 13:05:01.151: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-0b241cb3-b395-4543-b3e7-245482b881f9 container client-container: STEP: delete the pod Mar 16 13:05:01.183: INFO: Waiting for pod downwardapi-volume-0b241cb3-b395-4543-b3e7-245482b881f9 to disappear Mar 16 13:05:01.187: INFO: Pod downwardapi-volume-0b241cb3-b395-4543-b3e7-245482b881f9 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:05:01.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4078" for this suite. 
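A minimal sketch of the downward API volume shape behind this test (names are illustrative): resourceFieldRef projects the container's CPU request into a file, and the divisor controls the unit.

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo      # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                    # the file below will contain "250"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m                # report the request in millicores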
Mar 16 13:05:07.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:05:07.278: INFO: namespace downward-api-4078 deletion completed in 6.084328453s • [SLOW TEST:10.227 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:05:07.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-1178 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Mar 16 13:05:07.355: INFO: Found 0 stateful pods, waiting for 3 Mar 16 13:05:17.360: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 16 13:05:17.360: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 16 13:05:17.360: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Mar 16 13:05:27.360: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 16 13:05:27.361: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 16 13:05:27.361: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 16 13:05:27.386: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 16 13:05:37.437: INFO: Updating stateful set ss2 Mar 16 13:05:37.475: INFO: Waiting for Pod statefulset-1178/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Mar 16 13:05:47.838: INFO: Found 2 stateful pods, waiting for 3 Mar 16 13:05:57.843: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 16 13:05:57.843: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 16 13:05:57.843: INFO: Waiting for pod ss2-2 to enter Running - 
Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 16 13:05:57.867: INFO: Updating stateful set ss2 Mar 16 13:05:57.880: INFO: Waiting for Pod statefulset-1178/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 16 13:06:07.904: INFO: Updating stateful set ss2 Mar 16 13:06:07.969: INFO: Waiting for StatefulSet statefulset-1178/ss2 to complete update Mar 16 13:06:07.969: INFO: Waiting for Pod statefulset-1178/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 16 13:06:17.978: INFO: Deleting all statefulsets in ns statefulset-1178 Mar 16 13:06:17.980: INFO: Scaling statefulset ss2 to 0 Mar 16 13:06:28.007: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 13:06:28.010: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:06:28.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1178" for this suite. Mar 16 13:06:34.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:06:34.137: INFO: namespace statefulset-1178 deletion completed in 6.093167315s • [SLOW TEST:86.858 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:06:34.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 16 13:06:34.192: INFO: Creating deployment "test-recreate-deployment" Mar 16 13:06:34.204: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 Mar 16 13:06:34.256: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 16 13:06:36.264: INFO: Waiting for deployment "test-recreate-deployment" to complete Mar 16 13:06:36.267: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False",
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719960794, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719960794, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719960794, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719960794, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:06:38.271: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 16 13:06:38.277: INFO: Updating deployment test-recreate-deployment Mar 16 13:06:38.277: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 16 13:06:38.601: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-1960,SelfLink:/apis/apps/v1/namespaces/deployment-1960/deployments/test-recreate-deployment,UID:99831749-4370-40b9-9042-05f6b2b329a7,ResourceVersion:155762,Generation:2,CreationTimestamp:2020-03-16 13:06:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-03-16 13:06:38 +0000 UTC 2020-03-16 13:06:38 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-03-16 13:06:38 +0000 UTC 2020-03-16 13:06:34 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Mar 16 13:06:38.739: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-1960,SelfLink:/apis/apps/v1/namespaces/deployment-1960/replicasets/test-recreate-deployment-5c8c9cc69d,UID:b8bc93ec-6796-46cb-be26-113e3cad6660,ResourceVersion:155759,Generation:1,CreationTimestamp:2020-03-16 13:06:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 99831749-4370-40b9-9042-05f6b2b329a7 0xc002048717 0xc002048718}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 16 13:06:38.739: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 16 13:06:38.739: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-1960,SelfLink:/apis/apps/v1/namespaces/deployment-1960/replicasets/test-recreate-deployment-6df85df6b9,UID:fffcb804-879c-4767-9ec0-ae66c9833380,ResourceVersion:155751,Generation:2,CreationTimestamp:2020-03-16 13:06:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 99831749-4370-40b9-9042-05f6b2b329a7 0xc0020487e7 0xc0020487e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 16 13:06:38.761: INFO: Pod "test-recreate-deployment-5c8c9cc69d-xdsvv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-xdsvv,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-1960,SelfLink:/api/v1/namespaces/deployment-1960/pods/test-recreate-deployment-5c8c9cc69d-xdsvv,UID:1b51bb80-cbc1-443e-a03a-9c651c8603e4,ResourceVersion:155763,Generation:0,CreationTimestamp:2020-03-16 13:06:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d b8bc93ec-6796-46cb-be26-113e3cad6660 0xc002626eb7 0xc002626eb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qxt4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qxt4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9qxt4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002626f30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002626f50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:06:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:06:38 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:06:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:06:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-16 13:06:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:06:38.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1960" for this suite. Mar 16 13:06:44.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:06:44.911: INFO: namespace deployment-1960 deletion completed in 6.147034892s • [SLOW TEST:10.774 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:06:44.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:06:49.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9348" for this suite. 
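The "should not conflict" check mounts more than one wrapped volume type in a single pod (the steps above clean up a secret, a configmap, and the pod). A minimal sketch of that shape, assuming the referenced secret and configmap already exist; all names are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-wrapper-demo        # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapper-secret     # assumed to exist in the namespace
  - name: configmap-volume
    configMap:
      name: wrapper-configmap        # assumed to exist in the namespace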
Mar 16 13:06:55.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:06:55.191: INFO: namespace emptydir-wrapper-9348 deletion completed in 6.09291325s • [SLOW TEST:10.279 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:06:55.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 16 13:06:55.460: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a316ead-4155-496a-9579-350678ac562c" in namespace "projected-8383" to be "success or failure" Mar 16 13:06:55.479: INFO: Pod "downwardapi-volume-8a316ead-4155-496a-9579-350678ac562c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.148736ms Mar 16 13:06:57.488: INFO: Pod "downwardapi-volume-8a316ead-4155-496a-9579-350678ac562c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028642247s Mar 16 13:06:59.510: INFO: Pod "downwardapi-volume-8a316ead-4155-496a-9579-350678ac562c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050492281s STEP: Saw pod success Mar 16 13:06:59.510: INFO: Pod "downwardapi-volume-8a316ead-4155-496a-9579-350678ac562c" satisfied condition "success or failure" Mar 16 13:06:59.513: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8a316ead-4155-496a-9579-350678ac562c container client-container: STEP: delete the pod Mar 16 13:06:59.528: INFO: Waiting for pod downwardapi-volume-8a316ead-4155-496a-9579-350678ac562c to disappear Mar 16 13:06:59.533: INFO: Pod downwardapi-volume-8a316ead-4155-496a-9579-350678ac562c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:06:59.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8383" for this suite. 
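A minimal sketch of a projected downward API volume with defaultMode, which is what this test asserts on (names and the 0400 mode are illustrative): every file created by the projection gets the given permission bits.

apiVersion: v1
kind: Pod
metadata:
  name: projected-defaultmode-demo   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400              # mode applied to files the projection creates
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name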
Mar 16 13:07:05.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:07:05.644: INFO: namespace projected-8383 deletion completed in 6.104163121s • [SLOW TEST:10.453 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:07:05.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Mar 16 13:07:05.688: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:07:05.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3036" for this suite. 
Mar 16 13:07:11.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:07:11.867: INFO: namespace kubectl-3036 deletion completed in 6.093111593s • [SLOW TEST:6.222 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:07:11.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Mar 16 13:07:11.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3156' Mar 16 13:07:12.221: INFO: stderr: "" Mar 16 13:07:12.221: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 16 13:07:12.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3156' Mar 16 13:07:12.324: INFO: stderr: "" Mar 16 13:07:12.324: INFO: stdout: "update-demo-nautilus-mcblm update-demo-nautilus-qwlzm " Mar 16 13:07:12.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mcblm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3156' Mar 16 13:07:12.408: INFO: stderr: "" Mar 16 13:07:12.408: INFO: stdout: "" Mar 16 13:07:12.408: INFO: update-demo-nautilus-mcblm is created but not running Mar 16 13:07:17.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3156' Mar 16 13:07:17.501: INFO: stderr: "" Mar 16 13:07:17.501: INFO: stdout: "update-demo-nautilus-mcblm update-demo-nautilus-qwlzm " Mar 16 13:07:17.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mcblm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3156' Mar 16 13:07:17.604: INFO: stderr: "" Mar 16 13:07:17.604: INFO: stdout: "true" Mar 16 13:07:17.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mcblm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3156' Mar 16 13:07:17.773: INFO: stderr: "" Mar 16 13:07:17.773: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 16 13:07:17.773: INFO: validating pod update-demo-nautilus-mcblm Mar 16 13:07:17.776: INFO: got data: { "image": "nautilus.jpg" } Mar 16 13:07:17.776: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 16 13:07:17.776: INFO: update-demo-nautilus-mcblm is verified up and running Mar 16 13:07:17.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qwlzm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3156' Mar 16 13:07:17.864: INFO: stderr: "" Mar 16 13:07:17.864: INFO: stdout: "true" Mar 16 13:07:17.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qwlzm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3156' Mar 16 13:07:17.981: INFO: stderr: "" Mar 16 13:07:17.981: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 16 13:07:17.981: INFO: validating pod update-demo-nautilus-qwlzm Mar 16 13:07:17.985: INFO: got data: { "image": "nautilus.jpg" } Mar 16 13:07:17.985: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 16 13:07:17.985: INFO: update-demo-nautilus-qwlzm is verified up and running STEP: rolling-update to new replication controller Mar 16 13:07:17.987: INFO: scanned /root for discovery docs: Mar 16 13:07:17.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3156' Mar 16 13:07:43.206: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 16 13:07:43.206: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 16 13:07:43.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3156' Mar 16 13:07:43.319: INFO: stderr: "" Mar 16 13:07:43.319: INFO: stdout: "update-demo-kitten-66nxp update-demo-kitten-txw2b " Mar 16 13:07:43.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-66nxp -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3156' Mar 16 13:07:43.422: INFO: stderr: "" Mar 16 13:07:43.422: INFO: stdout: "true" Mar 16 13:07:43.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-66nxp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3156' Mar 16 13:07:43.521: INFO: stderr: "" Mar 16 13:07:43.522: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 16 13:07:43.522: INFO: validating pod update-demo-kitten-66nxp Mar 16 13:07:43.526: INFO: got data: { "image": "kitten.jpg" } Mar 16 13:07:43.526: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 16 13:07:43.526: INFO: update-demo-kitten-66nxp is verified up and running Mar 16 13:07:43.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-txw2b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3156' Mar 16 13:07:43.621: INFO: stderr: "" Mar 16 13:07:43.621: INFO: stdout: "true" Mar 16 13:07:43.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-txw2b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3156' Mar 16 13:07:43.715: INFO: stderr: "" Mar 16 13:07:43.715: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 16 13:07:43.715: INFO: validating pod update-demo-kitten-txw2b Mar 16 13:07:43.719: INFO: got data: { "image": "kitten.jpg" } Mar 16 13:07:43.719: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 16 13:07:43.719: INFO: update-demo-kitten-txw2b is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:07:43.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3156" for this suite. 
Mar 16 13:08:05.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:08:05.816: INFO: namespace kubectl-3156 deletion completed in 22.094077206s • [SLOW TEST:53.948 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:08:05.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 16 13:08:05.885: INFO: Waiting up to 5m0s for pod "pod-be49a4d1-8746-4ce1-8648-cf46727a3287" in namespace "emptydir-730" to be "success or failure" Mar 16 13:08:05.888: INFO: Pod "pod-be49a4d1-8746-4ce1-8648-cf46727a3287": Phase="Pending", Reason="", readiness=false. Elapsed: 3.712203ms Mar 16 13:08:07.893: INFO: Pod "pod-be49a4d1-8746-4ce1-8648-cf46727a3287": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008139875s Mar 16 13:08:09.898: INFO: Pod "pod-be49a4d1-8746-4ce1-8648-cf46727a3287": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013288813s STEP: Saw pod success Mar 16 13:08:09.898: INFO: Pod "pod-be49a4d1-8746-4ce1-8648-cf46727a3287" satisfied condition "success or failure" Mar 16 13:08:09.901: INFO: Trying to get logs from node iruya-worker pod pod-be49a4d1-8746-4ce1-8648-cf46727a3287 container test-container: STEP: delete the pod Mar 16 13:08:09.947: INFO: Waiting for pod pod-be49a4d1-8746-4ce1-8648-cf46727a3287 to disappear Mar 16 13:08:09.966: INFO: Pod pod-be49a4d1-8746-4ce1-8648-cf46727a3287 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:08:09.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-730" for this suite. 
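A minimal sketch of the emptyDir shape behind this test, with illustrative names: the (root,0666,default) triple in the title means the file is written as root, with mode 0666, on the default (node-disk) medium.

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo           # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    securityContext:
      runAsUser: 0                   # "root" in the test title
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                     # "default" medium, i.e. node disk rather than memory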
Mar 16 13:08:15.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:08:16.062: INFO: namespace emptydir-730 deletion completed in 6.092919878s • [SLOW TEST:10.245 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:08:16.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-3684/secret-test-94e69dc3-eb9e-4a75-bd33-9697cc77a3a0 STEP: Creating a pod to test consume secrets Mar 16 13:08:16.143: INFO: Waiting up to 5m0s for pod "pod-configmaps-e5102f86-44d9-464b-9015-c8ce6658634e" in namespace "secrets-3684" to be "success or failure" Mar 16 13:08:16.146: INFO: Pod "pod-configmaps-e5102f86-44d9-464b-9015-c8ce6658634e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.435715ms Mar 16 13:08:18.149: INFO: Pod "pod-configmaps-e5102f86-44d9-464b-9015-c8ce6658634e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00673057s Mar 16 13:08:20.153: INFO: Pod "pod-configmaps-e5102f86-44d9-464b-9015-c8ce6658634e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010206863s STEP: Saw pod success Mar 16 13:08:20.153: INFO: Pod "pod-configmaps-e5102f86-44d9-464b-9015-c8ce6658634e" satisfied condition "success or failure" Mar 16 13:08:20.155: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-e5102f86-44d9-464b-9015-c8ce6658634e container env-test: STEP: delete the pod Mar 16 13:08:20.171: INFO: Waiting for pod pod-configmaps-e5102f86-44d9-464b-9015-c8ce6658634e to disappear Mar 16 13:08:20.206: INFO: Pod pod-configmaps-e5102f86-44d9-464b-9015-c8ce6658634e no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:08:20.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3684" for this suite. 
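The secret/pod pair behind "consumable via the environment" follows the standard secretKeyRef pattern; a minimal sketch with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Secret
  metadata:
    name: demo-secret
  stringData:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-env-demo
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox:1.31
      command: ["sh", "-c", "echo $SECRET_DATA"]   # prints value-1
      env:
      - name: SECRET_DATA
        valueFrom:
          secretKeyRef:
            name: demo-secret
            key: data-1
  EOF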
Mar 16 13:08:26.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:08:26.304: INFO: namespace secrets-3684 deletion completed in 6.094829785s • [SLOW TEST:10.242 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:08:26.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 16 13:08:29.435: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:08:29.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8198" for this suite. 
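Two spec fields carry this test: a container-level terminationMessagePath pointing somewhere non-default, and a securityContext that drops root. A hedged sketch (name, path and UID are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-msg-demo
  spec:
    restartPolicy: Never
    containers:
    - name: writer
      image: busybox:1.31
      # Write the message to the custom path, then exit 0.
      command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
      terminationMessagePath: /dev/termination-custom-log
      securityContext:
        runAsUser: 1000             # non-root, as the test name requires
  EOF
  # Once terminated, the message surfaces in the container status:
  kubectl get pod termination-msg-demo \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'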
Mar 16 13:08:35.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:08:35.591: INFO: namespace container-runtime-8198 deletion completed in 6.128233526s • [SLOW TEST:9.287 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:08:35.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 16 13:08:40.195: INFO: Successfully updated pod "pod-update-activedeadlineseconds-63905d69-8b40-4699-ba1a-b9eb7ecda666" Mar 16 13:08:40.195: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-63905d69-8b40-4699-ba1a-b9eb7ecda666" in namespace "pods-173" to be "terminated due to deadline exceeded" Mar 16 13:08:40.427: INFO: Pod "pod-update-activedeadlineseconds-63905d69-8b40-4699-ba1a-b9eb7ecda666": Phase="Running", Reason="", readiness=true. Elapsed: 231.910664ms Mar 16 13:08:42.431: INFO: Pod "pod-update-activedeadlineseconds-63905d69-8b40-4699-ba1a-b9eb7ecda666": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.236121572s Mar 16 13:08:42.431: INFO: Pod "pod-update-activedeadlineseconds-63905d69-8b40-4699-ba1a-b9eb7ecda666" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:08:42.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-173" for this suite. 
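activeDeadlineSeconds is one of the few pod-spec fields that can be changed on a live pod (it may be added or shortened, not extended). The update step above amounts to a patch like the following, with a placeholder pod name:

  # Impose a short deadline on a running pod; the kubelet then kills it
  # and the phase flips to Failed with reason DeadlineExceeded.
  kubectl patch pod <pod-name> --namespace=pods-173 --type=merge \
    -p '{"spec":{"activeDeadlineSeconds":5}}'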
Mar 16 13:08:48.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:08:48.548: INFO: namespace pods-173 deletion completed in 6.113697324s • [SLOW TEST:12.957 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:08:48.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Mar 16 13:08:48.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Mar 16 13:08:49.031: INFO: stderr: "" Mar 16 13:08:49.031: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:08:49.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-763" for this suite. 
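The assertion reduces to a whole-line match on the kubectl api-versions output; by hand:

  # Exit status 0 iff the core "v1" group/version is advertised.
  kubectl api-versions | grep -x 'v1'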
Mar 16 13:08:55.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:08:55.131: INFO: namespace kubectl-763 deletion completed in 6.095214017s • [SLOW TEST:6.582 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:08:55.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 16 13:08:55.291: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"e0bced63-4d91-49ce-b990-cdf70fc8f1e9", Controller:(*bool)(0xc00301ae7a), BlockOwnerDeletion:(*bool)(0xc00301ae7b)}} Mar 16 13:08:55.303: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"628e5095-1abe-44d8-9f36-39a9a2941f29", Controller:(*bool)(0xc002fbc64a), BlockOwnerDeletion:(*bool)(0xc002fbc64b)}} Mar 16 13:08:55.356: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"07f28812-fdb7-435a-a265-d0d96eaef7cb", Controller:(*bool)(0xc00301b02a), BlockOwnerDeletion:(*bool)(0xc00301b02b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:09:00.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2239" for this suite. 
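The three ObjectMeta dumps above show the cycle: pod1 is owned by pod3, pod3 by pod2, and pod2 by pod1, each reference with controller and blockOwnerDeletion set; the garbage collector has to tolerate this without wedging deletion. Because an ownerReference needs the owner's live UID, it is typically attached after creation; an illustrative patch, reusing the UID logged above:

  kubectl patch pod pod1 --namespace=gc-2239 --type=merge -p '{
    "metadata": {"ownerReferences": [{
      "apiVersion": "v1", "kind": "Pod", "name": "pod3",
      "uid": "e0bced63-4d91-49ce-b990-cdf70fc8f1e9",
      "controller": true, "blockOwnerDeletion": true }]}}'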
Mar 16 13:09:06.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:09:06.481: INFO: namespace gc-2239 deletion completed in 6.09193733s • [SLOW TEST:11.349 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:09:06.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components Mar 16 13:09:06.525: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Mar 16 13:09:06.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1197' Mar 16 13:09:06.848: INFO: stderr: "" Mar 16 13:09:06.848: INFO: stdout: "service/redis-slave created\n" Mar 16 13:09:06.848: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Mar 16 13:09:06.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1197' Mar 16 13:09:07.203: INFO: stderr: "" Mar 16 13:09:07.203: INFO: stdout: "service/redis-master created\n" Mar 16 13:09:07.203: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Mar 16 13:09:07.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1197' Mar 16 13:09:07.598: INFO: stderr: "" Mar 16 13:09:07.598: INFO: stdout: "service/frontend created\n" Mar 16 13:09:07.598: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Mar 16 13:09:07.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1197' Mar 16 13:09:07.826: INFO: stderr: "" Mar 16 13:09:07.826: INFO: stdout: "deployment.apps/frontend created\n" Mar 16 13:09:07.826: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-master spec: replicas: 1 selector: matchLabels: app: redis role: master tier: backend template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 16 13:09:07.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1197' Mar 16 13:09:08.855: INFO: stderr: "" Mar 16 13:09:08.855: INFO: stdout: "deployment.apps/redis-master created\n" Mar 16 13:09:08.855: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 selector: matchLabels: app: redis role: slave tier: backend template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Mar 16 13:09:08.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1197' Mar 16 13:09:09.168: INFO: stderr: "" Mar 16 13:09:09.168: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app Mar 16 13:09:09.168: INFO: Waiting for all frontend pods to be Running. Mar 16 13:09:14.218: INFO: Waiting for frontend to serve content. Mar 16 13:09:16.043: INFO: Failed to get response from guestbook. err: , response:
Fatal error: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection refused [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:155 Stack trace: #0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(128): Predis\Connection\AbstractConnection->onConnectionError('Connection refu...', 111) #1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(178): Predis\Connection\StreamConnection->createStreamSocket(Object(Predis\Connection\Parameters), 'tcp://redis-sla...', 4) #2 /usr/local/lib/php/Predis/Connection/StreamConnection.php(100): Predis\Connection\StreamConnection->tcpStreamInitializer(Object(Predis\Connection\Parameters)) #3 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(81): Predis\Connection\StreamConnection->createResource() #4 /usr/local/lib/php/Predis/Connection/StreamConnection.php(258): Predis\Connection\AbstractConnection->connect() #5 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(180): Predis\Connection\Stream in /usr/local/lib/php/Predis/Connection/AbstractConnection.php on line 155
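A Predis ConnectionException at this point is the expected transient state: the frontend pods are Running before the redis-slave endpoints exist, so the first fetch can fail. The harness simply keeps polling until the app answers, which is what happens next in the log. A comparable hand-rolled poll, with an illustrative in-cluster URL:

  # Retry until the guestbook frontend serves content (run from inside
  # the cluster; the cmd/key query mirrors the sample app's API).
  until curl -fsS 'http://frontend/guestbook.php?cmd=get&key=messages'; do
    sleep 5
  done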
Mar 16 13:09:21.061: INFO: Trying to add a new entry to the guestbook. Mar 16 13:09:21.073: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 16 13:09:21.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1197' Mar 16 13:09:21.397: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 16 13:09:21.397: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Mar 16 13:09:21.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1197' Mar 16 13:09:21.543: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 16 13:09:21.543: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 16 13:09:21.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1197' Mar 16 13:09:21.684: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 16 13:09:21.684: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 16 13:09:21.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1197' Mar 16 13:09:21.782: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 16 13:09:21.782: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 16 13:09:21.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1197' Mar 16 13:09:21.884: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 16 13:09:21.884: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 16 13:09:21.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1197' Mar 16 13:09:22.156: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 16 13:09:22.156: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:09:22.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1197" for this suite. 
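Cleanup deliberately uses force deletion, which returns before the kubelet confirms termination; that is exactly what the repeated warning is about. The direct form, assuming the manifests were saved to a file rather than piped from memory as the test does:

  # Skip the graceful-termination wait; the API objects vanish at once,
  # but containers may linger briefly on the node.
  kubectl delete --grace-period=0 --force -f guestbook-all.yaml \
    --namespace=kubectl-1197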
Mar 16 13:10:04.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:10:04.393: INFO: namespace kubectl-1197 deletion completed in 42.233897744s • [SLOW TEST:57.912 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:10:04.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 16 13:10:04.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-2501' Mar 16 13:10:04.581: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 16 13:10:04.581: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Mar 16 13:10:08.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-2501' Mar 16 13:10:08.724: INFO: stderr: "" Mar 16 13:10:08.724: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:10:08.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2501" for this suite. 
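The stderr line is an early warning that generator-based kubectl run was on its way out (it was removed in later releases). The forward-compatible equivalent of this step:

  # Replacement for `kubectl run --generator=deployment/apps.v1`:
  kubectl create deployment e2e-test-nginx-deployment \
    --image=docker.io/library/nginx:1.14-alpine \
    --namespace=kubectl-2501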
Mar 16 13:10:30.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:10:30.817: INFO: namespace kubectl-2501 deletion completed in 22.088683574s • [SLOW TEST:26.423 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:10:30.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-9jpf STEP: Creating a pod to test atomic-volume-subpath Mar 16 13:10:30.883: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9jpf" in namespace "subpath-7156" to be "success or failure" Mar 16 13:10:30.887: INFO: Pod "pod-subpath-test-configmap-9jpf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.28939ms Mar 16 13:10:32.891: INFO: Pod "pod-subpath-test-configmap-9jpf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008390137s Mar 16 13:10:34.895: INFO: Pod "pod-subpath-test-configmap-9jpf": Phase="Running", Reason="", readiness=true. Elapsed: 4.012354381s Mar 16 13:10:36.899: INFO: Pod "pod-subpath-test-configmap-9jpf": Phase="Running", Reason="", readiness=true. Elapsed: 6.016745861s Mar 16 13:10:38.903: INFO: Pod "pod-subpath-test-configmap-9jpf": Phase="Running", Reason="", readiness=true. Elapsed: 8.020507716s Mar 16 13:10:40.907: INFO: Pod "pod-subpath-test-configmap-9jpf": Phase="Running", Reason="", readiness=true. Elapsed: 10.024936077s Mar 16 13:10:42.911: INFO: Pod "pod-subpath-test-configmap-9jpf": Phase="Running", Reason="", readiness=true. Elapsed: 12.028406625s Mar 16 13:10:44.915: INFO: Pod "pod-subpath-test-configmap-9jpf": Phase="Running", Reason="", readiness=true. Elapsed: 14.032784055s Mar 16 13:10:46.919: INFO: Pod "pod-subpath-test-configmap-9jpf": Phase="Running", Reason="", readiness=true. Elapsed: 16.036894996s Mar 16 13:10:48.924: INFO: Pod "pod-subpath-test-configmap-9jpf": Phase="Running", Reason="", readiness=true. Elapsed: 18.041427202s Mar 16 13:10:50.928: INFO: Pod "pod-subpath-test-configmap-9jpf": Phase="Running", Reason="", readiness=true. Elapsed: 20.045740562s Mar 16 13:10:52.933: INFO: Pod "pod-subpath-test-configmap-9jpf": Phase="Running", Reason="", readiness=true. Elapsed: 22.050567739s Mar 16 13:10:54.938: INFO: Pod "pod-subpath-test-configmap-9jpf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.055083894s STEP: Saw pod success Mar 16 13:10:54.938: INFO: Pod "pod-subpath-test-configmap-9jpf" satisfied condition "success or failure" Mar 16 13:10:54.941: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-9jpf container test-container-subpath-configmap-9jpf: STEP: delete the pod Mar 16 13:10:54.974: INFO: Waiting for pod pod-subpath-test-configmap-9jpf to disappear Mar 16 13:10:54.978: INFO: Pod pod-subpath-test-configmap-9jpf no longer exists STEP: Deleting pod pod-subpath-test-configmap-9jpf Mar 16 13:10:54.978: INFO: Deleting pod "pod-subpath-test-configmap-9jpf" in namespace "subpath-7156" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:10:54.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7156" for this suite. Mar 16 13:11:00.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:11:01.071: INFO: namespace subpath-7156 deletion completed in 6.08722038s • [SLOW TEST:30.254 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:11:01.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-6448 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 16 13:11:01.146: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 16 13:11:23.266: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.147:8080/dial?request=hostName&protocol=udp&host=10.244.2.151&port=8081&tries=1'] Namespace:pod-network-test-6448 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:11:23.266: INFO: >>> kubeConfig: /root/.kube/config I0316 13:11:23.307150 6 log.go:172] (0xc000d86370) (0xc001e003c0) Create stream I0316 13:11:23.307188 6 log.go:172] (0xc000d86370) (0xc001e003c0) Stream added, broadcasting: 1 I0316 13:11:23.310192 6 log.go:172] (0xc000d86370) Reply frame received for 1 I0316 13:11:23.310224 6 log.go:172] (0xc000d86370) (0xc0010fcb40) Create stream I0316 13:11:23.310231 6 log.go:172] (0xc000d86370) (0xc0010fcb40) Stream added, broadcasting: 3 I0316 
13:11:23.311209 6 log.go:172] (0xc000d86370) Reply frame received for 3 I0316 13:11:23.311249 6 log.go:172] (0xc000d86370) (0xc0010cde00) Create stream I0316 13:11:23.311265 6 log.go:172] (0xc000d86370) (0xc0010cde00) Stream added, broadcasting: 5 I0316 13:11:23.312416 6 log.go:172] (0xc000d86370) Reply frame received for 5 I0316 13:11:23.394394 6 log.go:172] (0xc000d86370) Data frame received for 3 I0316 13:11:23.394431 6 log.go:172] (0xc0010fcb40) (3) Data frame handling I0316 13:11:23.394448 6 log.go:172] (0xc0010fcb40) (3) Data frame sent I0316 13:11:23.395154 6 log.go:172] (0xc000d86370) Data frame received for 5 I0316 13:11:23.395179 6 log.go:172] (0xc000d86370) Data frame received for 3 I0316 13:11:23.395200 6 log.go:172] (0xc0010fcb40) (3) Data frame handling I0316 13:11:23.395272 6 log.go:172] (0xc0010cde00) (5) Data frame handling I0316 13:11:23.397428 6 log.go:172] (0xc000d86370) Data frame received for 1 I0316 13:11:23.397446 6 log.go:172] (0xc001e003c0) (1) Data frame handling I0316 13:11:23.397464 6 log.go:172] (0xc001e003c0) (1) Data frame sent I0316 13:11:23.397476 6 log.go:172] (0xc000d86370) (0xc001e003c0) Stream removed, broadcasting: 1 I0316 13:11:23.397490 6 log.go:172] (0xc000d86370) Go away received I0316 13:11:23.397896 6 log.go:172] (0xc000d86370) (0xc001e003c0) Stream removed, broadcasting: 1 I0316 13:11:23.397920 6 log.go:172] (0xc000d86370) (0xc0010fcb40) Stream removed, broadcasting: 3 I0316 13:11:23.397934 6 log.go:172] (0xc000d86370) (0xc0010cde00) Stream removed, broadcasting: 5 Mar 16 13:11:23.397: INFO: Waiting for endpoints: map[] Mar 16 13:11:23.401: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.147:8080/dial?request=hostName&protocol=udp&host=10.244.1.146&port=8081&tries=1'] Namespace:pod-network-test-6448 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:11:23.401: INFO: >>> kubeConfig: /root/.kube/config I0316 13:11:23.434314 6 log.go:172] (0xc000d86c60) (0xc001e00b40) Create stream I0316 13:11:23.434348 6 log.go:172] (0xc000d86c60) (0xc001e00b40) Stream added, broadcasting: 1 I0316 13:11:23.437106 6 log.go:172] (0xc000d86c60) Reply frame received for 1 I0316 13:11:23.437271 6 log.go:172] (0xc000d86c60) (0xc0010fcc80) Create stream I0316 13:11:23.437286 6 log.go:172] (0xc000d86c60) (0xc0010fcc80) Stream added, broadcasting: 3 I0316 13:11:23.438305 6 log.go:172] (0xc000d86c60) Reply frame received for 3 I0316 13:11:23.438346 6 log.go:172] (0xc000d86c60) (0xc0010cdea0) Create stream I0316 13:11:23.438366 6 log.go:172] (0xc000d86c60) (0xc0010cdea0) Stream added, broadcasting: 5 I0316 13:11:23.439314 6 log.go:172] (0xc000d86c60) Reply frame received for 5 I0316 13:11:23.526579 6 log.go:172] (0xc000d86c60) Data frame received for 3 I0316 13:11:23.526606 6 log.go:172] (0xc0010fcc80) (3) Data frame handling I0316 13:11:23.526626 6 log.go:172] (0xc0010fcc80) (3) Data frame sent I0316 13:11:23.526917 6 log.go:172] (0xc000d86c60) Data frame received for 5 I0316 13:11:23.526946 6 log.go:172] (0xc000d86c60) Data frame received for 3 I0316 13:11:23.526972 6 log.go:172] (0xc0010fcc80) (3) Data frame handling I0316 13:11:23.526992 6 log.go:172] (0xc0010cdea0) (5) Data frame handling I0316 13:11:23.528321 6 log.go:172] (0xc000d86c60) Data frame received for 1 I0316 13:11:23.528343 6 log.go:172] (0xc001e00b40) (1) Data frame handling I0316 13:11:23.528352 6 log.go:172] (0xc001e00b40) (1) Data frame sent I0316 13:11:23.528362 6 log.go:172] (0xc000d86c60) 
(0xc001e00b40) Stream removed, broadcasting: 1 I0316 13:11:23.528385 6 log.go:172] (0xc000d86c60) Go away received I0316 13:11:23.528470 6 log.go:172] (0xc000d86c60) (0xc001e00b40) Stream removed, broadcasting: 1 I0316 13:11:23.528494 6 log.go:172] (0xc000d86c60) (0xc0010fcc80) Stream removed, broadcasting: 3 I0316 13:11:23.528511 6 log.go:172] (0xc000d86c60) (0xc0010cdea0) Stream removed, broadcasting: 5 Mar 16 13:11:23.528: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:11:23.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6448" for this suite. Mar 16 13:11:47.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:11:47.620: INFO: namespace pod-network-test-6448 deletion completed in 24.088076658s • [SLOW TEST:46.549 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:11:47.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:11:52.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9623" for this suite. 
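Adoption in the ReplicationController test works because the RC's selector matches the labels of a pre-existing pod that has no controller ownerReference yet, so the RC takes it over instead of creating a replacement. A minimal reproduction sketch (names, labels and image are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-adoption
    labels:
      name: pod-adoption            # the label the RC selector will match
  spec:
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1
  ---
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: pod-adoption
  spec:
    replicas: 1
    selector:
      name: pod-adoption            # matches the orphan pod above
    template:
      metadata:
        labels:
          name: pod-adoption
      spec:
        containers:
        - name: pause
          image: k8s.gcr.io/pause:3.1
  EOF
  # The RC counts the adopted pod toward its replicas; the pod's
  # ownerReferences now point at the ReplicationController.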
Mar 16 13:12:14.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:12:14.839: INFO: namespace replication-controller-9623 deletion completed in 22.084868412s • [SLOW TEST:27.219 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:12:14.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:12:18.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3832" for this suite. Mar 16 13:12:56.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:12:57.044: INFO: namespace kubelet-test-3832 deletion completed in 38.095014187s • [SLOW TEST:42.204 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:12:57.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6795.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6795.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > 
/results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6795.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6795.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6795.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6795.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 16 13:13:05.162: INFO: DNS probes using dns-6795/dns-test-e60fd46e-6d82-4079-9150-a77cafe78902 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:13:05.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6795" for this suite. Mar 16 13:13:11.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:13:11.332: INFO: namespace dns-6795 deletion completed in 6.115060736s • [SLOW TEST:14.286 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:13:11.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 16 13:13:11.414: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c7e9aa26-2fcf-404e-ac75-be3cd5943f1c" in namespace "downward-api-7946" to be "success or failure" Mar 16 13:13:11.429: INFO: Pod "downwardapi-volume-c7e9aa26-2fcf-404e-ac75-be3cd5943f1c": 
Phase="Pending", Reason="", readiness=false. Elapsed: 15.064565ms Mar 16 13:13:13.433: INFO: Pod "downwardapi-volume-c7e9aa26-2fcf-404e-ac75-be3cd5943f1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018933756s Mar 16 13:13:15.437: INFO: Pod "downwardapi-volume-c7e9aa26-2fcf-404e-ac75-be3cd5943f1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022883918s STEP: Saw pod success Mar 16 13:13:15.437: INFO: Pod "downwardapi-volume-c7e9aa26-2fcf-404e-ac75-be3cd5943f1c" satisfied condition "success or failure" Mar 16 13:13:15.440: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-c7e9aa26-2fcf-404e-ac75-be3cd5943f1c container client-container: STEP: delete the pod Mar 16 13:13:15.511: INFO: Waiting for pod downwardapi-volume-c7e9aa26-2fcf-404e-ac75-be3cd5943f1c to disappear Mar 16 13:13:15.519: INFO: Pod downwardapi-volume-c7e9aa26-2fcf-404e-ac75-be3cd5943f1c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:13:15.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7946" for this suite. Mar 16 13:13:21.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:13:21.616: INFO: namespace downward-api-7946 deletion completed in 6.093554306s • [SLOW TEST:10.284 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:13:21.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:13:25.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5120" for this suite. 
Mar 16 13:14:17.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:14:17.857: INFO: namespace kubelet-test-5120 deletion completed in 52.118589067s • [SLOW TEST:56.241 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:14:17.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 16 13:14:17.932: INFO: Waiting up to 5m0s for pod "pod-96538c43-fa29-4f0e-8a6d-6d8e6d4ea5e0" in namespace "emptydir-5807" to be "success or failure" Mar 16 13:14:17.940: INFO: Pod "pod-96538c43-fa29-4f0e-8a6d-6d8e6d4ea5e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.310147ms Mar 16 13:14:19.944: INFO: Pod "pod-96538c43-fa29-4f0e-8a6d-6d8e6d4ea5e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011635708s Mar 16 13:14:21.947: INFO: Pod "pod-96538c43-fa29-4f0e-8a6d-6d8e6d4ea5e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014977247s STEP: Saw pod success Mar 16 13:14:21.947: INFO: Pod "pod-96538c43-fa29-4f0e-8a6d-6d8e6d4ea5e0" satisfied condition "success or failure" Mar 16 13:14:21.949: INFO: Trying to get logs from node iruya-worker2 pod pod-96538c43-fa29-4f0e-8a6d-6d8e6d4ea5e0 container test-container: STEP: delete the pod Mar 16 13:14:22.183: INFO: Waiting for pod pod-96538c43-fa29-4f0e-8a6d-6d8e6d4ea5e0 to disappear Mar 16 13:14:22.191: INFO: Pod pod-96538c43-fa29-4f0e-8a6d-6d8e6d4ea5e0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:14:22.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5807" for this suite. 
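The (non-root,0644,default) case differs from the (root,0666,default) sketch earlier only in the security context and the expected mode; roughly:

  # Same emptyDir pattern, but the writer runs as uid 1000.
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0644-nonroot-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000               # pod-wide non-root user
    containers:
    - name: test-container
      image: busybox:1.31
      command: ["sh", "-c", "touch /cache/f && chmod 0644 /cache/f && stat -c '%a' /cache/f"]
      volumeMounts:
      - name: cache
        mountPath: /cache
    volumes:
    - name: cache
      emptyDir: {}
  EOF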
Mar 16 13:14:28.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:14:28.266: INFO: namespace emptydir-5807 deletion completed in 6.071377213s • [SLOW TEST:10.409 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:14:28.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Mar 16 13:14:34.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-4cf0af24-c475-481d-a5e7-1b589c0cfe27 -c busybox-main-container --namespace=emptydir-7215 -- cat /usr/share/volumeshare/shareddata.txt' Mar 16 13:14:34.589: INFO: stderr: "I0316 13:14:34.518528 890 log.go:172] (0xc0009a64d0) (0xc000360a00) Create stream\nI0316 13:14:34.518584 890 log.go:172] (0xc0009a64d0) (0xc000360a00) Stream added, broadcasting: 1\nI0316 13:14:34.522614 890 log.go:172] (0xc0009a64d0) Reply frame received for 1\nI0316 13:14:34.522651 890 log.go:172] (0xc0009a64d0) (0xc000726280) Create stream\nI0316 13:14:34.522661 890 log.go:172] (0xc0009a64d0) (0xc000726280) Stream added, broadcasting: 3\nI0316 13:14:34.523352 890 log.go:172] (0xc0009a64d0) Reply frame received for 3\nI0316 13:14:34.523392 890 log.go:172] (0xc0009a64d0) (0xc000360000) Create stream\nI0316 13:14:34.523416 890 log.go:172] (0xc0009a64d0) (0xc000360000) Stream added, broadcasting: 5\nI0316 13:14:34.524068 890 log.go:172] (0xc0009a64d0) Reply frame received for 5\nI0316 13:14:34.584778 890 log.go:172] (0xc0009a64d0) Data frame received for 3\nI0316 13:14:34.584806 890 log.go:172] (0xc000726280) (3) Data frame handling\nI0316 13:14:34.584822 890 log.go:172] (0xc0009a64d0) Data frame received for 5\nI0316 13:14:34.584853 890 log.go:172] (0xc000360000) (5) Data frame handling\nI0316 13:14:34.584876 890 log.go:172] (0xc000726280) (3) Data frame sent\nI0316 13:14:34.584887 890 log.go:172] (0xc0009a64d0) Data frame received for 3\nI0316 13:14:34.584898 890 log.go:172] (0xc000726280) (3) Data frame handling\nI0316 13:14:34.586558 890 log.go:172] (0xc0009a64d0) Data frame received for 1\nI0316 13:14:34.586582 890 log.go:172] (0xc000360a00) (1) Data frame handling\nI0316 13:14:34.586601 890 log.go:172] (0xc000360a00) (1) Data frame sent\nI0316 13:14:34.586621 890 log.go:172] (0xc0009a64d0) (0xc000360a00) Stream removed, broadcasting: 1\nI0316 13:14:34.586715 890 log.go:172] (0xc0009a64d0) Go away received\nI0316 13:14:34.586908 890 log.go:172] 
(0xc0009a64d0) (0xc000360a00) Stream removed, broadcasting: 1\nI0316 13:14:34.586922 890 log.go:172] (0xc0009a64d0) (0xc000726280) Stream removed, broadcasting: 3\nI0316 13:14:34.586928 890 log.go:172] (0xc0009a64d0) (0xc000360000) Stream removed, broadcasting: 5\n" Mar 16 13:14:34.589: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:14:34.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7215" for this suite. Mar 16 13:14:40.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:14:40.797: INFO: namespace emptydir-7215 deletion completed in 6.203044554s • [SLOW TEST:12.531 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:14:40.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Mar 16 13:14:40.914: INFO: Waiting up to 5m0s for pod "pod-a166b75a-04cf-4dee-8ad5-f3995f0145b1" in namespace "emptydir-2526" to be "success or failure" Mar 16 13:14:40.934: INFO: Pod "pod-a166b75a-04cf-4dee-8ad5-f3995f0145b1": Phase="Pending", Reason="", readiness=false. Elapsed: 19.875846ms Mar 16 13:14:42.938: INFO: Pod "pod-a166b75a-04cf-4dee-8ad5-f3995f0145b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024153124s Mar 16 13:14:45.027: INFO: Pod "pod-a166b75a-04cf-4dee-8ad5-f3995f0145b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.112842782s STEP: Saw pod success Mar 16 13:14:45.027: INFO: Pod "pod-a166b75a-04cf-4dee-8ad5-f3995f0145b1" satisfied condition "success or failure" Mar 16 13:14:45.031: INFO: Trying to get logs from node iruya-worker2 pod pod-a166b75a-04cf-4dee-8ad5-f3995f0145b1 container test-container: STEP: delete the pod Mar 16 13:14:45.241: INFO: Waiting for pod pod-a166b75a-04cf-4dee-8ad5-f3995f0145b1 to disappear Mar 16 13:14:45.263: INFO: Pod pod-a166b75a-04cf-4dee-8ad5-f3995f0145b1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:14:45.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2526" for this suite. 
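Stepping back to the shared-volume test above (namespace emptydir-7215): it mounts a single emptyDir into two containers of the same pod, one writing a file and one kept alive so the file can be read back with kubectl exec. A sketch with illustrative images (the suite's own differ):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-sharedvolume-demo
  spec:
    containers:
    - name: busybox-main-container
      image: busybox:1.31
      command: ["sleep", "3600"]    # reader: kept alive for kubectl exec
      volumeMounts:
      - name: share
        mountPath: /usr/share/volumeshare
    - name: busybox-sub-container
      image: busybox:1.31
      # writer: drop a file into the shared volume, then idle
      command: ["sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
      volumeMounts:
      - name: share
        mountPath: /usr/share/volumeshare
    volumes:
    - name: share
      emptyDir: {}
  EOF
  # Read the file back through the other container, as the test does:
  kubectl exec pod-sharedvolume-demo -c busybox-main-container -- \
    cat /usr/share/volumeshare/shareddata.txt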
Mar 16 13:14:51.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:14:51.389: INFO: namespace emptydir-2526 deletion completed in 6.122765912s • [SLOW TEST:10.592 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:14:51.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 16 13:15:03.559: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3808 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:15:03.559: INFO: >>> kubeConfig: /root/.kube/config I0316 13:15:03.594058 6 log.go:172] (0xc000b15a20) (0xc0010fcc80) Create stream I0316 13:15:03.594098 6 log.go:172] (0xc000b15a20) (0xc0010fcc80) Stream added, broadcasting: 1 I0316 13:15:03.598001 6 log.go:172] (0xc000b15a20) Reply frame received for 1 I0316 13:15:03.598115 6 log.go:172] (0xc000b15a20) (0xc001178780) Create stream I0316 13:15:03.598179 6 log.go:172] (0xc000b15a20) (0xc001178780) Stream added, broadcasting: 3 I0316 13:15:03.599736 6 log.go:172] (0xc000b15a20) Reply frame received for 3 I0316 13:15:03.599766 6 log.go:172] (0xc000b15a20) (0xc0026380a0) Create stream I0316 13:15:03.599779 6 log.go:172] (0xc000b15a20) (0xc0026380a0) Stream added, broadcasting: 5 I0316 13:15:03.600691 6 log.go:172] (0xc000b15a20) Reply frame received for 5 I0316 13:15:03.677832 6 log.go:172] (0xc000b15a20) Data frame received for 3 I0316 13:15:03.677893 6 log.go:172] (0xc001178780) (3) Data frame handling I0316 13:15:03.677927 6 log.go:172] (0xc001178780) (3) Data frame sent I0316 13:15:03.677951 6 log.go:172] (0xc000b15a20) Data frame received for 3 I0316 13:15:03.677979 6 log.go:172] (0xc001178780) (3) Data frame handling I0316 13:15:03.678022 6 log.go:172] (0xc000b15a20) Data frame received for 5 I0316 13:15:03.678048 6 log.go:172] (0xc0026380a0) (5) Data frame handling I0316 13:15:03.679222 6 log.go:172] (0xc000b15a20) Data frame received for 1 I0316 13:15:03.679235 6 log.go:172] (0xc0010fcc80) (1) Data frame handling I0316 13:15:03.679252 6 log.go:172] (0xc0010fcc80) (1) Data frame sent I0316 13:15:03.679262 6 log.go:172] (0xc000b15a20) (0xc0010fcc80) 
Stream removed, broadcasting: 1 I0316 13:15:03.679335 6 log.go:172] (0xc000b15a20) (0xc0010fcc80) Stream removed, broadcasting: 1 I0316 13:15:03.679343 6 log.go:172] (0xc000b15a20) (0xc001178780) Stream removed, broadcasting: 3 I0316 13:15:03.679348 6 log.go:172] (0xc000b15a20) (0xc0026380a0) Stream removed, broadcasting: 5 Mar 16 13:15:03.679: INFO: Exec stderr: "" Mar 16 13:15:03.679: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3808 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:15:03.679: INFO: >>> kubeConfig: /root/.kube/config I0316 13:15:03.680677 6 log.go:172] (0xc000b15a20) Go away received I0316 13:15:03.705659 6 log.go:172] (0xc000d70580) (0xc0010fd220) Create stream I0316 13:15:03.705695 6 log.go:172] (0xc000d70580) (0xc0010fd220) Stream added, broadcasting: 1 I0316 13:15:03.707980 6 log.go:172] (0xc000d70580) Reply frame received for 1 I0316 13:15:03.708004 6 log.go:172] (0xc000d70580) (0xc0010fd2c0) Create stream I0316 13:15:03.708012 6 log.go:172] (0xc000d70580) (0xc0010fd2c0) Stream added, broadcasting: 3 I0316 13:15:03.708964 6 log.go:172] (0xc000d70580) Reply frame received for 3 I0316 13:15:03.708993 6 log.go:172] (0xc000d70580) (0xc002f020a0) Create stream I0316 13:15:03.709004 6 log.go:172] (0xc000d70580) (0xc002f020a0) Stream added, broadcasting: 5 I0316 13:15:03.709902 6 log.go:172] (0xc000d70580) Reply frame received for 5 I0316 13:15:03.774526 6 log.go:172] (0xc000d70580) Data frame received for 5 I0316 13:15:03.774591 6 log.go:172] (0xc002f020a0) (5) Data frame handling I0316 13:15:03.774646 6 log.go:172] (0xc000d70580) Data frame received for 3 I0316 13:15:03.774673 6 log.go:172] (0xc0010fd2c0) (3) Data frame handling I0316 13:15:03.774703 6 log.go:172] (0xc0010fd2c0) (3) Data frame sent I0316 13:15:03.774723 6 log.go:172] (0xc000d70580) Data frame received for 3 I0316 13:15:03.774736 6 log.go:172] (0xc0010fd2c0) (3) Data frame handling I0316 13:15:03.776118 6 log.go:172] (0xc000d70580) Data frame received for 1 I0316 13:15:03.776140 6 log.go:172] (0xc0010fd220) (1) Data frame handling I0316 13:15:03.776156 6 log.go:172] (0xc0010fd220) (1) Data frame sent I0316 13:15:03.776192 6 log.go:172] (0xc000d70580) (0xc0010fd220) Stream removed, broadcasting: 1 I0316 13:15:03.776214 6 log.go:172] (0xc000d70580) Go away received I0316 13:15:03.776397 6 log.go:172] (0xc000d70580) (0xc0010fd220) Stream removed, broadcasting: 1 I0316 13:15:03.776442 6 log.go:172] (0xc000d70580) (0xc0010fd2c0) Stream removed, broadcasting: 3 I0316 13:15:03.776460 6 log.go:172] (0xc000d70580) (0xc002f020a0) Stream removed, broadcasting: 5 Mar 16 13:15:03.776: INFO: Exec stderr: "" Mar 16 13:15:03.776: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3808 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:15:03.776: INFO: >>> kubeConfig: /root/.kube/config I0316 13:15:03.808400 6 log.go:172] (0xc001000b00) (0xc002f02460) Create stream I0316 13:15:03.808425 6 log.go:172] (0xc001000b00) (0xc002f02460) Stream added, broadcasting: 1 I0316 13:15:03.811308 6 log.go:172] (0xc001000b00) Reply frame received for 1 I0316 13:15:03.811370 6 log.go:172] (0xc001000b00) (0xc0010fd540) Create stream I0316 13:15:03.811391 6 log.go:172] (0xc001000b00) (0xc0010fd540) Stream added, broadcasting: 3 I0316 13:15:03.812339 6 log.go:172] (0xc001000b00) Reply frame received for 3 I0316 13:15:03.812382 6 
log.go:172] (0xc001000b00) (0xc002b24500) Create stream I0316 13:15:03.812399 6 log.go:172] (0xc001000b00) (0xc002b24500) Stream added, broadcasting: 5 I0316 13:15:03.813262 6 log.go:172] (0xc001000b00) Reply frame received for 5 I0316 13:15:03.863112 6 log.go:172] (0xc001000b00) Data frame received for 3 I0316 13:15:03.863247 6 log.go:172] (0xc0010fd540) (3) Data frame handling I0316 13:15:03.863264 6 log.go:172] (0xc0010fd540) (3) Data frame sent I0316 13:15:03.863277 6 log.go:172] (0xc001000b00) Data frame received for 3 I0316 13:15:03.863289 6 log.go:172] (0xc0010fd540) (3) Data frame handling I0316 13:15:03.863336 6 log.go:172] (0xc001000b00) Data frame received for 5 I0316 13:15:03.863367 6 log.go:172] (0xc002b24500) (5) Data frame handling I0316 13:15:03.864592 6 log.go:172] (0xc001000b00) Data frame received for 1 I0316 13:15:03.864621 6 log.go:172] (0xc002f02460) (1) Data frame handling I0316 13:15:03.864642 6 log.go:172] (0xc002f02460) (1) Data frame sent I0316 13:15:03.864829 6 log.go:172] (0xc001000b00) (0xc002f02460) Stream removed, broadcasting: 1 I0316 13:15:03.864877 6 log.go:172] (0xc001000b00) Go away received I0316 13:15:03.865018 6 log.go:172] (0xc001000b00) (0xc002f02460) Stream removed, broadcasting: 1 I0316 13:15:03.865059 6 log.go:172] (0xc001000b00) (0xc0010fd540) Stream removed, broadcasting: 3 I0316 13:15:03.865091 6 log.go:172] (0xc001000b00) (0xc002b24500) Stream removed, broadcasting: 5 Mar 16 13:15:03.865: INFO: Exec stderr: "" Mar 16 13:15:03.865: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3808 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:15:03.865: INFO: >>> kubeConfig: /root/.kube/config I0316 13:15:03.904213 6 log.go:172] (0xc0010b8fd0) (0xc002b24820) Create stream I0316 13:15:03.904249 6 log.go:172] (0xc0010b8fd0) (0xc002b24820) Stream added, broadcasting: 1 I0316 13:15:03.907403 6 log.go:172] (0xc0010b8fd0) Reply frame received for 1 I0316 13:15:03.907444 6 log.go:172] (0xc0010b8fd0) (0xc002b248c0) Create stream I0316 13:15:03.907456 6 log.go:172] (0xc0010b8fd0) (0xc002b248c0) Stream added, broadcasting: 3 I0316 13:15:03.908559 6 log.go:172] (0xc0010b8fd0) Reply frame received for 3 I0316 13:15:03.908603 6 log.go:172] (0xc0010b8fd0) (0xc0010fd680) Create stream I0316 13:15:03.908619 6 log.go:172] (0xc0010b8fd0) (0xc0010fd680) Stream added, broadcasting: 5 I0316 13:15:03.909799 6 log.go:172] (0xc0010b8fd0) Reply frame received for 5 I0316 13:15:03.969451 6 log.go:172] (0xc0010b8fd0) Data frame received for 5 I0316 13:15:03.969491 6 log.go:172] (0xc0010fd680) (5) Data frame handling I0316 13:15:03.969518 6 log.go:172] (0xc0010b8fd0) Data frame received for 3 I0316 13:15:03.969529 6 log.go:172] (0xc002b248c0) (3) Data frame handling I0316 13:15:03.969542 6 log.go:172] (0xc002b248c0) (3) Data frame sent I0316 13:15:03.969553 6 log.go:172] (0xc0010b8fd0) Data frame received for 3 I0316 13:15:03.969562 6 log.go:172] (0xc002b248c0) (3) Data frame handling I0316 13:15:03.970857 6 log.go:172] (0xc0010b8fd0) Data frame received for 1 I0316 13:15:03.970892 6 log.go:172] (0xc002b24820) (1) Data frame handling I0316 13:15:03.970919 6 log.go:172] (0xc002b24820) (1) Data frame sent I0316 13:15:03.970932 6 log.go:172] (0xc0010b8fd0) (0xc002b24820) Stream removed, broadcasting: 1 I0316 13:15:03.970951 6 log.go:172] (0xc0010b8fd0) Go away received I0316 13:15:03.971108 6 log.go:172] (0xc0010b8fd0) (0xc002b24820) Stream removed, broadcasting: 1 I0316 
13:15:03.971143 6 log.go:172] (0xc0010b8fd0) (0xc002b248c0) Stream removed, broadcasting: 3 I0316 13:15:03.971162 6 log.go:172] (0xc0010b8fd0) (0xc0010fd680) Stream removed, broadcasting: 5 Mar 16 13:15:03.971: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 16 13:15:03.971: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3808 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:15:03.971: INFO: >>> kubeConfig: /root/.kube/config I0316 13:15:04.004813 6 log.go:172] (0xc000c2b130) (0xc001179040) Create stream I0316 13:15:04.004843 6 log.go:172] (0xc000c2b130) (0xc001179040) Stream added, broadcasting: 1 I0316 13:15:04.007966 6 log.go:172] (0xc000c2b130) Reply frame received for 1 I0316 13:15:04.008008 6 log.go:172] (0xc000c2b130) (0xc002b24960) Create stream I0316 13:15:04.008022 6 log.go:172] (0xc000c2b130) (0xc002b24960) Stream added, broadcasting: 3 I0316 13:15:04.008984 6 log.go:172] (0xc000c2b130) Reply frame received for 3 I0316 13:15:04.009021 6 log.go:172] (0xc000c2b130) (0xc0026381e0) Create stream I0316 13:15:04.009035 6 log.go:172] (0xc000c2b130) (0xc0026381e0) Stream added, broadcasting: 5 I0316 13:15:04.010094 6 log.go:172] (0xc000c2b130) Reply frame received for 5 I0316 13:15:04.071190 6 log.go:172] (0xc000c2b130) Data frame received for 5 I0316 13:15:04.071233 6 log.go:172] (0xc0026381e0) (5) Data frame handling I0316 13:15:04.071253 6 log.go:172] (0xc000c2b130) Data frame received for 3 I0316 13:15:04.071264 6 log.go:172] (0xc002b24960) (3) Data frame handling I0316 13:15:04.071278 6 log.go:172] (0xc002b24960) (3) Data frame sent I0316 13:15:04.071287 6 log.go:172] (0xc000c2b130) Data frame received for 3 I0316 13:15:04.071298 6 log.go:172] (0xc002b24960) (3) Data frame handling I0316 13:15:04.072790 6 log.go:172] (0xc000c2b130) Data frame received for 1 I0316 13:15:04.072814 6 log.go:172] (0xc001179040) (1) Data frame handling I0316 13:15:04.072839 6 log.go:172] (0xc001179040) (1) Data frame sent I0316 13:15:04.072856 6 log.go:172] (0xc000c2b130) (0xc001179040) Stream removed, broadcasting: 1 I0316 13:15:04.072873 6 log.go:172] (0xc000c2b130) Go away received I0316 13:15:04.072975 6 log.go:172] (0xc000c2b130) (0xc001179040) Stream removed, broadcasting: 1 I0316 13:15:04.072996 6 log.go:172] (0xc000c2b130) (0xc002b24960) Stream removed, broadcasting: 3 I0316 13:15:04.073012 6 log.go:172] (0xc000c2b130) (0xc0026381e0) Stream removed, broadcasting: 5 Mar 16 13:15:04.073: INFO: Exec stderr: "" Mar 16 13:15:04.073: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3808 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:15:04.073: INFO: >>> kubeConfig: /root/.kube/config I0316 13:15:04.108721 6 log.go:172] (0xc0010016b0) (0xc002f02780) Create stream I0316 13:15:04.108748 6 log.go:172] (0xc0010016b0) (0xc002f02780) Stream added, broadcasting: 1 I0316 13:15:04.113629 6 log.go:172] (0xc0010016b0) Reply frame received for 1 I0316 13:15:04.113695 6 log.go:172] (0xc0010016b0) (0xc002638280) Create stream I0316 13:15:04.113718 6 log.go:172] (0xc0010016b0) (0xc002638280) Stream added, broadcasting: 3 I0316 13:15:04.115198 6 log.go:172] (0xc0010016b0) Reply frame received for 3 I0316 13:15:04.115238 6 log.go:172] (0xc0010016b0) (0xc001179360) Create stream I0316 13:15:04.115251 6 log.go:172] 
(0xc0010016b0) (0xc001179360) Stream added, broadcasting: 5 I0316 13:15:04.116116 6 log.go:172] (0xc0010016b0) Reply frame received for 5 I0316 13:15:04.158997 6 log.go:172] (0xc0010016b0) Data frame received for 3 I0316 13:15:04.159038 6 log.go:172] (0xc002638280) (3) Data frame handling I0316 13:15:04.159071 6 log.go:172] (0xc002638280) (3) Data frame sent I0316 13:15:04.159231 6 log.go:172] (0xc0010016b0) Data frame received for 3 I0316 13:15:04.159251 6 log.go:172] (0xc002638280) (3) Data frame handling I0316 13:15:04.159262 6 log.go:172] (0xc0010016b0) Data frame received for 5 I0316 13:15:04.159274 6 log.go:172] (0xc001179360) (5) Data frame handling I0316 13:15:04.160676 6 log.go:172] (0xc0010016b0) Data frame received for 1 I0316 13:15:04.160693 6 log.go:172] (0xc002f02780) (1) Data frame handling I0316 13:15:04.160705 6 log.go:172] (0xc002f02780) (1) Data frame sent I0316 13:15:04.160921 6 log.go:172] (0xc0010016b0) (0xc002f02780) Stream removed, broadcasting: 1 I0316 13:15:04.160985 6 log.go:172] (0xc0010016b0) (0xc002f02780) Stream removed, broadcasting: 1 I0316 13:15:04.161001 6 log.go:172] (0xc0010016b0) (0xc002638280) Stream removed, broadcasting: 3 I0316 13:15:04.161014 6 log.go:172] (0xc0010016b0) (0xc001179360) Stream removed, broadcasting: 5 Mar 16 13:15:04.161: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 16 13:15:04.161: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3808 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:15:04.161: INFO: >>> kubeConfig: /root/.kube/config I0316 13:15:04.161071 6 log.go:172] (0xc0010016b0) Go away received I0316 13:15:04.191549 6 log.go:172] (0xc000c2be40) (0xc0011799a0) Create stream I0316 13:15:04.191584 6 log.go:172] (0xc000c2be40) (0xc0011799a0) Stream added, broadcasting: 1 I0316 13:15:04.194653 6 log.go:172] (0xc000c2be40) Reply frame received for 1 I0316 13:15:04.194693 6 log.go:172] (0xc000c2be40) (0xc002b24a00) Create stream I0316 13:15:04.194709 6 log.go:172] (0xc000c2be40) (0xc002b24a00) Stream added, broadcasting: 3 I0316 13:15:04.195904 6 log.go:172] (0xc000c2be40) Reply frame received for 3 I0316 13:15:04.195950 6 log.go:172] (0xc000c2be40) (0xc002b24aa0) Create stream I0316 13:15:04.195972 6 log.go:172] (0xc000c2be40) (0xc002b24aa0) Stream added, broadcasting: 5 I0316 13:15:04.197042 6 log.go:172] (0xc000c2be40) Reply frame received for 5 I0316 13:15:04.247995 6 log.go:172] (0xc000c2be40) Data frame received for 5 I0316 13:15:04.248041 6 log.go:172] (0xc000c2be40) Data frame received for 3 I0316 13:15:04.248174 6 log.go:172] (0xc002b24a00) (3) Data frame handling I0316 13:15:04.248188 6 log.go:172] (0xc002b24a00) (3) Data frame sent I0316 13:15:04.248194 6 log.go:172] (0xc000c2be40) Data frame received for 3 I0316 13:15:04.248198 6 log.go:172] (0xc002b24a00) (3) Data frame handling I0316 13:15:04.248213 6 log.go:172] (0xc002b24aa0) (5) Data frame handling I0316 13:15:04.249755 6 log.go:172] (0xc000c2be40) Data frame received for 1 I0316 13:15:04.249785 6 log.go:172] (0xc0011799a0) (1) Data frame handling I0316 13:15:04.249825 6 log.go:172] (0xc0011799a0) (1) Data frame sent I0316 13:15:04.249844 6 log.go:172] (0xc000c2be40) (0xc0011799a0) Stream removed, broadcasting: 1 I0316 13:15:04.249868 6 log.go:172] (0xc000c2be40) Go away received I0316 13:15:04.249986 6 log.go:172] (0xc000c2be40) (0xc0011799a0) Stream removed, 
broadcasting: 1 I0316 13:15:04.250023 6 log.go:172] (0xc000c2be40) (0xc002b24a00) Stream removed, broadcasting: 3 I0316 13:15:04.250042 6 log.go:172] (0xc000c2be40) (0xc002b24aa0) Stream removed, broadcasting: 5 Mar 16 13:15:04.250: INFO: Exec stderr: "" Mar 16 13:15:04.250: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3808 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:15:04.250: INFO: >>> kubeConfig: /root/.kube/config I0316 13:15:04.331675 6 log.go:172] (0xc0021e0790) (0xc002b24dc0) Create stream I0316 13:15:04.331711 6 log.go:172] (0xc0021e0790) (0xc002b24dc0) Stream added, broadcasting: 1 I0316 13:15:04.334754 6 log.go:172] (0xc0021e0790) Reply frame received for 1 I0316 13:15:04.334820 6 log.go:172] (0xc0021e0790) (0xc002b24e60) Create stream I0316 13:15:04.334846 6 log.go:172] (0xc0021e0790) (0xc002b24e60) Stream added, broadcasting: 3 I0316 13:15:04.336138 6 log.go:172] (0xc0021e0790) Reply frame received for 3 I0316 13:15:04.336198 6 log.go:172] (0xc0021e0790) (0xc002f02820) Create stream I0316 13:15:04.336219 6 log.go:172] (0xc0021e0790) (0xc002f02820) Stream added, broadcasting: 5 I0316 13:15:04.337246 6 log.go:172] (0xc0021e0790) Reply frame received for 5 I0316 13:15:04.392030 6 log.go:172] (0xc0021e0790) Data frame received for 3 I0316 13:15:04.392064 6 log.go:172] (0xc002b24e60) (3) Data frame handling I0316 13:15:04.392078 6 log.go:172] (0xc002b24e60) (3) Data frame sent I0316 13:15:04.392108 6 log.go:172] (0xc0021e0790) Data frame received for 5 I0316 13:15:04.392151 6 log.go:172] (0xc002f02820) (5) Data frame handling I0316 13:15:04.392177 6 log.go:172] (0xc0021e0790) Data frame received for 3 I0316 13:15:04.392188 6 log.go:172] (0xc002b24e60) (3) Data frame handling I0316 13:15:04.393715 6 log.go:172] (0xc0021e0790) Data frame received for 1 I0316 13:15:04.393736 6 log.go:172] (0xc002b24dc0) (1) Data frame handling I0316 13:15:04.393747 6 log.go:172] (0xc002b24dc0) (1) Data frame sent I0316 13:15:04.393838 6 log.go:172] (0xc0021e0790) (0xc002b24dc0) Stream removed, broadcasting: 1 I0316 13:15:04.393942 6 log.go:172] (0xc0021e0790) Go away received I0316 13:15:04.394053 6 log.go:172] (0xc0021e0790) (0xc002b24dc0) Stream removed, broadcasting: 1 I0316 13:15:04.394067 6 log.go:172] (0xc0021e0790) (0xc002b24e60) Stream removed, broadcasting: 3 I0316 13:15:04.394074 6 log.go:172] (0xc0021e0790) (0xc002f02820) Stream removed, broadcasting: 5 Mar 16 13:15:04.394: INFO: Exec stderr: "" Mar 16 13:15:04.394: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3808 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:15:04.394: INFO: >>> kubeConfig: /root/.kube/config I0316 13:15:04.430104 6 log.go:172] (0xc0016f7600) (0xc002638640) Create stream I0316 13:15:04.430148 6 log.go:172] (0xc0016f7600) (0xc002638640) Stream added, broadcasting: 1 I0316 13:15:04.432723 6 log.go:172] (0xc0016f7600) Reply frame received for 1 I0316 13:15:04.432754 6 log.go:172] (0xc0016f7600) (0xc0026386e0) Create stream I0316 13:15:04.432763 6 log.go:172] (0xc0016f7600) (0xc0026386e0) Stream added, broadcasting: 3 I0316 13:15:04.433671 6 log.go:172] (0xc0016f7600) Reply frame received for 3 I0316 13:15:04.433723 6 log.go:172] (0xc0016f7600) (0xc002f028c0) Create stream I0316 13:15:04.433747 6 log.go:172] (0xc0016f7600) (0xc002f028c0) Stream added, broadcasting: 5 I0316 
13:15:04.434859 6 log.go:172] (0xc0016f7600) Reply frame received for 5 I0316 13:15:04.479814 6 log.go:172] (0xc0016f7600) Data frame received for 3 I0316 13:15:04.479863 6 log.go:172] (0xc0026386e0) (3) Data frame handling I0316 13:15:04.479888 6 log.go:172] (0xc0026386e0) (3) Data frame sent I0316 13:15:04.479905 6 log.go:172] (0xc0016f7600) Data frame received for 3 I0316 13:15:04.479922 6 log.go:172] (0xc0026386e0) (3) Data frame handling I0316 13:15:04.479942 6 log.go:172] (0xc0016f7600) Data frame received for 5 I0316 13:15:04.479958 6 log.go:172] (0xc002f028c0) (5) Data frame handling I0316 13:15:04.481780 6 log.go:172] (0xc0016f7600) Data frame received for 1 I0316 13:15:04.481816 6 log.go:172] (0xc002638640) (1) Data frame handling I0316 13:15:04.481831 6 log.go:172] (0xc002638640) (1) Data frame sent I0316 13:15:04.481844 6 log.go:172] (0xc0016f7600) (0xc002638640) Stream removed, broadcasting: 1 I0316 13:15:04.481865 6 log.go:172] (0xc0016f7600) Go away received I0316 13:15:04.482064 6 log.go:172] (0xc0016f7600) (0xc002638640) Stream removed, broadcasting: 1 I0316 13:15:04.482099 6 log.go:172] (0xc0016f7600) (0xc0026386e0) Stream removed, broadcasting: 3 I0316 13:15:04.482123 6 log.go:172] (0xc0016f7600) (0xc002f028c0) Stream removed, broadcasting: 5 Mar 16 13:15:04.482: INFO: Exec stderr: "" Mar 16 13:15:04.482: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3808 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:15:04.482: INFO: >>> kubeConfig: /root/.kube/config I0316 13:15:04.521649 6 log.go:172] (0xc0021e16b0) (0xc002b25180) Create stream I0316 13:15:04.521674 6 log.go:172] (0xc0021e16b0) (0xc002b25180) Stream added, broadcasting: 1 I0316 13:15:04.524087 6 log.go:172] (0xc0021e16b0) Reply frame received for 1 I0316 13:15:04.524112 6 log.go:172] (0xc0021e16b0) (0xc002638780) Create stream I0316 13:15:04.524120 6 log.go:172] (0xc0021e16b0) (0xc002638780) Stream added, broadcasting: 3 I0316 13:15:04.524992 6 log.go:172] (0xc0021e16b0) Reply frame received for 3 I0316 13:15:04.525030 6 log.go:172] (0xc0021e16b0) (0xc002638820) Create stream I0316 13:15:04.525051 6 log.go:172] (0xc0021e16b0) (0xc002638820) Stream added, broadcasting: 5 I0316 13:15:04.525919 6 log.go:172] (0xc0021e16b0) Reply frame received for 5 I0316 13:15:04.579000 6 log.go:172] (0xc0021e16b0) Data frame received for 5 I0316 13:15:04.579041 6 log.go:172] (0xc002638820) (5) Data frame handling I0316 13:15:04.579068 6 log.go:172] (0xc0021e16b0) Data frame received for 3 I0316 13:15:04.579113 6 log.go:172] (0xc002638780) (3) Data frame handling I0316 13:15:04.579149 6 log.go:172] (0xc002638780) (3) Data frame sent I0316 13:15:04.579510 6 log.go:172] (0xc0021e16b0) Data frame received for 3 I0316 13:15:04.579541 6 log.go:172] (0xc002638780) (3) Data frame handling I0316 13:15:04.582878 6 log.go:172] (0xc0021e16b0) Data frame received for 1 I0316 13:15:04.582910 6 log.go:172] (0xc002b25180) (1) Data frame handling I0316 13:15:04.582938 6 log.go:172] (0xc002b25180) (1) Data frame sent I0316 13:15:04.583077 6 log.go:172] (0xc0021e16b0) (0xc002b25180) Stream removed, broadcasting: 1 I0316 13:15:04.583151 6 log.go:172] (0xc0021e16b0) Go away received I0316 13:15:04.583202 6 log.go:172] (0xc0021e16b0) (0xc002b25180) Stream removed, broadcasting: 1 I0316 13:15:04.583236 6 log.go:172] (0xc0021e16b0) (0xc002638780) Stream removed, broadcasting: 3 I0316 13:15:04.583255 6 log.go:172] (0xc0021e16b0) 
(0xc002638820) Stream removed, broadcasting: 5 Mar 16 13:15:04.583: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:15:04.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-3808" for this suite. Mar 16 13:15:55.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:15:55.153: INFO: namespace e2e-kubelet-etc-hosts-3808 deletion completed in 50.566289088s • [SLOW TEST:63.763 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:15:55.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-705/configmap-test-b0f03ce5-5862-4a22-8e1c-fe8140ce5b2a STEP: Creating a pod to test consume configMaps Mar 16 13:15:55.249: INFO: Waiting up to 5m0s for pod "pod-configmaps-7921276b-657b-4e2b-9c4a-29d35be8594b" in namespace "configmap-705" to be "success or failure" Mar 16 13:15:55.259: INFO: Pod "pod-configmaps-7921276b-657b-4e2b-9c4a-29d35be8594b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.974962ms Mar 16 13:15:57.262: INFO: Pod "pod-configmaps-7921276b-657b-4e2b-9c4a-29d35be8594b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013328787s Mar 16 13:15:59.266: INFO: Pod "pod-configmaps-7921276b-657b-4e2b-9c4a-29d35be8594b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016889446s STEP: Saw pod success Mar 16 13:15:59.266: INFO: Pod "pod-configmaps-7921276b-657b-4e2b-9c4a-29d35be8594b" satisfied condition "success or failure" Mar 16 13:15:59.268: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-7921276b-657b-4e2b-9c4a-29d35be8594b container env-test: STEP: delete the pod Mar 16 13:15:59.352: INFO: Waiting for pod pod-configmaps-7921276b-657b-4e2b-9c4a-29d35be8594b to disappear Mar 16 13:15:59.354: INFO: Pod pod-configmaps-7921276b-657b-4e2b-9c4a-29d35be8594b no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:15:59.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-705" for this suite. 
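The ConfigMap-as-environment pattern verified above is straightforward to reproduce by hand; here is a minimal sketch, with illustrative names rather than the generated configmap-test-... names from this run:

kubectl create configmap env-source --from-literal=greeting=hello
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    # print only the variable injected from the ConfigMap
    command: ["sh", "-c", "env | grep GREETING"]
    env:
    - name: GREETING
      valueFrom:
        configMapKeyRef:
          name: env-source
          key: greeting
EOF
kubectl logs configmap-env-demo   # once the pod has Succeeded; expect: GREETING=hello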
Mar 16 13:16:05.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:16:05.440: INFO: namespace configmap-705 deletion completed in 6.082374756s • [SLOW TEST:10.287 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have a terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:16:05.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have a terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:16:09.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3703" for this suite. 
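What the Kubelet test above asserts is that a container which always fails ends up with a populated terminated state in the pod status. A minimal sketch of the same check (pod and container names are illustrative, and the expected Error reason assumes restartPolicy Never with a non-zero exit):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: always-fails-demo
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # exits non-zero immediately
EOF
# once the container has exited, the terminated state carries a reason and exit code:
kubectl get pod always-fails-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'   # expect: Error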
Mar 16 13:16:15.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:16:15.614: INFO: namespace kubelet-test-3703 deletion completed in 6.100808711s • [SLOW TEST:10.173 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have a terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:16:15.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-3b847c4f-0023-441a-8ba3-c074f06adbb2 STEP: Creating a pod to test consume secrets Mar 16 13:16:15.705: INFO: Waiting up to 5m0s for pod "pod-secrets-d1660df0-b1a7-40a7-8c75-616c31cd9cdd" in namespace "secrets-6395" to be "success or failure" Mar 16 13:16:15.723: INFO: Pod "pod-secrets-d1660df0-b1a7-40a7-8c75-616c31cd9cdd": Phase="Pending", Reason="", readiness=false. Elapsed: 17.909568ms Mar 16 13:16:17.726: INFO: Pod "pod-secrets-d1660df0-b1a7-40a7-8c75-616c31cd9cdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021038971s Mar 16 13:16:19.730: INFO: Pod "pod-secrets-d1660df0-b1a7-40a7-8c75-616c31cd9cdd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02510964s STEP: Saw pod success Mar 16 13:16:19.730: INFO: Pod "pod-secrets-d1660df0-b1a7-40a7-8c75-616c31cd9cdd" satisfied condition "success or failure" Mar 16 13:16:19.732: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-d1660df0-b1a7-40a7-8c75-616c31cd9cdd container secret-volume-test: STEP: delete the pod Mar 16 13:16:19.751: INFO: Waiting for pod pod-secrets-d1660df0-b1a7-40a7-8c75-616c31cd9cdd to disappear Mar 16 13:16:19.756: INFO: Pod pod-secrets-d1660df0-b1a7-40a7-8c75-616c31cd9cdd no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:16:19.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6395" for this suite. 
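The Secrets test follows the same create-consume-assert shape as the ConfigMap one, but mounts the secret as a read-only volume instead of injecting it into the environment; each key becomes a file under the mount path. A minimal sketch (secret name, key, and mount path are illustrative):

kubectl create secret generic secret-source --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    # each secret key appears as a file named after the key
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-source
EOF
kubectl logs secret-volume-demo   # once the pod has Succeeded; expect: value-1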
Mar 16 13:16:25.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:16:25.845: INFO: namespace secrets-6395 deletion completed in 6.086172992s • [SLOW TEST:10.231 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:16:25.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Mar 16 13:16:25.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3162' Mar 16 13:16:26.123: INFO: stderr: "" Mar 16 13:16:26.123: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 16 13:16:26.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3162' Mar 16 13:16:26.243: INFO: stderr: "" Mar 16 13:16:26.244: INFO: stdout: "update-demo-nautilus-4vnhb update-demo-nautilus-xmk5m " Mar 16 13:16:26.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vnhb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3162' Mar 16 13:16:26.326: INFO: stderr: "" Mar 16 13:16:26.326: INFO: stdout: "" Mar 16 13:16:26.326: INFO: update-demo-nautilus-4vnhb is created but not running Mar 16 13:16:31.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3162' Mar 16 13:16:31.419: INFO: stderr: "" Mar 16 13:16:31.419: INFO: stdout: "update-demo-nautilus-4vnhb update-demo-nautilus-xmk5m " Mar 16 13:16:31.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vnhb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3162' Mar 16 13:16:31.518: INFO: stderr: "" Mar 16 13:16:31.518: INFO: stdout: "true" Mar 16 13:16:31.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vnhb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3162' Mar 16 13:16:31.609: INFO: stderr: "" Mar 16 13:16:31.609: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 16 13:16:31.609: INFO: validating pod update-demo-nautilus-4vnhb Mar 16 13:16:31.613: INFO: got data: { "image": "nautilus.jpg" } Mar 16 13:16:31.613: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 16 13:16:31.613: INFO: update-demo-nautilus-4vnhb is verified up and running Mar 16 13:16:31.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xmk5m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3162' Mar 16 13:16:31.703: INFO: stderr: "" Mar 16 13:16:31.703: INFO: stdout: "true" Mar 16 13:16:31.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xmk5m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3162' Mar 16 13:16:31.788: INFO: stderr: "" Mar 16 13:16:31.788: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 16 13:16:31.788: INFO: validating pod update-demo-nautilus-xmk5m Mar 16 13:16:31.792: INFO: got data: { "image": "nautilus.jpg" } Mar 16 13:16:31.792: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 16 13:16:31.792: INFO: update-demo-nautilus-xmk5m is verified up and running STEP: scaling down the replication controller Mar 16 13:16:31.794: INFO: scanned /root for discovery docs: Mar 16 13:16:31.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3162' Mar 16 13:16:32.911: INFO: stderr: "" Mar 16 13:16:32.911: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 16 13:16:32.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3162' Mar 16 13:16:33.013: INFO: stderr: "" Mar 16 13:16:33.013: INFO: stdout: "update-demo-nautilus-4vnhb update-demo-nautilus-xmk5m " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 16 13:16:38.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3162' Mar 16 13:16:38.121: INFO: stderr: "" Mar 16 13:16:38.121: INFO: stdout: "update-demo-nautilus-4vnhb update-demo-nautilus-xmk5m " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 16 13:16:43.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3162' Mar 16 13:16:43.207: INFO: stderr: "" Mar 16 13:16:43.207: INFO: stdout: "update-demo-nautilus-4vnhb " Mar 16 13:16:43.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vnhb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3162' Mar 16 13:16:43.297: INFO: stderr: "" Mar 16 13:16:43.297: INFO: stdout: "true" Mar 16 13:16:43.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vnhb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3162' Mar 16 13:16:43.393: INFO: stderr: "" Mar 16 13:16:43.393: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 16 13:16:43.393: INFO: validating pod update-demo-nautilus-4vnhb Mar 16 13:16:43.396: INFO: got data: { "image": "nautilus.jpg" } Mar 16 13:16:43.396: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 16 13:16:43.396: INFO: update-demo-nautilus-4vnhb is verified up and running STEP: scaling up the replication controller Mar 16 13:16:43.398: INFO: scanned /root for discovery docs: Mar 16 13:16:43.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3162' Mar 16 13:16:44.521: INFO: stderr: "" Mar 16 13:16:44.521: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 16 13:16:44.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3162' Mar 16 13:16:44.628: INFO: stderr: "" Mar 16 13:16:44.628: INFO: stdout: "update-demo-nautilus-4vnhb update-demo-nautilus-jmj28 " Mar 16 13:16:44.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vnhb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3162' Mar 16 13:16:44.728: INFO: stderr: "" Mar 16 13:16:44.728: INFO: stdout: "true" Mar 16 13:16:44.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vnhb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3162' Mar 16 13:16:44.803: INFO: stderr: "" Mar 16 13:16:44.803: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 16 13:16:44.803: INFO: validating pod update-demo-nautilus-4vnhb Mar 16 13:16:44.819: INFO: got data: { "image": "nautilus.jpg" } Mar 16 13:16:44.819: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 16 13:16:44.819: INFO: update-demo-nautilus-4vnhb is verified up and running Mar 16 13:16:44.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jmj28 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3162' Mar 16 13:16:44.896: INFO: stderr: "" Mar 16 13:16:44.897: INFO: stdout: "" Mar 16 13:16:44.897: INFO: update-demo-nautilus-jmj28 is created but not running Mar 16 13:16:49.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3162' Mar 16 13:16:49.995: INFO: stderr: "" Mar 16 13:16:49.995: INFO: stdout: "update-demo-nautilus-4vnhb update-demo-nautilus-jmj28 " Mar 16 13:16:49.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vnhb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3162' Mar 16 13:16:50.084: INFO: stderr: "" Mar 16 13:16:50.085: INFO: stdout: "true" Mar 16 13:16:50.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vnhb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3162' Mar 16 13:16:50.170: INFO: stderr: "" Mar 16 13:16:50.170: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 16 13:16:50.170: INFO: validating pod update-demo-nautilus-4vnhb Mar 16 13:16:50.173: INFO: got data: { "image": "nautilus.jpg" } Mar 16 13:16:50.173: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 16 13:16:50.173: INFO: update-demo-nautilus-4vnhb is verified up and running Mar 16 13:16:50.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jmj28 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3162' Mar 16 13:16:50.254: INFO: stderr: "" Mar 16 13:16:50.254: INFO: stdout: "true" Mar 16 13:16:50.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jmj28 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3162' Mar 16 13:16:50.357: INFO: stderr: "" Mar 16 13:16:50.357: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 16 13:16:50.357: INFO: validating pod update-demo-nautilus-jmj28 Mar 16 13:16:50.361: INFO: got data: { "image": "nautilus.jpg" } Mar 16 13:16:50.361: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 16 13:16:50.362: INFO: update-demo-nautilus-jmj28 is verified up and running STEP: using delete to clean up resources Mar 16 13:16:50.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3162' Mar 16 13:16:50.454: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 16 13:16:50.454: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 16 13:16:50.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3162' Mar 16 13:16:50.564: INFO: stderr: "No resources found.\n" Mar 16 13:16:50.564: INFO: stdout: "" Mar 16 13:16:50.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3162 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 16 13:16:50.711: INFO: stderr: "" Mar 16 13:16:50.711: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:16:50.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3162" for this suite. 
Mar 16 13:17:12.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:17:12.896: INFO: namespace kubectl-3162 deletion completed in 22.150638537s • [SLOW TEST:47.050 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:17:12.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 16 13:17:16.986: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:17:17.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8335" for this suite. 
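With TerminationMessagePolicy FallbackToLogsOnError, the kubelet copies the tail of the container log into the terminated state's message field when the container fails without writing /dev/termination-log, which is exactly what the DONE assertion above checks. A minimal sketch (pod and container names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-msg-demo
spec:
  restartPolicy: Never
  containers:
  - name: fail-with-log
    image: docker.io/library/busybox:1.29
    # log something, write nothing to /dev/termination-log, then fail
    command: ["sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
kubectl get pod termination-msg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # expect: DONE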
Mar 16 13:17:23.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:17:23.176: INFO: namespace container-runtime-8335 deletion completed in 6.108530443s • [SLOW TEST:10.280 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:17:23.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Mar 16 13:17:23.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9742 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 16 13:17:29.208: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0316 13:17:27.357865 1463 log.go:172] (0xc000734e70) (0xc000480960) Create stream\nI0316 13:17:27.357916 1463 log.go:172] (0xc000734e70) (0xc000480960) Stream added, broadcasting: 1\nI0316 13:17:27.361945 1463 log.go:172] (0xc000734e70) Reply frame received for 1\nI0316 13:17:27.362012 1463 log.go:172] (0xc000734e70) (0xc000480000) Create stream\nI0316 13:17:27.362024 1463 log.go:172] (0xc000734e70) (0xc000480000) Stream added, broadcasting: 3\nI0316 13:17:27.362807 1463 log.go:172] (0xc000734e70) Reply frame received for 3\nI0316 13:17:27.362844 1463 log.go:172] (0xc000734e70) (0xc00036c280) Create stream\nI0316 13:17:27.362861 1463 log.go:172] (0xc000734e70) (0xc00036c280) Stream added, broadcasting: 5\nI0316 13:17:27.363649 1463 log.go:172] (0xc000734e70) Reply frame received for 5\nI0316 13:17:27.363683 1463 log.go:172] (0xc000734e70) (0xc000124000) Create stream\nI0316 13:17:27.363693 1463 log.go:172] (0xc000734e70) (0xc000124000) Stream added, broadcasting: 7\nI0316 13:17:27.364456 1463 log.go:172] (0xc000734e70) Reply frame received for 7\nI0316 13:17:27.364578 1463 log.go:172] (0xc000480000) (3) Writing data frame\nI0316 13:17:27.364671 1463 log.go:172] (0xc000480000) (3) Writing data frame\nI0316 13:17:27.365583 1463 log.go:172] (0xc000734e70) Data frame received for 5\nI0316 13:17:27.365600 1463 log.go:172] (0xc00036c280) (5) Data frame handling\nI0316 13:17:27.365611 1463 log.go:172] (0xc00036c280) (5) Data frame sent\nI0316 13:17:27.366047 1463 log.go:172] (0xc000734e70) Data frame received for 5\nI0316 13:17:27.366064 1463 log.go:172] (0xc00036c280) (5) Data frame handling\nI0316 13:17:27.366079 1463 log.go:172] (0xc00036c280) (5) Data frame sent\nI0316 13:17:27.405641 1463 log.go:172] (0xc000734e70) Data frame received for 7\nI0316 13:17:27.405667 1463 log.go:172] (0xc000124000) (7) Data frame handling\nI0316 13:17:27.405687 1463 log.go:172] (0xc000734e70) Data frame received for 5\nI0316 13:17:27.405700 1463 log.go:172] (0xc00036c280) (5) Data frame handling\nI0316 13:17:27.405729 1463 log.go:172] (0xc000734e70) (0xc000480000) Stream removed, broadcasting: 3\nI0316 13:17:27.405795 1463 log.go:172] (0xc000734e70) Data frame received for 1\nI0316 13:17:27.405849 1463 log.go:172] (0xc000480960) (1) Data frame handling\nI0316 13:17:27.405900 1463 log.go:172] (0xc000480960) (1) Data frame sent\nI0316 13:17:27.405936 1463 log.go:172] (0xc000734e70) (0xc000480960) Stream removed, broadcasting: 1\nI0316 13:17:27.405999 1463 log.go:172] (0xc000734e70) Go away received\nI0316 13:17:27.406065 1463 log.go:172] (0xc000734e70) (0xc000480960) Stream removed, broadcasting: 1\nI0316 13:17:27.406135 1463 log.go:172] (0xc000734e70) (0xc000480000) Stream removed, broadcasting: 3\nI0316 13:17:27.406159 1463 log.go:172] (0xc000734e70) (0xc00036c280) Stream removed, broadcasting: 5\nI0316 13:17:27.406178 1463 log.go:172] (0xc000734e70) (0xc000124000) Stream removed, broadcasting: 7\n" Mar 16 13:17:29.208: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:17:31.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9742" for this suite. 
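The deprecation warning in the stderr above points at the replacement generator; a rough hand-run equivalent of the same attach-with-stdin exercise using a bare pod instead of a Job would be something like the following (untested here, names illustrative):

echo abcd1234 | kubectl run rm-job-demo --generator=run-pod/v1 --image=docker.io/library/busybox:1.29 \
  --rm --restart=Never --attach --stdin -- sh -c 'cat && echo "stdin closed"'

As with the Job variant, --rm deletes the created resource once the attached session ends.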
Mar 16 13:17:37.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:17:37.323: INFO: namespace kubectl-9742 deletion completed in 6.101439518s • [SLOW TEST:14.145 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:17:37.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1364.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1364.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1364.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1364.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 16 13:17:43.550: INFO: DNS probes using dns-test-e473fd10-94c9-4339-8efa-300abd76a861 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1364.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1364.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1364.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1364.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 16 13:17:51.718: INFO: File wheezy_udp@dns-test-service-3.dns-1364.svc.cluster.local from pod dns-1364/dns-test-e8106c8e-ff56-4b9c-a971-649801383972 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 16 13:17:51.722: INFO: File jessie_udp@dns-test-service-3.dns-1364.svc.cluster.local from pod dns-1364/dns-test-e8106c8e-ff56-4b9c-a971-649801383972 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 16 13:17:51.722: INFO: Lookups using dns-1364/dns-test-e8106c8e-ff56-4b9c-a971-649801383972 failed for: [wheezy_udp@dns-test-service-3.dns-1364.svc.cluster.local jessie_udp@dns-test-service-3.dns-1364.svc.cluster.local] Mar 16 13:17:56.727: INFO: File wheezy_udp@dns-test-service-3.dns-1364.svc.cluster.local from pod dns-1364/dns-test-e8106c8e-ff56-4b9c-a971-649801383972 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 16 13:17:56.730: INFO: File jessie_udp@dns-test-service-3.dns-1364.svc.cluster.local from pod dns-1364/dns-test-e8106c8e-ff56-4b9c-a971-649801383972 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 16 13:17:56.730: INFO: Lookups using dns-1364/dns-test-e8106c8e-ff56-4b9c-a971-649801383972 failed for: [wheezy_udp@dns-test-service-3.dns-1364.svc.cluster.local jessie_udp@dns-test-service-3.dns-1364.svc.cluster.local] Mar 16 13:18:01.728: INFO: File wheezy_udp@dns-test-service-3.dns-1364.svc.cluster.local from pod dns-1364/dns-test-e8106c8e-ff56-4b9c-a971-649801383972 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 16 13:18:01.731: INFO: File jessie_udp@dns-test-service-3.dns-1364.svc.cluster.local from pod dns-1364/dns-test-e8106c8e-ff56-4b9c-a971-649801383972 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 16 13:18:01.731: INFO: Lookups using dns-1364/dns-test-e8106c8e-ff56-4b9c-a971-649801383972 failed for: [wheezy_udp@dns-test-service-3.dns-1364.svc.cluster.local jessie_udp@dns-test-service-3.dns-1364.svc.cluster.local] Mar 16 13:18:06.727: INFO: File wheezy_udp@dns-test-service-3.dns-1364.svc.cluster.local from pod dns-1364/dns-test-e8106c8e-ff56-4b9c-a971-649801383972 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 16 13:18:06.730: INFO: File jessie_udp@dns-test-service-3.dns-1364.svc.cluster.local from pod dns-1364/dns-test-e8106c8e-ff56-4b9c-a971-649801383972 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 16 13:18:06.730: INFO: Lookups using dns-1364/dns-test-e8106c8e-ff56-4b9c-a971-649801383972 failed for: [wheezy_udp@dns-test-service-3.dns-1364.svc.cluster.local jessie_udp@dns-test-service-3.dns-1364.svc.cluster.local] Mar 16 13:18:11.727: INFO: File wheezy_udp@dns-test-service-3.dns-1364.svc.cluster.local from pod dns-1364/dns-test-e8106c8e-ff56-4b9c-a971-649801383972 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 16 13:18:11.731: INFO: File jessie_udp@dns-test-service-3.dns-1364.svc.cluster.local from pod dns-1364/dns-test-e8106c8e-ff56-4b9c-a971-649801383972 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 16 13:18:11.731: INFO: Lookups using dns-1364/dns-test-e8106c8e-ff56-4b9c-a971-649801383972 failed for: [wheezy_udp@dns-test-service-3.dns-1364.svc.cluster.local jessie_udp@dns-test-service-3.dns-1364.svc.cluster.local] Mar 16 13:18:16.736: INFO: DNS probes using dns-test-e8106c8e-ff56-4b9c-a971-649801383972 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1364.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1364.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1364.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1364.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 16 13:18:25.153: INFO: File jessie_udp@dns-test-service-3.dns-1364.svc.cluster.local from pod dns-1364/dns-test-7bec7bcd-cfb8-4ea1-bf29-19f920941120 contains '' instead of '10.98.53.132' Mar 16 13:18:25.154: INFO: Lookups using dns-1364/dns-test-7bec7bcd-cfb8-4ea1-bf29-19f920941120 failed for: [jessie_udp@dns-test-service-3.dns-1364.svc.cluster.local] Mar 16 13:18:30.160: INFO: DNS probes using dns-test-7bec7bcd-cfb8-4ea1-bf29-19f920941120 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:18:30.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1364" for this suite. Mar 16 13:18:36.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:18:36.528: INFO: namespace dns-1364 deletion completed in 6.246145939s • [SLOW TEST:59.205 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:18:36.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 16 13:18:36.642: INFO: Waiting up to 5m0s for pod "pod-5ce4ff5a-b448-4eea-b019-76d57d4d888a" in namespace "emptydir-112" to be "success or failure" Mar 16 13:18:36.712: INFO: Pod "pod-5ce4ff5a-b448-4eea-b019-76d57d4d888a": Phase="Pending", Reason="", readiness=false. Elapsed: 70.094241ms Mar 16 13:18:38.716: INFO: Pod "pod-5ce4ff5a-b448-4eea-b019-76d57d4d888a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.074216383s Mar 16 13:18:40.720: INFO: Pod "pod-5ce4ff5a-b448-4eea-b019-76d57d4d888a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077902974s STEP: Saw pod success Mar 16 13:18:40.720: INFO: Pod "pod-5ce4ff5a-b448-4eea-b019-76d57d4d888a" satisfied condition "success or failure" Mar 16 13:18:40.723: INFO: Trying to get logs from node iruya-worker2 pod pod-5ce4ff5a-b448-4eea-b019-76d57d4d888a container test-container: STEP: delete the pod Mar 16 13:18:40.917: INFO: Waiting for pod pod-5ce4ff5a-b448-4eea-b019-76d57d4d888a to disappear Mar 16 13:18:40.921: INFO: Pod pod-5ce4ff5a-b448-4eea-b019-76d57d4d888a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:18:40.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-112" for this suite. Mar 16 13:18:46.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:18:47.016: INFO: namespace emptydir-112 deletion completed in 6.091897437s • [SLOW TEST:10.488 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:18:47.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 16 13:18:47.099: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2ba95a2f-5754-48be-9ab3-d5feeb4fe442" in namespace "projected-4374" to be "success or failure" Mar 16 13:18:47.102: INFO: Pod "downwardapi-volume-2ba95a2f-5754-48be-9ab3-d5feeb4fe442": Phase="Pending", Reason="", readiness=false. Elapsed: 3.122509ms Mar 16 13:18:49.461: INFO: Pod "downwardapi-volume-2ba95a2f-5754-48be-9ab3-d5feeb4fe442": Phase="Pending", Reason="", readiness=false. Elapsed: 2.362473775s Mar 16 13:18:51.465: INFO: Pod "downwardapi-volume-2ba95a2f-5754-48be-9ab3-d5feeb4fe442": Phase="Running", Reason="", readiness=true. Elapsed: 4.366594442s Mar 16 13:18:53.470: INFO: Pod "downwardapi-volume-2ba95a2f-5754-48be-9ab3-d5feeb4fe442": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.370878311s STEP: Saw pod success Mar 16 13:18:53.470: INFO: Pod "downwardapi-volume-2ba95a2f-5754-48be-9ab3-d5feeb4fe442" satisfied condition "success or failure" Mar 16 13:18:53.473: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-2ba95a2f-5754-48be-9ab3-d5feeb4fe442 container client-container: STEP: delete the pod Mar 16 13:18:53.503: INFO: Waiting for pod downwardapi-volume-2ba95a2f-5754-48be-9ab3-d5feeb4fe442 to disappear Mar 16 13:18:53.519: INFO: Pod downwardapi-volume-2ba95a2f-5754-48be-9ab3-d5feeb4fe442 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:18:53.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4374" for this suite. Mar 16 13:18:59.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:18:59.646: INFO: namespace projected-4374 deletion completed in 6.123239491s • [SLOW TEST:12.629 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:18:59.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:18:59.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8310" for this suite. 
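[editor's note] The "Set QOS Class" test that follows only asserts that the API server populated status.qosClass at admission: a pod whose containers all have equal requests and limits comes back Guaranteed, with no need to wait for scheduling. A sketch of that check under the same client-go and kubeconfig assumptions as above; pod and container names are illustrative:

```go
package main

import (
	"fmt"
	"log"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Equal requests and limits on every container => Guaranteed QoS.
	res := v1.ResourceList{
		v1.ResourceCPU:    resource.MustParse("100m"),
		v1.ResourceMemory: resource.MustParse("100Mi"),
	}
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "qos-demo"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:      "qos-demo",
				Image:     "busybox",
				Command:   []string{"sleep", "3600"},
				Resources: v1.ResourceRequirements{Requests: res, Limits: res},
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(pod)
	if err != nil {
		log.Fatal(err)
	}
	// status.qosClass is set at admission, so it is readable immediately.
	got, err := cs.CoreV1().Pods("default").Get(created.Name, metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("QoSClass:", got.Status.QOSClass) // expected: Guaranteed
}
```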
Mar 16 13:19:21.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:19:21.874: INFO: namespace pods-8310 deletion completed in 22.12021914s • [SLOW TEST:22.227 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:19:21.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Mar 16 13:19:21.948: INFO: Waiting up to 5m0s for pod "client-containers-8f08dde2-73f7-4e0f-b58e-ba817399e1a1" in namespace "containers-1997" to be "success or failure" Mar 16 13:19:21.952: INFO: Pod "client-containers-8f08dde2-73f7-4e0f-b58e-ba817399e1a1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.798934ms Mar 16 13:19:23.955: INFO: Pod "client-containers-8f08dde2-73f7-4e0f-b58e-ba817399e1a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007227203s Mar 16 13:19:25.959: INFO: Pod "client-containers-8f08dde2-73f7-4e0f-b58e-ba817399e1a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011377703s STEP: Saw pod success Mar 16 13:19:25.959: INFO: Pod "client-containers-8f08dde2-73f7-4e0f-b58e-ba817399e1a1" satisfied condition "success or failure" Mar 16 13:19:25.962: INFO: Trying to get logs from node iruya-worker2 pod client-containers-8f08dde2-73f7-4e0f-b58e-ba817399e1a1 container test-container: STEP: delete the pod Mar 16 13:19:25.977: INFO: Waiting for pod client-containers-8f08dde2-73f7-4e0f-b58e-ba817399e1a1 to disappear Mar 16 13:19:25.982: INFO: Pod client-containers-8f08dde2-73f7-4e0f-b58e-ba817399e1a1 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:19:25.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1997" for this suite. 
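[editor's note] The Docker Containers test below exercises a simple rule: with both command and args unset, the image's own ENTRYPOINT and CMD run unmodified; setting Command overrides ENTRYPOINT and setting Args overrides CMD. A minimal pod-spec sketch of that rule; it only constructs and prints the spec, and the busybox image is illustrative (the suite uses a purpose-built image that echoes its own arguments):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Command: nil and Args: nil -> the image defaults apply.
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Containers[0])
}
```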
Mar 16 13:19:32.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:19:32.112: INFO: namespace containers-1997 deletion completed in 6.127614081s • [SLOW TEST:10.238 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:19:32.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 16 13:19:32.218: INFO: Waiting up to 5m0s for pod "pod-1ddb29ae-37a9-4ef0-a6bf-dafa9a27a59c" in namespace "emptydir-5500" to be "success or failure" Mar 16 13:19:32.228: INFO: Pod "pod-1ddb29ae-37a9-4ef0-a6bf-dafa9a27a59c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.806893ms Mar 16 13:19:34.253: INFO: Pod "pod-1ddb29ae-37a9-4ef0-a6bf-dafa9a27a59c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034916491s Mar 16 13:19:36.258: INFO: Pod "pod-1ddb29ae-37a9-4ef0-a6bf-dafa9a27a59c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039449551s STEP: Saw pod success Mar 16 13:19:36.258: INFO: Pod "pod-1ddb29ae-37a9-4ef0-a6bf-dafa9a27a59c" satisfied condition "success or failure" Mar 16 13:19:36.261: INFO: Trying to get logs from node iruya-worker2 pod pod-1ddb29ae-37a9-4ef0-a6bf-dafa9a27a59c container test-container: STEP: delete the pod Mar 16 13:19:36.277: INFO: Waiting for pod pod-1ddb29ae-37a9-4ef0-a6bf-dafa9a27a59c to disappear Mar 16 13:19:36.281: INFO: Pod pod-1ddb29ae-37a9-4ef0-a6bf-dafa9a27a59c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:19:36.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5500" for this suite. 
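[editor's note] The "(root,0777,tmpfs)" case below translates to: an emptyDir volume with medium Memory (a RAM-backed tmpfs mount) plus a container that creates a file with the requested mode and reports what it sees. A sketch of that pod; the shell one-liner stands in for the suite's mounttest helper image:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "test-volume",
				VolumeSource: v1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
				},
			}},
			Containers: []v1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Create a file with mode 0777 and report its permissions
				// and the mount type, roughly what the helper image checks.
				Command: []string{"sh", "-c",
					"touch /test/f && chmod 0777 /test/f && ls -l /test/f && mount | grep /test"},
				VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/test"}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}
```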
Mar 16 13:19:42.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:19:42.370: INFO: namespace emptydir-5500 deletion completed in 6.086029223s • [SLOW TEST:10.257 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:19:42.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-3266 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3266 to expose endpoints map[] Mar 16 13:19:42.461: INFO: Get endpoints failed (32.781942ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 16 13:19:43.465: INFO: successfully validated that service endpoint-test2 in namespace services-3266 exposes endpoints map[] (1.036438763s elapsed) STEP: Creating pod pod1 in namespace services-3266 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3266 to expose endpoints map[pod1:[80]] Mar 16 13:19:46.558: INFO: successfully validated that service endpoint-test2 in namespace services-3266 exposes endpoints map[pod1:[80]] (3.086254968s elapsed) STEP: Creating pod pod2 in namespace services-3266 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3266 to expose endpoints map[pod1:[80] pod2:[80]] Mar 16 13:19:50.638: INFO: successfully validated that service endpoint-test2 in namespace services-3266 exposes endpoints map[pod1:[80] pod2:[80]] (4.076645812s elapsed) STEP: Deleting pod pod1 in namespace services-3266 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3266 to expose endpoints map[pod2:[80]] Mar 16 13:19:51.702: INFO: successfully validated that service endpoint-test2 in namespace services-3266 exposes endpoints map[pod2:[80]] (1.059001973s elapsed) STEP: Deleting pod pod2 in namespace services-3266 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3266 to expose endpoints map[] Mar 16 13:19:52.715: INFO: successfully validated that service endpoint-test2 in namespace services-3266 exposes endpoints map[] (1.007609082s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:19:52.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3266" for this suite. 
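[editor's note] The Services test below ("waiting up to 3m0s ... to expose endpoints map[...]") is at heart a poll of the service's Endpoints object until its ready addresses match the expected pod set. A sketch of that loop under the same client-go assumptions; namespace, service name, and the expected count of two ready addresses mirror the log:

```go
package main

import (
	"fmt"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Poll until the Endpoints object lists one ready address per expected
	// backing pod, as the e2e validation does.
	err = wait.PollImmediate(time.Second, 3*time.Minute, func() (bool, error) {
		ep, err := cs.CoreV1().Endpoints("default").Get("endpoint-test2", metav1.GetOptions{})
		if err != nil {
			return false, nil // the real test also tolerates "not found" briefly
		}
		ready := 0
		for _, ss := range ep.Subsets {
			ready += len(ss.Addresses)
		}
		fmt.Printf("ready addresses: %d\n", ready)
		return ready == 2, nil // expecting pod1 and pod2
	})
	if err != nil {
		log.Fatal(err)
	}
}
```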
Mar 16 13:20:14.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:20:14.907: INFO: namespace services-3266 deletion completed in 22.087993952s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:32.537 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:20:14.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 16 13:20:14.990: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7e913c25-5ff0-4eaf-af6e-01a5aac69b2a" in namespace "projected-3555" to be "success or failure" Mar 16 13:20:14.994: INFO: Pod "downwardapi-volume-7e913c25-5ff0-4eaf-af6e-01a5aac69b2a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.669884ms Mar 16 13:20:17.002: INFO: Pod "downwardapi-volume-7e913c25-5ff0-4eaf-af6e-01a5aac69b2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01179655s Mar 16 13:20:19.007: INFO: Pod "downwardapi-volume-7e913c25-5ff0-4eaf-af6e-01a5aac69b2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016576707s STEP: Saw pod success Mar 16 13:20:19.007: INFO: Pod "downwardapi-volume-7e913c25-5ff0-4eaf-af6e-01a5aac69b2a" satisfied condition "success or failure" Mar 16 13:20:19.010: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-7e913c25-5ff0-4eaf-af6e-01a5aac69b2a container client-container: STEP: delete the pod Mar 16 13:20:19.032: INFO: Waiting for pod downwardapi-volume-7e913c25-5ff0-4eaf-af6e-01a5aac69b2a to disappear Mar 16 13:20:19.037: INFO: Pod downwardapi-volume-7e913c25-5ff0-4eaf-af6e-01a5aac69b2a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:20:19.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3555" for this suite. 
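[editor's note] In the projected downwardAPI test below, the cpu-request file comes from a projected volume whose downwardAPI source uses a resourceFieldRef. A sketch of that wiring; the names, the 250m request, and the 1m divisor (which makes the mounted file read the request in millicores, i.e. "250") are illustrative:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "podinfo",
				VolumeSource: v1.VolumeSource{
					Projected: &v1.ProjectedVolumeSource{
						Sources: []v1.VolumeProjection{{
							DownwardAPI: &v1.DownwardAPIProjection{
								Items: []v1.DownwardAPIVolumeFile{{
									Path: "cpu_request",
									ResourceFieldRef: &v1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.cpu",
										// Divisor 1m -> value in millicores.
										Divisor: resource.MustParse("1m"),
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []v1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{v1.ResourceCPU: resource.MustParse("250m")},
				},
				VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}
```

The test then reads the container's logs and asserts the printed value matches the declared request.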
Mar 16 13:20:25.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:20:25.131: INFO: namespace projected-3555 deletion completed in 6.091271712s • [SLOW TEST:10.223 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:20:25.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 16 13:20:25.230: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:25.254: INFO: Number of nodes with available pods: 0 Mar 16 13:20:25.254: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:20:26.259: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:26.288: INFO: Number of nodes with available pods: 0 Mar 16 13:20:26.288: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:20:27.262: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:27.265: INFO: Number of nodes with available pods: 0 Mar 16 13:20:27.265: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:20:28.259: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:28.262: INFO: Number of nodes with available pods: 0 Mar 16 13:20:28.262: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:20:29.259: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:29.262: INFO: Number of nodes with available pods: 2 Mar 16 13:20:29.262: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Mar 16 13:20:29.278: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:29.281: INFO: Number of nodes with available pods: 1 Mar 16 13:20:29.281: INFO: Node iruya-worker2 is running more than one daemon pod Mar 16 13:20:30.409: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:30.412: INFO: Number of nodes with available pods: 1 Mar 16 13:20:30.412: INFO: Node iruya-worker2 is running more than one daemon pod Mar 16 13:20:31.289: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:31.292: INFO: Number of nodes with available pods: 1 Mar 16 13:20:31.292: INFO: Node iruya-worker2 is running more than one daemon pod Mar 16 13:20:32.285: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:32.288: INFO: Number of nodes with available pods: 1 Mar 16 13:20:32.288: INFO: Node iruya-worker2 is running more than one daemon pod Mar 16 13:20:33.286: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:33.289: INFO: Number of nodes with available pods: 1 Mar 16 13:20:33.289: INFO: Node iruya-worker2 is running more than one daemon pod Mar 16 13:20:34.468: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:34.473: INFO: Number of nodes with available pods: 1 Mar 16 13:20:34.473: INFO: Node iruya-worker2 is running more than one daemon pod Mar 16 13:20:35.285: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:35.288: INFO: Number of nodes with available pods: 1 Mar 16 13:20:35.288: INFO: Node iruya-worker2 is running more than one daemon pod Mar 16 13:20:36.286: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:36.288: INFO: Number of nodes with available pods: 1 Mar 16 13:20:36.288: INFO: Node iruya-worker2 is running more than one daemon pod Mar 16 13:20:37.289: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:37.292: INFO: Number of nodes with available pods: 1 Mar 16 13:20:37.292: INFO: Node iruya-worker2 is running more than one daemon pod Mar 16 13:20:38.286: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:38.289: INFO: Number of nodes with available pods: 1 Mar 16 13:20:38.289: INFO: Node iruya-worker2 is running more than one daemon pod Mar 16 13:20:39.286: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:39.312: INFO: Number of nodes with available pods: 1 Mar 16 13:20:39.312: INFO: Node iruya-worker2 is running more than one daemon pod Mar 16 13:20:40.286: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:40.289: INFO: Number of nodes with available pods: 1 Mar 16 13:20:40.289: INFO: Node iruya-worker2 is running more than one daemon pod Mar 16 13:20:41.295: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:41.299: INFO: Number of nodes with available pods: 1 Mar 16 13:20:41.299: INFO: Node iruya-worker2 is running more than one daemon pod Mar 16 13:20:42.286: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:42.289: INFO: Number of nodes with available pods: 1 Mar 16 13:20:42.289: INFO: Node iruya-worker2 is running more than one daemon pod Mar 16 13:20:43.285: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:43.288: INFO: Number of nodes with available pods: 1 Mar 16 13:20:43.288: INFO: Node iruya-worker2 is running more than one daemon pod Mar 16 13:20:44.286: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:44.290: INFO: Number of nodes with available pods: 1 Mar 16 13:20:44.290: INFO: Node iruya-worker2 is running more than one daemon pod Mar 16 13:20:45.285: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:45.287: INFO: Number of nodes with available pods: 2 Mar 16 13:20:45.287: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2407, will wait for the garbage collector to delete the pods Mar 16 13:20:45.348: INFO: Deleting DaemonSet.extensions daemon-set took: 5.828564ms Mar 16 13:20:45.648: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.312596ms Mar 16 13:20:52.252: INFO: Number of nodes with available pods: 0 Mar 16 13:20:52.252: INFO: Number of running nodes: 0, number of available pods: 0 Mar 16 13:20:52.258: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2407/daemonsets","resourceVersion":"158963"},"items":null} Mar 16 13:20:52.261: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2407/pods","resourceVersion":"158963"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:20:52.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2407" for this 
suite. Mar 16 13:20:58.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:20:58.366: INFO: namespace daemonsets-2407 deletion completed in 6.091184288s • [SLOW TEST:33.233 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:20:58.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 16 13:20:58.403: INFO: Creating ReplicaSet my-hostname-basic-a1972026-ae09-47b3-b008-b87a5307f814 Mar 16 13:20:58.409: INFO: Pod name my-hostname-basic-a1972026-ae09-47b3-b008-b87a5307f814: Found 0 pods out of 1 Mar 16 13:21:03.413: INFO: Pod name my-hostname-basic-a1972026-ae09-47b3-b008-b87a5307f814: Found 1 pods out of 1 Mar 16 13:21:03.413: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-a1972026-ae09-47b3-b008-b87a5307f814" is running Mar 16 13:21:03.415: INFO: Pod "my-hostname-basic-a1972026-ae09-47b3-b008-b87a5307f814-trlsb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-16 13:20:58 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-16 13:21:01 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-16 13:21:01 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-16 13:20:58 +0000 UTC Reason: Message:}]) Mar 16 13:21:03.415: INFO: Trying to dial the pod Mar 16 13:21:08.422: INFO: Controller my-hostname-basic-a1972026-ae09-47b3-b008-b87a5307f814: Got expected result from replica 1 [my-hostname-basic-a1972026-ae09-47b3-b008-b87a5307f814-trlsb]: "my-hostname-basic-a1972026-ae09-47b3-b008-b87a5307f814-trlsb", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:21:08.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3623" for this suite. 
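[editor's note] The ReplicaSet test above boils down to: create a one-replica ReplicaSet from a public hostname-serving image, wait until the controller reports a ready replica, then dial the pod. A sketch of the create-and-wait portion under the same client-go assumptions; the image tag is the one this suite's generation of tests used, but any image serving its hostname on a known port would do:

```go
package main

import (
	"fmt"
	"log"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	labels := map[string]string{"name": "my-hostname-basic"}
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{
						Name:  "my-hostname-basic",
						Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1",
						Ports: []v1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
	if _, err := cs.AppsV1().ReplicaSets("default").Create(rs); err != nil {
		log.Fatal(err)
	}
	// Wait until the controller reports one ready replica.
	err = wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		got, err := cs.AppsV1().ReplicaSets("default").Get("my-hostname-basic", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("ready replicas: %d\n", got.Status.ReadyReplicas)
		return got.Status.ReadyReplicas == 1, nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```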
Mar 16 13:21:14.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:21:14.510: INFO: namespace replicaset-3623 deletion completed in 6.085820997s • [SLOW TEST:16.144 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:21:14.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-00f32b52-9967-424a-bb3e-6bebb81136b4 STEP: Creating a pod to test consume secrets Mar 16 13:21:14.713: INFO: Waiting up to 5m0s for pod "pod-secrets-c50b7109-7ea4-4f50-9f02-2472f823893a" in namespace "secrets-6118" to be "success or failure" Mar 16 13:21:14.721: INFO: Pod "pod-secrets-c50b7109-7ea4-4f50-9f02-2472f823893a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.740127ms Mar 16 13:21:16.725: INFO: Pod "pod-secrets-c50b7109-7ea4-4f50-9f02-2472f823893a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011812966s Mar 16 13:21:18.728: INFO: Pod "pod-secrets-c50b7109-7ea4-4f50-9f02-2472f823893a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014665971s STEP: Saw pod success Mar 16 13:21:18.728: INFO: Pod "pod-secrets-c50b7109-7ea4-4f50-9f02-2472f823893a" satisfied condition "success or failure" Mar 16 13:21:18.730: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-c50b7109-7ea4-4f50-9f02-2472f823893a container secret-volume-test: STEP: delete the pod Mar 16 13:21:18.783: INFO: Waiting for pod pod-secrets-c50b7109-7ea4-4f50-9f02-2472f823893a to disappear Mar 16 13:21:18.803: INFO: Pod pod-secrets-c50b7109-7ea4-4f50-9f02-2472f823893a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:21:18.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6118" for this suite. 
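[editor's note] "With mappings" in the Secrets test below means the volume's items list remaps a secret key to a custom file path inside the mount, instead of the default file named after the key. A sketch of that pod spec; secret name, key, and paths are illustrative, and the shell one-liner stands in for the suite's mounttest image:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "secret-volume",
				VolumeSource: v1.VolumeSource{
					Secret: &v1.SecretVolumeSource{
						SecretName: "secret-test-map",
						// Map the key "data-1" to a custom path; without
						// Items the file would simply be named "data-1".
						Items: []v1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1",
						}},
					},
				},
			}},
			Containers: []v1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}
```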
Mar 16 13:21:24.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:21:24.919: INFO: namespace secrets-6118 deletion completed in 6.113147843s • [SLOW TEST:10.409 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:21:24.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9565 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-9565 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9565 Mar 16 13:21:25.168: INFO: Found 0 stateful pods, waiting for 1 Mar 16 13:21:35.173: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 16 13:21:35.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9565 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 16 13:21:35.486: INFO: stderr: "I0316 13:21:35.386922 1500 log.go:172] (0xc000a2e0b0) (0xc00067c960) Create stream\nI0316 13:21:35.386955 1500 log.go:172] (0xc000a2e0b0) (0xc00067c960) Stream added, broadcasting: 1\nI0316 13:21:35.388774 1500 log.go:172] (0xc000a2e0b0) Reply frame received for 1\nI0316 13:21:35.388813 1500 log.go:172] (0xc000a2e0b0) (0xc000186000) Create stream\nI0316 13:21:35.388827 1500 log.go:172] (0xc000a2e0b0) (0xc000186000) Stream added, broadcasting: 3\nI0316 13:21:35.389994 1500 log.go:172] (0xc000a2e0b0) Reply frame received for 3\nI0316 13:21:35.390036 1500 log.go:172] (0xc000a2e0b0) (0xc0002c4000) Create stream\nI0316 13:21:35.390053 1500 log.go:172] (0xc000a2e0b0) (0xc0002c4000) Stream added, broadcasting: 5\nI0316 13:21:35.390942 1500 log.go:172] (0xc000a2e0b0) Reply frame received for 5\nI0316 13:21:35.451897 1500 log.go:172] (0xc000a2e0b0) Data frame received for 5\nI0316 13:21:35.451924 1500 log.go:172] (0xc0002c4000) (5) Data frame handling\nI0316 13:21:35.451963 1500 log.go:172] (0xc0002c4000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0316 13:21:35.480130 1500 log.go:172] (0xc000a2e0b0) Data frame received for 3\nI0316 
13:21:35.480170 1500 log.go:172] (0xc000186000) (3) Data frame handling\nI0316 13:21:35.480181 1500 log.go:172] (0xc000186000) (3) Data frame sent\nI0316 13:21:35.480189 1500 log.go:172] (0xc000a2e0b0) Data frame received for 3\nI0316 13:21:35.480196 1500 log.go:172] (0xc000186000) (3) Data frame handling\nI0316 13:21:35.480225 1500 log.go:172] (0xc000a2e0b0) Data frame received for 5\nI0316 13:21:35.480238 1500 log.go:172] (0xc0002c4000) (5) Data frame handling\nI0316 13:21:35.482295 1500 log.go:172] (0xc000a2e0b0) Data frame received for 1\nI0316 13:21:35.482325 1500 log.go:172] (0xc00067c960) (1) Data frame handling\nI0316 13:21:35.482353 1500 log.go:172] (0xc00067c960) (1) Data frame sent\nI0316 13:21:35.482375 1500 log.go:172] (0xc000a2e0b0) (0xc00067c960) Stream removed, broadcasting: 1\nI0316 13:21:35.482403 1500 log.go:172] (0xc000a2e0b0) Go away received\nI0316 13:21:35.482890 1500 log.go:172] (0xc000a2e0b0) (0xc00067c960) Stream removed, broadcasting: 1\nI0316 13:21:35.482911 1500 log.go:172] (0xc000a2e0b0) (0xc000186000) Stream removed, broadcasting: 3\nI0316 13:21:35.482920 1500 log.go:172] (0xc000a2e0b0) (0xc0002c4000) Stream removed, broadcasting: 5\n" Mar 16 13:21:35.486: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 16 13:21:35.486: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 16 13:21:35.490: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 16 13:21:45.505: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 16 13:21:45.505: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 13:21:45.530: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 13:21:45.530: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:25 +0000 UTC }] Mar 16 13:21:45.531: INFO: Mar 16 13:21:45.531: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 16 13:21:46.535: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.982159956s Mar 16 13:21:47.706: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.977372868s Mar 16 13:21:48.711: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.806267544s Mar 16 13:21:49.717: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.802020321s Mar 16 13:21:50.722: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.795484745s Mar 16 13:21:51.726: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.790603227s Mar 16 13:21:52.730: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.78615174s Mar 16 13:21:53.735: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.782196359s Mar 16 13:21:54.741: INFO: Verifying statefulset ss doesn't scale past 3 for another 777.250455ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9565 Mar 16 13:21:55.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9565 ss-0 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true' Mar 16 13:21:55.984: INFO: stderr: "I0316 13:21:55.890997 1521 log.go:172] (0xc000712b00) (0xc000978780) Create stream\nI0316 13:21:55.891070 1521 log.go:172] (0xc000712b00) (0xc000978780) Stream added, broadcasting: 1\nI0316 13:21:55.895786 1521 log.go:172] (0xc000712b00) Reply frame received for 1\nI0316 13:21:55.895830 1521 log.go:172] (0xc000712b00) (0xc0006f2000) Create stream\nI0316 13:21:55.895844 1521 log.go:172] (0xc000712b00) (0xc0006f2000) Stream added, broadcasting: 3\nI0316 13:21:55.896794 1521 log.go:172] (0xc000712b00) Reply frame received for 3\nI0316 13:21:55.896839 1521 log.go:172] (0xc000712b00) (0xc000978000) Create stream\nI0316 13:21:55.896852 1521 log.go:172] (0xc000712b00) (0xc000978000) Stream added, broadcasting: 5\nI0316 13:21:55.897859 1521 log.go:172] (0xc000712b00) Reply frame received for 5\nI0316 13:21:55.979013 1521 log.go:172] (0xc000712b00) Data frame received for 5\nI0316 13:21:55.979084 1521 log.go:172] (0xc000978000) (5) Data frame handling\nI0316 13:21:55.979098 1521 log.go:172] (0xc000978000) (5) Data frame sent\nI0316 13:21:55.979109 1521 log.go:172] (0xc000712b00) Data frame received for 5\nI0316 13:21:55.979117 1521 log.go:172] (0xc000978000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0316 13:21:55.979141 1521 log.go:172] (0xc000712b00) Data frame received for 3\nI0316 13:21:55.979150 1521 log.go:172] (0xc0006f2000) (3) Data frame handling\nI0316 13:21:55.979164 1521 log.go:172] (0xc0006f2000) (3) Data frame sent\nI0316 13:21:55.979173 1521 log.go:172] (0xc000712b00) Data frame received for 3\nI0316 13:21:55.979182 1521 log.go:172] (0xc0006f2000) (3) Data frame handling\nI0316 13:21:55.980921 1521 log.go:172] (0xc000712b00) Data frame received for 1\nI0316 13:21:55.980939 1521 log.go:172] (0xc000978780) (1) Data frame handling\nI0316 13:21:55.980949 1521 log.go:172] (0xc000978780) (1) Data frame sent\nI0316 13:21:55.980962 1521 log.go:172] (0xc000712b00) (0xc000978780) Stream removed, broadcasting: 1\nI0316 13:21:55.981089 1521 log.go:172] (0xc000712b00) Go away received\nI0316 13:21:55.981351 1521 log.go:172] (0xc000712b00) (0xc000978780) Stream removed, broadcasting: 1\nI0316 13:21:55.981366 1521 log.go:172] (0xc000712b00) (0xc0006f2000) Stream removed, broadcasting: 3\nI0316 13:21:55.981374 1521 log.go:172] (0xc000712b00) (0xc000978000) Stream removed, broadcasting: 5\n" Mar 16 13:21:55.985: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 16 13:21:55.985: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 16 13:21:55.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9565 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 13:21:56.197: INFO: stderr: "I0316 13:21:56.118221 1542 log.go:172] (0xc00013b080) (0xc00066caa0) Create stream\nI0316 13:21:56.118271 1542 log.go:172] (0xc00013b080) (0xc00066caa0) Stream added, broadcasting: 1\nI0316 13:21:56.120813 1542 log.go:172] (0xc00013b080) Reply frame received for 1\nI0316 13:21:56.120858 1542 log.go:172] (0xc00013b080) (0xc00086a000) Create stream\nI0316 13:21:56.120870 1542 log.go:172] (0xc00013b080) (0xc00086a000) Stream added, broadcasting: 3\nI0316 13:21:56.122115 1542 log.go:172] (0xc00013b080) Reply frame received for 3\nI0316 13:21:56.122133 1542 log.go:172] (0xc00013b080) (0xc00086a0a0) Create stream\nI0316 
13:21:56.122138 1542 log.go:172] (0xc00013b080) (0xc00086a0a0) Stream added, broadcasting: 5\nI0316 13:21:56.123004 1542 log.go:172] (0xc00013b080) Reply frame received for 5\nI0316 13:21:56.191717 1542 log.go:172] (0xc00013b080) Data frame received for 5\nI0316 13:21:56.191755 1542 log.go:172] (0xc00086a0a0) (5) Data frame handling\nI0316 13:21:56.191777 1542 log.go:172] (0xc00086a0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0316 13:21:56.191837 1542 log.go:172] (0xc00013b080) Data frame received for 5\nI0316 13:21:56.191857 1542 log.go:172] (0xc00086a0a0) (5) Data frame handling\nI0316 13:21:56.191887 1542 log.go:172] (0xc00013b080) Data frame received for 3\nI0316 13:21:56.191908 1542 log.go:172] (0xc00086a000) (3) Data frame handling\nI0316 13:21:56.191925 1542 log.go:172] (0xc00086a000) (3) Data frame sent\nI0316 13:21:56.191947 1542 log.go:172] (0xc00013b080) Data frame received for 3\nI0316 13:21:56.191963 1542 log.go:172] (0xc00086a000) (3) Data frame handling\nI0316 13:21:56.194062 1542 log.go:172] (0xc00013b080) Data frame received for 1\nI0316 13:21:56.194081 1542 log.go:172] (0xc00066caa0) (1) Data frame handling\nI0316 13:21:56.194094 1542 log.go:172] (0xc00066caa0) (1) Data frame sent\nI0316 13:21:56.194108 1542 log.go:172] (0xc00013b080) (0xc00066caa0) Stream removed, broadcasting: 1\nI0316 13:21:56.194272 1542 log.go:172] (0xc00013b080) Go away received\nI0316 13:21:56.194482 1542 log.go:172] (0xc00013b080) (0xc00066caa0) Stream removed, broadcasting: 1\nI0316 13:21:56.194506 1542 log.go:172] (0xc00013b080) (0xc00086a000) Stream removed, broadcasting: 3\nI0316 13:21:56.194517 1542 log.go:172] (0xc00013b080) (0xc00086a0a0) Stream removed, broadcasting: 5\n" Mar 16 13:21:56.197: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 16 13:21:56.197: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 16 13:21:56.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9565 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 13:21:56.410: INFO: stderr: "I0316 13:21:56.325617 1563 log.go:172] (0xc000a2a420) (0xc0008d4640) Create stream\nI0316 13:21:56.325676 1563 log.go:172] (0xc000a2a420) (0xc0008d4640) Stream added, broadcasting: 1\nI0316 13:21:56.327762 1563 log.go:172] (0xc000a2a420) Reply frame received for 1\nI0316 13:21:56.327824 1563 log.go:172] (0xc000a2a420) (0xc000a06000) Create stream\nI0316 13:21:56.327869 1563 log.go:172] (0xc000a2a420) (0xc000a06000) Stream added, broadcasting: 3\nI0316 13:21:56.328957 1563 log.go:172] (0xc000a2a420) Reply frame received for 3\nI0316 13:21:56.329005 1563 log.go:172] (0xc000a2a420) (0xc0006ae140) Create stream\nI0316 13:21:56.329035 1563 log.go:172] (0xc000a2a420) (0xc0006ae140) Stream added, broadcasting: 5\nI0316 13:21:56.330405 1563 log.go:172] (0xc000a2a420) Reply frame received for 5\nI0316 13:21:56.404283 1563 log.go:172] (0xc000a2a420) Data frame received for 3\nI0316 13:21:56.404319 1563 log.go:172] (0xc000a06000) (3) Data frame handling\nI0316 13:21:56.404353 1563 log.go:172] (0xc000a06000) (3) Data frame sent\nI0316 13:21:56.404539 1563 log.go:172] (0xc000a2a420) Data frame received for 3\nI0316 13:21:56.404585 1563 log.go:172] (0xc000a2a420) Data frame received for 5\nI0316 13:21:56.404628 1563 log.go:172] (0xc0006ae140) (5) Data frame 
handling\nI0316 13:21:56.404653 1563 log.go:172] (0xc0006ae140) (5) Data frame sent\nI0316 13:21:56.404678 1563 log.go:172] (0xc000a2a420) Data frame received for 5\nI0316 13:21:56.404693 1563 log.go:172] (0xc0006ae140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0316 13:21:56.404722 1563 log.go:172] (0xc000a06000) (3) Data frame handling\nI0316 13:21:56.406503 1563 log.go:172] (0xc000a2a420) Data frame received for 1\nI0316 13:21:56.406532 1563 log.go:172] (0xc0008d4640) (1) Data frame handling\nI0316 13:21:56.406552 1563 log.go:172] (0xc0008d4640) (1) Data frame sent\nI0316 13:21:56.406576 1563 log.go:172] (0xc000a2a420) (0xc0008d4640) Stream removed, broadcasting: 1\nI0316 13:21:56.406616 1563 log.go:172] (0xc000a2a420) Go away received\nI0316 13:21:56.407037 1563 log.go:172] (0xc000a2a420) (0xc0008d4640) Stream removed, broadcasting: 1\nI0316 13:21:56.407061 1563 log.go:172] (0xc000a2a420) (0xc000a06000) Stream removed, broadcasting: 3\nI0316 13:21:56.407073 1563 log.go:172] (0xc000a2a420) (0xc0006ae140) Stream removed, broadcasting: 5\n" Mar 16 13:21:56.410: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 16 13:21:56.410: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 16 13:21:56.415: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Mar 16 13:22:06.422: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 16 13:22:06.422: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 16 13:22:06.422: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 16 13:22:06.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9565 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 16 13:22:06.646: INFO: stderr: "I0316 13:22:06.561258 1583 log.go:172] (0xc000a1c630) (0xc000752960) Create stream\nI0316 13:22:06.561315 1583 log.go:172] (0xc000a1c630) (0xc000752960) Stream added, broadcasting: 1\nI0316 13:22:06.564809 1583 log.go:172] (0xc000a1c630) Reply frame received for 1\nI0316 13:22:06.564862 1583 log.go:172] (0xc000a1c630) (0xc0007521e0) Create stream\nI0316 13:22:06.564876 1583 log.go:172] (0xc000a1c630) (0xc0007521e0) Stream added, broadcasting: 3\nI0316 13:22:06.566074 1583 log.go:172] (0xc000a1c630) Reply frame received for 3\nI0316 13:22:06.566139 1583 log.go:172] (0xc000a1c630) (0xc000424000) Create stream\nI0316 13:22:06.566159 1583 log.go:172] (0xc000a1c630) (0xc000424000) Stream added, broadcasting: 5\nI0316 13:22:06.567044 1583 log.go:172] (0xc000a1c630) Reply frame received for 5\nI0316 13:22:06.639668 1583 log.go:172] (0xc000a1c630) Data frame received for 3\nI0316 13:22:06.639711 1583 log.go:172] (0xc0007521e0) (3) Data frame handling\nI0316 13:22:06.639726 1583 log.go:172] (0xc0007521e0) (3) Data frame sent\nI0316 13:22:06.639738 1583 log.go:172] (0xc000a1c630) Data frame received for 3\nI0316 13:22:06.639747 1583 log.go:172] (0xc0007521e0) (3) Data frame handling\nI0316 13:22:06.639786 1583 log.go:172] (0xc000a1c630) Data frame received for 5\nI0316 13:22:06.639799 1583 log.go:172] (0xc000424000) (5) Data frame handling\nI0316 13:22:06.639816 1583 log.go:172] (0xc000424000) 
(5) Data frame sent\nI0316 13:22:06.639826 1583 log.go:172] (0xc000a1c630) Data frame received for 5\nI0316 13:22:06.639836 1583 log.go:172] (0xc000424000) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0316 13:22:06.641580 1583 log.go:172] (0xc000a1c630) Data frame received for 1\nI0316 13:22:06.641621 1583 log.go:172] (0xc000752960) (1) Data frame handling\nI0316 13:22:06.641657 1583 log.go:172] (0xc000752960) (1) Data frame sent\nI0316 13:22:06.641674 1583 log.go:172] (0xc000a1c630) (0xc000752960) Stream removed, broadcasting: 1\nI0316 13:22:06.641695 1583 log.go:172] (0xc000a1c630) Go away received\nI0316 13:22:06.642183 1583 log.go:172] (0xc000a1c630) (0xc000752960) Stream removed, broadcasting: 1\nI0316 13:22:06.642210 1583 log.go:172] (0xc000a1c630) (0xc0007521e0) Stream removed, broadcasting: 3\nI0316 13:22:06.642223 1583 log.go:172] (0xc000a1c630) (0xc000424000) Stream removed, broadcasting: 5\n" Mar 16 13:22:06.646: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 16 13:22:06.646: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 16 13:22:06.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9565 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 16 13:22:06.901: INFO: stderr: "I0316 13:22:06.767450 1605 log.go:172] (0xc0009a0630) (0xc0005f8d20) Create stream\nI0316 13:22:06.767513 1605 log.go:172] (0xc0009a0630) (0xc0005f8d20) Stream added, broadcasting: 1\nI0316 13:22:06.770120 1605 log.go:172] (0xc0009a0630) Reply frame received for 1\nI0316 13:22:06.770164 1605 log.go:172] (0xc0009a0630) (0xc000a42000) Create stream\nI0316 13:22:06.770191 1605 log.go:172] (0xc0009a0630) (0xc000a42000) Stream added, broadcasting: 3\nI0316 13:22:06.771003 1605 log.go:172] (0xc0009a0630) Reply frame received for 3\nI0316 13:22:06.771355 1605 log.go:172] (0xc0009a0630) (0xc0009b6000) Create stream\nI0316 13:22:06.771431 1605 log.go:172] (0xc0009a0630) (0xc0009b6000) Stream added, broadcasting: 5\nI0316 13:22:06.773355 1605 log.go:172] (0xc0009a0630) Reply frame received for 5\nI0316 13:22:06.833571 1605 log.go:172] (0xc0009a0630) Data frame received for 5\nI0316 13:22:06.833600 1605 log.go:172] (0xc0009b6000) (5) Data frame handling\nI0316 13:22:06.833617 1605 log.go:172] (0xc0009b6000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0316 13:22:06.895886 1605 log.go:172] (0xc0009a0630) Data frame received for 5\nI0316 13:22:06.895999 1605 log.go:172] (0xc0009b6000) (5) Data frame handling\nI0316 13:22:06.896033 1605 log.go:172] (0xc0009a0630) Data frame received for 3\nI0316 13:22:06.896051 1605 log.go:172] (0xc000a42000) (3) Data frame handling\nI0316 13:22:06.896066 1605 log.go:172] (0xc000a42000) (3) Data frame sent\nI0316 13:22:06.896087 1605 log.go:172] (0xc0009a0630) Data frame received for 3\nI0316 13:22:06.896102 1605 log.go:172] (0xc000a42000) (3) Data frame handling\nI0316 13:22:06.897611 1605 log.go:172] (0xc0009a0630) Data frame received for 1\nI0316 13:22:06.897633 1605 log.go:172] (0xc0005f8d20) (1) Data frame handling\nI0316 13:22:06.897648 1605 log.go:172] (0xc0005f8d20) (1) Data frame sent\nI0316 13:22:06.897661 1605 log.go:172] (0xc0009a0630) (0xc0005f8d20) Stream removed, broadcasting: 1\nI0316 13:22:06.897925 1605 log.go:172] (0xc0009a0630) (0xc0005f8d20) Stream removed, broadcasting: 1\nI0316 13:22:06.897948 1605 log.go:172] 
(0xc0009a0630) Go away received\nI0316 13:22:06.897983 1605 log.go:172] (0xc0009a0630) (0xc000a42000) Stream removed, broadcasting: 3\nI0316 13:22:06.898051 1605 log.go:172] (0xc0009a0630) (0xc0009b6000) Stream removed, broadcasting: 5\n" Mar 16 13:22:06.901: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 16 13:22:06.901: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 16 13:22:06.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9565 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 16 13:22:07.107: INFO: stderr: "I0316 13:22:07.020570 1628 log.go:172] (0xc000a4a420) (0xc000a48640) Create stream\nI0316 13:22:07.020660 1628 log.go:172] (0xc000a4a420) (0xc000a48640) Stream added, broadcasting: 1\nI0316 13:22:07.023647 1628 log.go:172] (0xc000a4a420) Reply frame received for 1\nI0316 13:22:07.023701 1628 log.go:172] (0xc000a4a420) (0xc0009ba000) Create stream\nI0316 13:22:07.023729 1628 log.go:172] (0xc000a4a420) (0xc0009ba000) Stream added, broadcasting: 3\nI0316 13:22:07.024608 1628 log.go:172] (0xc000a4a420) Reply frame received for 3\nI0316 13:22:07.024630 1628 log.go:172] (0xc000a4a420) (0xc0009ba0a0) Create stream\nI0316 13:22:07.024637 1628 log.go:172] (0xc000a4a420) (0xc0009ba0a0) Stream added, broadcasting: 5\nI0316 13:22:07.025569 1628 log.go:172] (0xc000a4a420) Reply frame received for 5\nI0316 13:22:07.066062 1628 log.go:172] (0xc000a4a420) Data frame received for 5\nI0316 13:22:07.066087 1628 log.go:172] (0xc0009ba0a0) (5) Data frame handling\nI0316 13:22:07.066113 1628 log.go:172] (0xc0009ba0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0316 13:22:07.100052 1628 log.go:172] (0xc000a4a420) Data frame received for 3\nI0316 13:22:07.100090 1628 log.go:172] (0xc0009ba000) (3) Data frame handling\nI0316 13:22:07.100210 1628 log.go:172] (0xc0009ba000) (3) Data frame sent\nI0316 13:22:07.100244 1628 log.go:172] (0xc000a4a420) Data frame received for 3\nI0316 13:22:07.100260 1628 log.go:172] (0xc0009ba000) (3) Data frame handling\nI0316 13:22:07.100561 1628 log.go:172] (0xc000a4a420) Data frame received for 5\nI0316 13:22:07.100584 1628 log.go:172] (0xc0009ba0a0) (5) Data frame handling\nI0316 13:22:07.102362 1628 log.go:172] (0xc000a4a420) Data frame received for 1\nI0316 13:22:07.102483 1628 log.go:172] (0xc000a48640) (1) Data frame handling\nI0316 13:22:07.102522 1628 log.go:172] (0xc000a48640) (1) Data frame sent\nI0316 13:22:07.102542 1628 log.go:172] (0xc000a4a420) (0xc000a48640) Stream removed, broadcasting: 1\nI0316 13:22:07.102855 1628 log.go:172] (0xc000a4a420) Go away received\nI0316 13:22:07.103105 1628 log.go:172] (0xc000a4a420) (0xc000a48640) Stream removed, broadcasting: 1\nI0316 13:22:07.103136 1628 log.go:172] (0xc000a4a420) (0xc0009ba000) Stream removed, broadcasting: 3\nI0316 13:22:07.103153 1628 log.go:172] (0xc000a4a420) (0xc0009ba0a0) Stream removed, broadcasting: 5\n" Mar 16 13:22:07.107: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 16 13:22:07.107: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 16 13:22:07.107: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 13:22:07.116: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Mar 16 13:22:17.125: INFO: Waiting for pod ss-0 
to enter Running - Ready=false, currently Running - Ready=false Mar 16 13:22:17.125: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 16 13:22:17.125: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 16 13:22:17.142: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 13:22:17.142: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:22:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:22:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:25 +0000 UTC }] Mar 16 13:22:17.142: INFO: ss-1 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:22:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:22:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:45 +0000 UTC }] Mar 16 13:22:17.142: INFO: ss-2 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:45 +0000 UTC }] Mar 16 13:22:17.142: INFO: Mar 16 13:22:17.142: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 16 13:22:18.148: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 13:22:18.148: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:22:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:22:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:25 +0000 UTC }] Mar 16 13:22:18.148: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:22:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:22:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:45 +0000 UTC }] Mar 16 13:22:18.148: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:45 +0000 UTC }] Mar 16 13:22:18.148: INFO: Mar 16 13:22:18.148: INFO: StatefulSet ss has not 
reached scale 0, at 3 Mar 16 13:22:19.153: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 13:22:19.153: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:22:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:22:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:25 +0000 UTC }] Mar 16 13:22:19.154: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:22:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:22:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:45 +0000 UTC }] Mar 16 13:22:19.154: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:45 +0000 UTC }] Mar 16 13:22:19.154: INFO: Mar 16 13:22:19.154: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 16 13:22:20.158: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 13:22:20.158: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:22:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:22:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:45 +0000 UTC }] Mar 16 13:22:20.158: INFO: Mar 16 13:22:20.158: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 16 13:22:21.163: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 13:22:21.163: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:22:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:22:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:21:45 +0000 UTC }] Mar 16 13:22:21.163: INFO: Mar 16 13:22:21.163: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 16 13:22:22.172: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.969077154s Mar 16 13:22:23.176: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.960224201s Mar 16 13:22:24.181: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.955827067s Mar 16 13:22:25.185: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.951416924s Mar 16 13:22:26.189: INFO: Verifying statefulset ss doesn't scale past 0 for another 947.187409ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will 
run in namespace statefulset-9565 Mar 16 13:22:27.194: INFO: Scaling statefulset ss to 0 Mar 16 13:22:27.204: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 16 13:22:27.207: INFO: Deleting all statefulset in ns statefulset-9565 Mar 16 13:22:27.210: INFO: Scaling statefulset ss to 0 Mar 16 13:22:27.217: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 13:22:27.219: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:22:27.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9565" for this suite. Mar 16 13:22:33.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:22:33.373: INFO: namespace statefulset-9565 deletion completed in 6.11910192s • [SLOW TEST:68.454 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:22:33.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-6c0e1114-86e1-4f77-b717-a3a13d006e39 STEP: Creating a pod to test consume secrets Mar 16 13:22:33.443: INFO: Waiting up to 5m0s for pod "pod-secrets-3503ba6d-6879-45cc-aa6d-19e833a69856" in namespace "secrets-4220" to be "success or failure" Mar 16 13:22:33.448: INFO: Pod "pod-secrets-3503ba6d-6879-45cc-aa6d-19e833a69856": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041816ms Mar 16 13:22:35.452: INFO: Pod "pod-secrets-3503ba6d-6879-45cc-aa6d-19e833a69856": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008096135s Mar 16 13:22:37.456: INFO: Pod "pod-secrets-3503ba6d-6879-45cc-aa6d-19e833a69856": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012183392s STEP: Saw pod success Mar 16 13:22:37.456: INFO: Pod "pod-secrets-3503ba6d-6879-45cc-aa6d-19e833a69856" satisfied condition "success or failure" Mar 16 13:22:37.461: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-3503ba6d-6879-45cc-aa6d-19e833a69856 container secret-volume-test: STEP: delete the pod Mar 16 13:22:37.492: INFO: Waiting for pod pod-secrets-3503ba6d-6879-45cc-aa6d-19e833a69856 to disappear Mar 16 13:22:37.511: INFO: Pod pod-secrets-3503ba6d-6879-45cc-aa6d-19e833a69856 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:22:37.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4220" for this suite. Mar 16 13:22:43.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:22:43.606: INFO: namespace secrets-4220 deletion completed in 6.091330565s • [SLOW TEST:10.233 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:22:43.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 16 13:22:43.790: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8025096a-e2f0-4f19-a5f2-7e114a82fd09" in namespace "downward-api-9833" to be "success or failure" Mar 16 13:22:43.938: INFO: Pod "downwardapi-volume-8025096a-e2f0-4f19-a5f2-7e114a82fd09": Phase="Pending", Reason="", readiness=false. Elapsed: 148.452542ms Mar 16 13:22:45.943: INFO: Pod "downwardapi-volume-8025096a-e2f0-4f19-a5f2-7e114a82fd09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152861809s Mar 16 13:22:47.947: INFO: Pod "downwardapi-volume-8025096a-e2f0-4f19-a5f2-7e114a82fd09": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.157203045s STEP: Saw pod success Mar 16 13:22:47.947: INFO: Pod "downwardapi-volume-8025096a-e2f0-4f19-a5f2-7e114a82fd09" satisfied condition "success or failure" Mar 16 13:22:47.950: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-8025096a-e2f0-4f19-a5f2-7e114a82fd09 container client-container: STEP: delete the pod Mar 16 13:22:47.979: INFO: Waiting for pod downwardapi-volume-8025096a-e2f0-4f19-a5f2-7e114a82fd09 to disappear Mar 16 13:22:47.990: INFO: Pod downwardapi-volume-8025096a-e2f0-4f19-a5f2-7e114a82fd09 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:22:47.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9833" for this suite. Mar 16 13:22:54.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:22:54.110: INFO: namespace downward-api-9833 deletion completed in 6.116134825s • [SLOW TEST:10.503 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:22:54.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 16 13:22:54.184: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:23:00.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3857" for this suite. 
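For reference, the shape of pod spec the init-container test above exercises looks roughly like the following; every entry in spec.initContainers must run to completion, in order, before the app container starts on a RestartAlways pod. A minimal sketch (pod name, images and commands are illustrative, not taken from the log):

    # init-demo.yaml -- hypothetical manifest
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-demo
    spec:
      restartPolicy: Always            # the RestartAlways case tested above
      initContainers:
      - name: init-1
        image: busybox:1.29
        command: ['sh', '-c', 'true']  # must exit 0 before init-2 starts
      - name: init-2
        image: busybox:1.29
        command: ['sh', '-c', 'true']
      containers:
      - name: app
        image: busybox:1.29
        command: ['sh', '-c', 'sleep 3600']

    kubectl apply -f init-demo.yaml
    kubectl get pod init-demo    # STATUS passes through Init:0/2 and Init:1/2 before Running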
Mar 16 13:23:22.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:23:22.921: INFO: namespace init-container-3857 deletion completed in 22.092872975s • [SLOW TEST:28.811 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:23:22.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 16 13:23:23.011: INFO: Create a RollingUpdate DaemonSet Mar 16 13:23:23.014: INFO: Check that daemon pods launch on every node of the cluster Mar 16 13:23:23.033: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:23:23.044: INFO: Number of nodes with available pods: 0 Mar 16 13:23:23.044: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:23:24.049: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:23:24.052: INFO: Number of nodes with available pods: 0 Mar 16 13:23:24.052: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:23:25.735: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:23:25.776: INFO: Number of nodes with available pods: 0 Mar 16 13:23:25.776: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:23:26.050: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:23:26.053: INFO: Number of nodes with available pods: 0 Mar 16 13:23:26.053: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:23:27.050: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:23:27.053: INFO: Number of nodes with available pods: 0 Mar 16 13:23:27.053: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:23:28.048: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:23:28.051: INFO: Number 
of nodes with available pods: 0 Mar 16 13:23:28.051: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:23:29.049: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:23:29.052: INFO: Number of nodes with available pods: 2 Mar 16 13:23:29.052: INFO: Number of running nodes: 2, number of available pods: 2 Mar 16 13:23:29.052: INFO: Update the DaemonSet to trigger a rollout Mar 16 13:23:29.058: INFO: Updating DaemonSet daemon-set Mar 16 13:23:32.250: INFO: Roll back the DaemonSet before rollout is complete Mar 16 13:23:32.256: INFO: Updating DaemonSet daemon-set Mar 16 13:23:32.256: INFO: Make sure DaemonSet rollback is complete Mar 16 13:23:32.263: INFO: Wrong image for pod: daemon-set-gxgff. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Mar 16 13:23:32.263: INFO: Pod daemon-set-gxgff is not available Mar 16 13:23:32.300: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:23:33.305: INFO: Wrong image for pod: daemon-set-gxgff. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Mar 16 13:23:33.305: INFO: Pod daemon-set-gxgff is not available Mar 16 13:23:33.309: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:23:34.337: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:23:35.305: INFO: Pod daemon-set-kr4mm is not available Mar 16 13:23:35.309: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5637, will wait for the garbage collector to delete the pods Mar 16 13:23:35.375: INFO: Deleting DaemonSet.extensions daemon-set took: 6.663042ms Mar 16 13:23:35.675: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.288207ms Mar 16 13:23:42.193: INFO: Number of nodes with available pods: 0 Mar 16 13:23:42.193: INFO: Number of running nodes: 0, number of available pods: 0 Mar 16 13:23:42.195: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5637/daemonsets","resourceVersion":"159698"},"items":null} Mar 16 13:23:42.212: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5637/pods","resourceVersion":"159698"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:23:42.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5637" for this suite. 
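The rollback test above creates a RollingUpdate DaemonSet, pushes a broken image (foo:non-existent), and undoes the rollout before it completes. Roughly the same sequence can be reproduced with kubectl; the manifest and the container name "app" are assumptions, only the two images come from the log:

    # daemon-set.yaml -- hypothetical manifest
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
    spec:
      selector:
        matchLabels:
          app: daemon-set
      updateStrategy:
        type: RollingUpdate
      template:
        metadata:
          labels:
            app: daemon-set
        spec:
          containers:
          - name: app
            image: docker.io/library/nginx:1.14-alpine

    kubectl apply -f daemon-set.yaml
    kubectl set image daemonset/daemon-set app=foo:non-existent   # trigger a rollout that can never become healthy
    kubectl rollout undo daemonset/daemon-set                     # roll back before the rollout finishes
    kubectl rollout status daemonset/daemon-set                   # converges back on nginx:1.14-alpine

Pods that never ran the broken image are left untouched, which is the "without unnecessary restarts" property being checked.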
Mar 16 13:23:48.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:23:48.316: INFO: namespace daemonsets-5637 deletion completed in 6.090112495s • [SLOW TEST:25.394 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:23:48.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Mar 16 13:23:48.491: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix941852598/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:23:48.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5017" for this suite. 
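The proxy test above starts kubectl proxy on a Unix domain socket instead of a TCP port and then fetches /api/ through it. A minimal sketch of the same check (the socket path is illustrative; curl needs --unix-socket support, available since 7.40):

    kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
    curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/   # host part is ignored; expect an APIVersions object
    kill %1    # stop the background proxy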
Mar 16 13:23:54.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:23:54.684: INFO: namespace kubectl-5017 deletion completed in 6.111029811s • [SLOW TEST:6.368 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:23:54.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 16 13:23:55.024: INFO: Waiting up to 5m0s for pod "pod-703e37ca-2d73-46ae-a679-998d6c994495" in namespace "emptydir-291" to be "success or failure" Mar 16 13:23:55.052: INFO: Pod "pod-703e37ca-2d73-46ae-a679-998d6c994495": Phase="Pending", Reason="", readiness=false. Elapsed: 28.296853ms Mar 16 13:23:57.056: INFO: Pod "pod-703e37ca-2d73-46ae-a679-998d6c994495": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031948844s Mar 16 13:23:59.060: INFO: Pod "pod-703e37ca-2d73-46ae-a679-998d6c994495": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035847094s STEP: Saw pod success Mar 16 13:23:59.060: INFO: Pod "pod-703e37ca-2d73-46ae-a679-998d6c994495" satisfied condition "success or failure" Mar 16 13:23:59.063: INFO: Trying to get logs from node iruya-worker pod pod-703e37ca-2d73-46ae-a679-998d6c994495 container test-container: STEP: delete the pod Mar 16 13:23:59.131: INFO: Waiting for pod pod-703e37ca-2d73-46ae-a679-998d6c994495 to disappear Mar 16 13:23:59.147: INFO: Pod pod-703e37ca-2d73-46ae-a679-998d6c994495 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:23:59.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-291" for this suite. 
Mar 16 13:24:05.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:24:05.286: INFO: namespace emptydir-291 deletion completed in 6.13597224s • [SLOW TEST:10.602 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:24:05.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-3432 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 16 13:24:05.351: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 16 13:24:27.435: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.166:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3432 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:24:27.435: INFO: >>> kubeConfig: /root/.kube/config I0316 13:24:27.459686 6 log.go:172] (0xc000b15a20) (0xc002f02460) Create stream I0316 13:24:27.459713 6 log.go:172] (0xc000b15a20) (0xc002f02460) Stream added, broadcasting: 1 I0316 13:24:27.461207 6 log.go:172] (0xc000b15a20) Reply frame received for 1 I0316 13:24:27.461246 6 log.go:172] (0xc000b15a20) (0xc002f02500) Create stream I0316 13:24:27.461254 6 log.go:172] (0xc000b15a20) (0xc002f02500) Stream added, broadcasting: 3 I0316 13:24:27.462289 6 log.go:172] (0xc000b15a20) Reply frame received for 3 I0316 13:24:27.462329 6 log.go:172] (0xc000b15a20) (0xc0011780a0) Create stream I0316 13:24:27.462343 6 log.go:172] (0xc000b15a20) (0xc0011780a0) Stream added, broadcasting: 5 I0316 13:24:27.463172 6 log.go:172] (0xc000b15a20) Reply frame received for 5 I0316 13:24:27.531162 6 log.go:172] (0xc000b15a20) Data frame received for 3 I0316 13:24:27.531219 6 log.go:172] (0xc002f02500) (3) Data frame handling I0316 13:24:27.531240 6 log.go:172] (0xc002f02500) (3) Data frame sent I0316 13:24:27.531668 6 log.go:172] (0xc000b15a20) Data frame received for 5 I0316 13:24:27.531730 6 log.go:172] (0xc0011780a0) (5) Data frame handling I0316 13:24:27.531755 6 log.go:172] (0xc000b15a20) Data frame received for 3 I0316 13:24:27.531797 6 log.go:172] (0xc002f02500) (3) Data frame handling I0316 13:24:27.532823 6 log.go:172] (0xc000b15a20) Data frame received for 1 I0316 13:24:27.532839 6 log.go:172] (0xc002f02460) (1) Data frame handling I0316 13:24:27.532849 6 log.go:172] (0xc002f02460) (1) 
Data frame sent I0316 13:24:27.532861 6 log.go:172] (0xc000b15a20) (0xc002f02460) Stream removed, broadcasting: 1 I0316 13:24:27.532885 6 log.go:172] (0xc000b15a20) Go away received I0316 13:24:27.532914 6 log.go:172] (0xc000b15a20) (0xc002f02460) Stream removed, broadcasting: 1 I0316 13:24:27.532921 6 log.go:172] (0xc000b15a20) (0xc002f02500) Stream removed, broadcasting: 3 I0316 13:24:27.532928 6 log.go:172] (0xc000b15a20) (0xc0011780a0) Stream removed, broadcasting: 5 Mar 16 13:24:27.532: INFO: Found all expected endpoints: [netserver-0] Mar 16 13:24:27.535: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.179:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3432 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:24:27.535: INFO: >>> kubeConfig: /root/.kube/config I0316 13:24:27.556646 6 log.go:172] (0xc000c2ac60) (0xc002f028c0) Create stream I0316 13:24:27.556669 6 log.go:172] (0xc000c2ac60) (0xc002f028c0) Stream added, broadcasting: 1 I0316 13:24:27.558188 6 log.go:172] (0xc000c2ac60) Reply frame received for 1 I0316 13:24:27.558223 6 log.go:172] (0xc000c2ac60) (0xc002f02960) Create stream I0316 13:24:27.558235 6 log.go:172] (0xc000c2ac60) (0xc002f02960) Stream added, broadcasting: 3 I0316 13:24:27.559017 6 log.go:172] (0xc000c2ac60) Reply frame received for 3 I0316 13:24:27.559053 6 log.go:172] (0xc000c2ac60) (0xc002f02a00) Create stream I0316 13:24:27.559070 6 log.go:172] (0xc000c2ac60) (0xc002f02a00) Stream added, broadcasting: 5 I0316 13:24:27.559832 6 log.go:172] (0xc000c2ac60) Reply frame received for 5 I0316 13:24:27.625184 6 log.go:172] (0xc000c2ac60) Data frame received for 3 I0316 13:24:27.625236 6 log.go:172] (0xc002f02960) (3) Data frame handling I0316 13:24:27.625243 6 log.go:172] (0xc002f02960) (3) Data frame sent I0316 13:24:27.625434 6 log.go:172] (0xc000c2ac60) Data frame received for 3 I0316 13:24:27.625443 6 log.go:172] (0xc002f02960) (3) Data frame handling I0316 13:24:27.625464 6 log.go:172] (0xc000c2ac60) Data frame received for 5 I0316 13:24:27.625501 6 log.go:172] (0xc002f02a00) (5) Data frame handling I0316 13:24:27.626664 6 log.go:172] (0xc000c2ac60) Data frame received for 1 I0316 13:24:27.626699 6 log.go:172] (0xc002f028c0) (1) Data frame handling I0316 13:24:27.626801 6 log.go:172] (0xc002f028c0) (1) Data frame sent I0316 13:24:27.626830 6 log.go:172] (0xc000c2ac60) (0xc002f028c0) Stream removed, broadcasting: 1 I0316 13:24:27.626871 6 log.go:172] (0xc000c2ac60) Go away received I0316 13:24:27.626984 6 log.go:172] (0xc000c2ac60) (0xc002f028c0) Stream removed, broadcasting: 1 I0316 13:24:27.627023 6 log.go:172] (0xc000c2ac60) (0xc002f02960) Stream removed, broadcasting: 3 I0316 13:24:27.627054 6 log.go:172] (0xc000c2ac60) (0xc002f02a00) Stream removed, broadcasting: 5 Mar 16 13:24:27.627: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:24:27.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3432" for this suite. 
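The network check above is just an HTTP fetch of each netserver pod's /hostName endpoint from a helper pod; the exec'd command is visible verbatim in the ExecWithOptions lines. Run by hand it looks like this (pod name, namespace and pod IP are taken from the log; the IP is only valid for that run):

    kubectl exec --namespace=pod-network-test-3432 host-test-container-pod -- \
      /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.166:8080/hostName"
    # expected output: the name of the serving pod, e.g. netserver-0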
Mar 16 13:24:45.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:24:45.728: INFO: namespace pod-network-test-3432 deletion completed in 18.097123687s • [SLOW TEST:40.441 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:24:45.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 16 13:24:45.871: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 16 13:24:45.884: INFO: Number of nodes with available pods: 0 Mar 16 13:24:45.884: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Mar 16 13:24:45.992: INFO: Number of nodes with available pods: 0 Mar 16 13:24:45.992: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:24:46.995: INFO: Number of nodes with available pods: 0 Mar 16 13:24:46.995: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:24:47.996: INFO: Number of nodes with available pods: 0 Mar 16 13:24:47.996: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:24:48.996: INFO: Number of nodes with available pods: 0 Mar 16 13:24:48.996: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:24:49.996: INFO: Number of nodes with available pods: 1 Mar 16 13:24:49.996: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 16 13:24:50.027: INFO: Number of nodes with available pods: 1 Mar 16 13:24:50.027: INFO: Number of running nodes: 0, number of available pods: 1 Mar 16 13:24:51.032: INFO: Number of nodes with available pods: 0 Mar 16 13:24:51.032: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 16 13:24:51.044: INFO: Number of nodes with available pods: 0 Mar 16 13:24:51.044: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:24:52.048: INFO: Number of nodes with available pods: 0 Mar 16 13:24:52.048: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:24:53.052: INFO: Number of nodes with available pods: 0 Mar 16 13:24:53.052: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:24:54.048: INFO: Number of nodes with available pods: 0 Mar 16 13:24:54.048: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:24:55.048: INFO: Number of nodes with available pods: 0 Mar 16 13:24:55.048: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:24:56.048: INFO: Number of nodes with available pods: 0 Mar 16 13:24:56.048: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:24:57.048: INFO: Number of nodes with available pods: 0 Mar 16 13:24:57.048: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:24:58.048: INFO: Number of nodes with available pods: 0 Mar 16 13:24:58.048: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:24:59.048: INFO: Number of nodes with available pods: 0 Mar 16 13:24:59.048: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:25:00.048: INFO: Number of nodes with available pods: 0 Mar 16 13:25:00.048: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:25:01.052: INFO: Number of nodes with available pods: 0 Mar 16 13:25:01.052: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:25:02.048: INFO: Number of nodes with available pods: 0 Mar 16 13:25:02.048: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:25:03.090: INFO: Number of nodes with available pods: 0 Mar 16 13:25:03.090: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:25:04.048: INFO: Number of nodes with available pods: 0 Mar 16 13:25:04.048: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:25:05.048: INFO: Number of nodes with available pods: 0 Mar 16 13:25:05.048: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:25:06.048: INFO: Number of nodes with available pods: 1 Mar 16 13:25:06.048: INFO: Number of running nodes: 1, number of available 
pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6216, will wait for the garbage collector to delete the pods Mar 16 13:25:06.110: INFO: Deleting DaemonSet.extensions daemon-set took: 4.888299ms Mar 16 13:25:06.411: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.200445ms Mar 16 13:25:12.314: INFO: Number of nodes with available pods: 0 Mar 16 13:25:12.314: INFO: Number of running nodes: 0, number of available pods: 0 Mar 16 13:25:12.316: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6216/daemonsets","resourceVersion":"160052"},"items":null} Mar 16 13:25:12.318: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6216/pods","resourceVersion":"160052"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:25:12.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6216" for this suite. Mar 16 13:25:18.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:25:18.588: INFO: namespace daemonsets-6216 deletion completed in 6.154288808s • [SLOW TEST:32.860 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:25:18.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 16 13:25:18.731: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:25:28.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7631" for this suite. 
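The RestartNever variant just created is the failure path: when an init container exits non-zero on a pod with restartPolicy: Never, the pod goes Failed and the app container must never start. A minimal sketch (names and image are illustrative):

    # init-fail-demo.yaml -- hypothetical manifest
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-fail-demo
    spec:
      restartPolicy: Never
      initContainers:
      - name: init-fail
        image: busybox:1.29
        command: ['sh', '-c', 'exit 1']   # always fails
      containers:
      - name: app
        image: busybox:1.29
        command: ['sh', '-c', 'echo should never run']

    kubectl apply -f init-fail-demo.yaml
    kubectl get pod init-fail-demo    # STATUS shows Init:Error; the app container is never started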
Mar 16 13:25:34.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:25:34.365: INFO: namespace init-container-7631 deletion completed in 6.205875423s • [SLOW TEST:15.776 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:25:34.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:25:34.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9664" for this suite. Mar 16 13:25:40.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:25:40.715: INFO: namespace services-9664 deletion completed in 6.127886531s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.349 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:25:40.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-8afbc726-538e-48fa-a8bf-cc7d42082847 in namespace container-probe-7759 Mar 16 13:25:47.047: 
INFO: Started pod busybox-8afbc726-538e-48fa-a8bf-cc7d42082847 in namespace container-probe-7759 STEP: checking the pod's current state and verifying that restartCount is present Mar 16 13:25:47.050: INFO: Initial restart count of pod busybox-8afbc726-538e-48fa-a8bf-cc7d42082847 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:29:47.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7759" for this suite. Mar 16 13:29:53.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:29:53.519: INFO: namespace container-probe-7759 deletion completed in 6.272271625s • [SLOW TEST:252.803 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:29:53.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Mar 16 13:29:54.146: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:29:54.157: INFO: Number of nodes with available pods: 0 Mar 16 13:29:54.157: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:29:55.162: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:29:55.164: INFO: Number of nodes with available pods: 0 Mar 16 13:29:55.164: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:29:56.182: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:29:56.186: INFO: Number of nodes with available pods: 0 Mar 16 13:29:56.186: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:29:57.161: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:29:57.164: INFO: Number of nodes with available pods: 0 Mar 16 13:29:57.164: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:29:58.261: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:29:58.264: INFO: Number of nodes with available pods: 0 Mar 16 13:29:58.264: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:29:59.162: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:29:59.165: INFO: Number of nodes with available pods: 1 Mar 16 13:29:59.165: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:30:00.161: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:30:00.165: INFO: Number of nodes with available pods: 2 Mar 16 13:30:00.165: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Mar 16 13:30:00.329: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:30:00.355: INFO: Number of nodes with available pods: 1 Mar 16 13:30:00.355: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:30:01.360: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:30:01.363: INFO: Number of nodes with available pods: 1 Mar 16 13:30:01.363: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:30:02.380: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:30:02.384: INFO: Number of nodes with available pods: 1 Mar 16 13:30:02.384: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:30:03.360: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:30:03.362: INFO: Number of nodes with available pods: 1 Mar 16 13:30:03.362: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:30:04.488: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:30:04.686: INFO: Number of nodes with available pods: 1 Mar 16 13:30:04.686: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:30:05.360: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:30:05.362: INFO: Number of nodes with available pods: 1 Mar 16 13:30:05.362: INFO: Node iruya-worker is running more than one daemon pod Mar 16 13:30:06.359: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:30:06.362: INFO: Number of nodes with available pods: 2 Mar 16 13:30:06.362: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
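Waiting for the failed pod to be completely deleted is a plain poll until Get returns NotFound. A sketch, with hypothetical interval and timeout values:

package daemonsketch

import (
	"time"

	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodGone polls until the named pod no longer exists.
func waitForPodGone(c kubernetes.Interface, ns, podName string) error {
	return wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		_, err := c.CoreV1().Pods(ns).Get(podName, metav1.GetOptions{})
		if errors.IsNotFound(err) {
			return true, nil // fully deleted
		}
		return false, err // still present (err == nil) or a real error
	})
}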
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2558, will wait for the garbage collector to delete the pods Mar 16 13:30:06.424: INFO: Deleting DaemonSet.extensions daemon-set took: 6.388501ms Mar 16 13:30:06.725: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.335941ms Mar 16 13:30:12.362: INFO: Number of nodes with available pods: 0 Mar 16 13:30:12.362: INFO: Number of running nodes: 0, number of available pods: 0 Mar 16 13:30:12.365: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2558/daemonsets","resourceVersion":"160774"},"items":null} Mar 16 13:30:12.397: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2558/pods","resourceVersion":"160775"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:30:12.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2558" for this suite. Mar 16 13:30:20.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:30:20.661: INFO: namespace daemonsets-2558 deletion completed in 8.254090114s • [SLOW TEST:27.142 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:30:20.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-e95fa3ff-eeb0-4988-b975-44e9df25615b [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:30:20.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4144" for this suite. 
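What the Secrets test exercises is API-server validation: a Secret whose data map uses the empty string as a key is rejected at create time. A sketch (secret name illustrative):

package secretsketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createEmptyKeySecret attempts to create a Secret with "" as a data key.
// The apiserver returns a validation (Invalid) error, which is what the
// test asserts -- no Secret object is ever persisted.
func createEmptyKeySecret(c kubernetes.Interface, ns string) error {
	s := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-test", Namespace: ns},
		Data:       map[string][]byte{"": []byte("value-1")},
	}
	_, err := c.CoreV1().Secrets(ns).Create(s)
	return err // expected non-nil: validation rejects the empty key
}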
Mar 16 13:30:26.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:30:27.066: INFO: namespace secrets-4144 deletion completed in 6.105861463s • [SLOW TEST:6.405 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:30:27.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 16 13:30:27.304: INFO: Waiting up to 5m0s for pod "pod-073fb937-087a-4e36-bdc4-c3eebb5c8d26" in namespace "emptydir-4101" to be "success or failure" Mar 16 13:30:27.395: INFO: Pod "pod-073fb937-087a-4e36-bdc4-c3eebb5c8d26": Phase="Pending", Reason="", readiness=false. Elapsed: 90.860085ms Mar 16 13:30:29.399: INFO: Pod "pod-073fb937-087a-4e36-bdc4-c3eebb5c8d26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094737742s Mar 16 13:30:31.403: INFO: Pod "pod-073fb937-087a-4e36-bdc4-c3eebb5c8d26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098240012s Mar 16 13:30:33.536: INFO: Pod "pod-073fb937-087a-4e36-bdc4-c3eebb5c8d26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.231761186s STEP: Saw pod success Mar 16 13:30:33.536: INFO: Pod "pod-073fb937-087a-4e36-bdc4-c3eebb5c8d26" satisfied condition "success or failure" Mar 16 13:30:33.539: INFO: Trying to get logs from node iruya-worker pod pod-073fb937-087a-4e36-bdc4-c3eebb5c8d26 container test-container: STEP: delete the pod Mar 16 13:30:33.777: INFO: Waiting for pod pod-073fb937-087a-4e36-bdc4-c3eebb5c8d26 to disappear Mar 16 13:30:33.822: INFO: Pod pod-073fb937-087a-4e36-bdc4-c3eebb5c8d26 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:30:33.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4101" for this suite. 
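The emptydir pod being waited on above is one that mounts a memory-backed emptyDir, prints the mount's mode, and exits, so the pod reaches Succeeded. A sketch of such a pod, using busybox as a stand-in for the suite's mounttest image:

package emptydirsketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tmpfsEmptyDirPod mounts an emptyDir with Medium "Memory" (i.e. tmpfs) and
// prints its permissions; the conformance check expects mode 0777 here.
func tmpfsEmptyDirPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // run once, then Succeeded
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						Medium: corev1.StorageMediumMemory, // tmpfs
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // stand-in for the e2e mounttest image
				Command: []string{"sh", "-c", "stat -c '%a' /test-volume; mount | grep /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "test-volume", MountPath: "/test-volume",
				}},
			}},
		},
	}
}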
Mar 16 13:30:42.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:30:42.096: INFO: namespace emptydir-4101 deletion completed in 8.270974504s • [SLOW TEST:15.030 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:30:42.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 16 13:30:42.434: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4758,SelfLink:/api/v1/namespaces/watch-4758/configmaps/e2e-watch-test-configmap-a,UID:34889d0e-822e-4fce-b803-0d8d99cb05e2,ResourceVersion:160894,Generation:0,CreationTimestamp:2020-03-16 13:30:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 16 13:30:42.435: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4758,SelfLink:/api/v1/namespaces/watch-4758/configmaps/e2e-watch-test-configmap-a,UID:34889d0e-822e-4fce-b803-0d8d99cb05e2,ResourceVersion:160894,Generation:0,CreationTimestamp:2020-03-16 13:30:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 16 13:30:52.443: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4758,SelfLink:/api/v1/namespaces/watch-4758/configmaps/e2e-watch-test-configmap-a,UID:34889d0e-822e-4fce-b803-0d8d99cb05e2,ResourceVersion:160916,Generation:0,CreationTimestamp:2020-03-16 13:30:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 16 13:30:52.443: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4758,SelfLink:/api/v1/namespaces/watch-4758/configmaps/e2e-watch-test-configmap-a,UID:34889d0e-822e-4fce-b803-0d8d99cb05e2,ResourceVersion:160916,Generation:0,CreationTimestamp:2020-03-16 13:30:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 16 13:31:02.449: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4758,SelfLink:/api/v1/namespaces/watch-4758/configmaps/e2e-watch-test-configmap-a,UID:34889d0e-822e-4fce-b803-0d8d99cb05e2,ResourceVersion:160936,Generation:0,CreationTimestamp:2020-03-16 13:30:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 16 13:31:02.450: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4758,SelfLink:/api/v1/namespaces/watch-4758/configmaps/e2e-watch-test-configmap-a,UID:34889d0e-822e-4fce-b803-0d8d99cb05e2,ResourceVersion:160936,Generation:0,CreationTimestamp:2020-03-16 13:30:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 16 13:31:12.456: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4758,SelfLink:/api/v1/namespaces/watch-4758/configmaps/e2e-watch-test-configmap-a,UID:34889d0e-822e-4fce-b803-0d8d99cb05e2,ResourceVersion:160956,Generation:0,CreationTimestamp:2020-03-16 13:30:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 16 13:31:12.456: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4758,SelfLink:/api/v1/namespaces/watch-4758/configmaps/e2e-watch-test-configmap-a,UID:34889d0e-822e-4fce-b803-0d8d99cb05e2,ResourceVersion:160956,Generation:0,CreationTimestamp:2020-03-16 13:30:42 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 16 13:31:22.463: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4758,SelfLink:/api/v1/namespaces/watch-4758/configmaps/e2e-watch-test-configmap-b,UID:7428f806-e751-45c3-a23f-5a794b6fc3e6,ResourceVersion:160977,Generation:0,CreationTimestamp:2020-03-16 13:31:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 16 13:31:22.464: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4758,SelfLink:/api/v1/namespaces/watch-4758/configmaps/e2e-watch-test-configmap-b,UID:7428f806-e751-45c3-a23f-5a794b6fc3e6,ResourceVersion:160977,Generation:0,CreationTimestamp:2020-03-16 13:31:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 16 13:31:32.471: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4758,SelfLink:/api/v1/namespaces/watch-4758/configmaps/e2e-watch-test-configmap-b,UID:7428f806-e751-45c3-a23f-5a794b6fc3e6,ResourceVersion:160995,Generation:0,CreationTimestamp:2020-03-16 13:31:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 16 13:31:32.471: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4758,SelfLink:/api/v1/namespaces/watch-4758/configmaps/e2e-watch-test-configmap-b,UID:7428f806-e751-45c3-a23f-5a794b6fc3e6,ResourceVersion:160995,Generation:0,CreationTimestamp:2020-03-16 13:31:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:31:42.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4758" for this suite. 
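Each of the three watches above is simply a ConfigMap watch with a different label selector; the A-only watcher and the A-or-B watcher both match configmap A, which is why every notification for it is logged twice. A sketch of one such watch (selector value illustrative, e.g. "watch-this-configmap=multiple-watchers-A"):

package watchsketch

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchConfigMapsByLabel opens a label-filtered watch and prints each event,
// much like the "Got : ADDED/MODIFIED/DELETED" lines above.
func watchConfigMapsByLabel(c kubernetes.Interface, ns, selector string) error {
	w, err := c.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
	return nil
}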
Mar 16 13:31:49.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:31:49.558: INFO: namespace watch-4758 deletion completed in 6.672732139s • [SLOW TEST:67.463 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:31:49.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-ee3b6464-f2d3-4f98-a6c8-90bbca27a241 STEP: Creating secret with name s-test-opt-upd-b9cda127-15fd-4ef0-bfc0-1b6a97b431a5 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-ee3b6464-f2d3-4f98-a6c8-90bbca27a241 STEP: Updating secret s-test-opt-upd-b9cda127-15fd-4ef0-bfc0-1b6a97b431a5 STEP: Creating secret with name s-test-opt-create-d2bd8673-d6ed-4856-a3d1-dcf7fae7f2da STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:33:19.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7021" for this suite. 
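The projected-secret test hinges on the Optional flag: a projected volume may reference a Secret that does not (yet) exist, the pod still starts, and the kubelet refreshes the mounted files as secrets are deleted, updated, and created -- the behavior the "waiting to observe update in volume" step polls for. A sketch of one optional source (volume and secret names illustrative):

package projectedsketch

import corev1 "k8s.io/api/core/v1"

// optionalProjectedSecretVolume builds a projected volume whose single
// source is an *optional* Secret: absence is tolerated at mount time, and
// later changes to the Secret propagate into the mounted files.
func optionalProjectedSecretVolume(secretName string) corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
						Optional:             &optional,
					},
				}},
			},
		},
	}
}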
Mar 16 13:33:41.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:33:41.832: INFO: namespace projected-7021 deletion completed in 22.235911942s • [SLOW TEST:112.273 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:33:41.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-662 I0316 13:33:42.029509 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-662, replica count: 1 I0316 13:33:43.079969 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 13:33:44.080221 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 13:33:45.080489 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 13:33:46.080721 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 13:33:47.080942 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 16 13:33:47.344: INFO: Created: latency-svc-kmslr Mar 16 13:33:47.479: INFO: Got endpoints: latency-svc-kmslr [297.982256ms] Mar 16 13:33:47.629: INFO: Created: latency-svc-xlmg4 Mar 16 13:33:47.641: INFO: Got endpoints: latency-svc-xlmg4 [161.701992ms] Mar 16 13:33:47.689: INFO: Created: latency-svc-jnl6q Mar 16 13:33:47.772: INFO: Got endpoints: latency-svc-jnl6q [293.387713ms] Mar 16 13:33:47.806: INFO: Created: latency-svc-wfmbb Mar 16 13:33:47.822: INFO: Got endpoints: latency-svc-wfmbb [342.799928ms] Mar 16 13:33:47.965: INFO: Created: latency-svc-dtt9m Mar 16 13:33:47.971: INFO: Got endpoints: latency-svc-dtt9m [491.67247ms] Mar 16 13:33:48.041: INFO: Created: latency-svc-2gfb6 Mar 16 13:33:48.150: INFO: Got endpoints: latency-svc-2gfb6 [670.678758ms] Mar 16 13:33:48.152: INFO: Created: latency-svc-mwms6 Mar 16 13:33:48.248: INFO: Got endpoints: latency-svc-mwms6 [768.919597ms] Mar 16 13:33:48.330: INFO: Created: latency-svc-nmd86 Mar 16 13:33:48.333: INFO: Got endpoints: latency-svc-nmd86 [854.022386ms] Mar 16 13:33:48.413: INFO: Created: latency-svc-wss49 Mar 16 13:33:48.473: INFO: Got 
endpoints: latency-svc-wss49 [993.814212ms] Mar 16 13:33:48.475: INFO: Created: latency-svc-t9cxh Mar 16 13:33:48.487: INFO: Got endpoints: latency-svc-t9cxh [1.00804453s] Mar 16 13:33:48.533: INFO: Created: latency-svc-x2ksx Mar 16 13:33:48.554: INFO: Got endpoints: latency-svc-x2ksx [1.074933918s] Mar 16 13:33:48.640: INFO: Created: latency-svc-pttpw Mar 16 13:33:48.692: INFO: Got endpoints: latency-svc-pttpw [1.21261948s] Mar 16 13:33:48.811: INFO: Created: latency-svc-mxndd Mar 16 13:33:48.815: INFO: Got endpoints: latency-svc-mxndd [1.335572946s] Mar 16 13:33:49.031: INFO: Created: latency-svc-hkfzk Mar 16 13:33:49.033: INFO: Got endpoints: latency-svc-hkfzk [1.554423882s] Mar 16 13:33:49.078: INFO: Created: latency-svc-jvpdz Mar 16 13:33:49.239: INFO: Got endpoints: latency-svc-jvpdz [1.759945074s] Mar 16 13:33:49.258: INFO: Created: latency-svc-gkndl Mar 16 13:33:49.329: INFO: Got endpoints: latency-svc-gkndl [1.849793364s] Mar 16 13:33:49.516: INFO: Created: latency-svc-srrwl Mar 16 13:33:49.665: INFO: Got endpoints: latency-svc-srrwl [2.024044971s] Mar 16 13:33:49.690: INFO: Created: latency-svc-kwx55 Mar 16 13:33:49.735: INFO: Got endpoints: latency-svc-kwx55 [1.962946215s] Mar 16 13:33:49.845: INFO: Created: latency-svc-vftr4 Mar 16 13:33:49.875: INFO: Got endpoints: latency-svc-vftr4 [2.053599578s] Mar 16 13:33:50.019: INFO: Created: latency-svc-7qjs2 Mar 16 13:33:50.022: INFO: Got endpoints: latency-svc-7qjs2 [2.051540566s] Mar 16 13:33:50.077: INFO: Created: latency-svc-sphrd Mar 16 13:33:50.096: INFO: Got endpoints: latency-svc-sphrd [1.946808026s] Mar 16 13:33:50.186: INFO: Created: latency-svc-rs2s7 Mar 16 13:33:50.189: INFO: Got endpoints: latency-svc-rs2s7 [1.940522966s] Mar 16 13:33:50.361: INFO: Created: latency-svc-zlqdv Mar 16 13:33:50.436: INFO: Got endpoints: latency-svc-zlqdv [2.103466895s] Mar 16 13:33:50.552: INFO: Created: latency-svc-lcvvh Mar 16 13:33:50.564: INFO: Got endpoints: latency-svc-lcvvh [2.090977064s] Mar 16 13:33:50.625: INFO: Created: latency-svc-z4hwg Mar 16 13:33:50.636: INFO: Got endpoints: latency-svc-z4hwg [2.149149803s] Mar 16 13:33:50.725: INFO: Created: latency-svc-9lggz Mar 16 13:33:50.728: INFO: Got endpoints: latency-svc-9lggz [2.173557663s] Mar 16 13:33:50.761: INFO: Created: latency-svc-88xfp Mar 16 13:33:50.824: INFO: Got endpoints: latency-svc-88xfp [2.131760381s] Mar 16 13:33:50.874: INFO: Created: latency-svc-w4phs Mar 16 13:33:50.918: INFO: Got endpoints: latency-svc-w4phs [2.103589564s] Mar 16 13:33:50.919: INFO: Created: latency-svc-snv49 Mar 16 13:33:51.048: INFO: Got endpoints: latency-svc-snv49 [2.013955004s] Mar 16 13:33:51.050: INFO: Created: latency-svc-s76hd Mar 16 13:33:51.101: INFO: Got endpoints: latency-svc-s76hd [1.861813684s] Mar 16 13:33:51.133: INFO: Created: latency-svc-h8lbv Mar 16 13:33:51.347: INFO: Got endpoints: latency-svc-h8lbv [2.017643515s] Mar 16 13:33:51.386: INFO: Created: latency-svc-s6bgn Mar 16 13:33:51.539: INFO: Got endpoints: latency-svc-s6bgn [1.873679675s] Mar 16 13:33:51.564: INFO: Created: latency-svc-rltxx Mar 16 13:33:51.586: INFO: Got endpoints: latency-svc-rltxx [1.850526227s] Mar 16 13:33:51.749: INFO: Created: latency-svc-qxwb7 Mar 16 13:33:51.771: INFO: Got endpoints: latency-svc-qxwb7 [1.896069666s] Mar 16 13:33:51.822: INFO: Created: latency-svc-989ch Mar 16 13:33:51.964: INFO: Got endpoints: latency-svc-989ch [1.941801997s] Mar 16 13:33:51.973: INFO: Created: latency-svc-fbn9k Mar 16 13:33:52.015: INFO: Got endpoints: latency-svc-fbn9k [1.918764596s] Mar 16 13:33:52.121: INFO: 
Created: latency-svc-fdjkq Mar 16 13:33:52.174: INFO: Got endpoints: latency-svc-fdjkq [1.985438574s] Mar 16 13:33:52.205: INFO: Created: latency-svc-88wgb Mar 16 13:33:52.360: INFO: Got endpoints: latency-svc-88wgb [1.923002825s] Mar 16 13:33:52.379: INFO: Created: latency-svc-btdkt Mar 16 13:33:52.421: INFO: Got endpoints: latency-svc-btdkt [1.85715694s] Mar 16 13:33:52.528: INFO: Created: latency-svc-mqztm Mar 16 13:33:52.540: INFO: Got endpoints: latency-svc-mqztm [1.903504493s] Mar 16 13:33:52.591: INFO: Created: latency-svc-q2659 Mar 16 13:33:52.606: INFO: Got endpoints: latency-svc-q2659 [1.878362402s] Mar 16 13:33:52.680: INFO: Created: latency-svc-4vt9s Mar 16 13:33:52.696: INFO: Got endpoints: latency-svc-4vt9s [1.872483097s] Mar 16 13:33:52.739: INFO: Created: latency-svc-lrbnp Mar 16 13:33:52.790: INFO: Got endpoints: latency-svc-lrbnp [1.871312378s] Mar 16 13:33:52.807: INFO: Created: latency-svc-vdkr5 Mar 16 13:33:52.823: INFO: Got endpoints: latency-svc-vdkr5 [1.775356086s] Mar 16 13:33:52.860: INFO: Created: latency-svc-7bv5t Mar 16 13:33:52.883: INFO: Got endpoints: latency-svc-7bv5t [1.782199838s] Mar 16 13:33:52.934: INFO: Created: latency-svc-xqrp9 Mar 16 13:33:52.973: INFO: Got endpoints: latency-svc-xqrp9 [1.626722213s] Mar 16 13:33:53.027: INFO: Created: latency-svc-2bql7 Mar 16 13:33:53.102: INFO: Got endpoints: latency-svc-2bql7 [1.562993182s] Mar 16 13:33:53.155: INFO: Created: latency-svc-wfpcx Mar 16 13:33:53.171: INFO: Got endpoints: latency-svc-wfpcx [1.585417114s] Mar 16 13:33:53.262: INFO: Created: latency-svc-bbdg6 Mar 16 13:33:53.286: INFO: Got endpoints: latency-svc-bbdg6 [1.514445786s] Mar 16 13:33:53.332: INFO: Created: latency-svc-xjgd4 Mar 16 13:33:53.352: INFO: Got endpoints: latency-svc-xjgd4 [1.387584208s] Mar 16 13:33:53.413: INFO: Created: latency-svc-hw675 Mar 16 13:33:53.422: INFO: Got endpoints: latency-svc-hw675 [1.406691431s] Mar 16 13:33:53.484: INFO: Created: latency-svc-qbdvb Mar 16 13:33:53.550: INFO: Got endpoints: latency-svc-qbdvb [1.376320803s] Mar 16 13:33:53.591: INFO: Created: latency-svc-8pblb Mar 16 13:33:53.605: INFO: Got endpoints: latency-svc-8pblb [1.245303722s] Mar 16 13:33:53.750: INFO: Created: latency-svc-cmqlr Mar 16 13:33:53.802: INFO: Got endpoints: latency-svc-cmqlr [1.381038596s] Mar 16 13:33:53.803: INFO: Created: latency-svc-rvs6r Mar 16 13:33:53.899: INFO: Got endpoints: latency-svc-rvs6r [1.358696476s] Mar 16 13:33:53.915: INFO: Created: latency-svc-nxplz Mar 16 13:33:53.947: INFO: Got endpoints: latency-svc-nxplz [1.340742936s] Mar 16 13:33:53.992: INFO: Created: latency-svc-785mk Mar 16 13:33:54.017: INFO: Got endpoints: latency-svc-785mk [1.321211811s] Mar 16 13:33:54.040: INFO: Created: latency-svc-x2sdk Mar 16 13:33:54.067: INFO: Got endpoints: latency-svc-x2sdk [1.276956181s] Mar 16 13:33:54.174: INFO: Created: latency-svc-vqrnp Mar 16 13:33:54.181: INFO: Got endpoints: latency-svc-vqrnp [1.357957797s] Mar 16 13:33:54.221: INFO: Created: latency-svc-hj9dq Mar 16 13:33:54.242: INFO: Got endpoints: latency-svc-hj9dq [1.358305537s] Mar 16 13:33:54.323: INFO: Created: latency-svc-8tdm4 Mar 16 13:33:54.333: INFO: Got endpoints: latency-svc-8tdm4 [1.359696652s] Mar 16 13:33:54.376: INFO: Created: latency-svc-676hb Mar 16 13:33:54.392: INFO: Got endpoints: latency-svc-676hb [1.290206218s] Mar 16 13:33:54.486: INFO: Created: latency-svc-fl2pk Mar 16 13:33:54.512: INFO: Got endpoints: latency-svc-fl2pk [1.34089036s] Mar 16 13:33:54.564: INFO: Created: latency-svc-4ppnn Mar 16 13:33:54.640: INFO: Got endpoints: 
latency-svc-4ppnn [1.354407972s] Mar 16 13:33:54.645: INFO: Created: latency-svc-hjhlq Mar 16 13:33:54.663: INFO: Got endpoints: latency-svc-hjhlq [1.310943724s] Mar 16 13:33:54.821: INFO: Created: latency-svc-hc6th Mar 16 13:33:54.824: INFO: Got endpoints: latency-svc-hc6th [1.402121172s] Mar 16 13:33:54.911: INFO: Created: latency-svc-wpmp7 Mar 16 13:33:55.024: INFO: Got endpoints: latency-svc-wpmp7 [1.473323176s] Mar 16 13:33:55.041: INFO: Created: latency-svc-qp87r Mar 16 13:33:55.107: INFO: Got endpoints: latency-svc-qp87r [1.501940522s] Mar 16 13:33:55.210: INFO: Created: latency-svc-576b7 Mar 16 13:33:55.257: INFO: Got endpoints: latency-svc-576b7 [1.455026167s] Mar 16 13:33:55.384: INFO: Created: latency-svc-xfrbd Mar 16 13:33:55.413: INFO: Got endpoints: latency-svc-xfrbd [1.514757895s] Mar 16 13:33:55.534: INFO: Created: latency-svc-fzz9r Mar 16 13:33:55.559: INFO: Got endpoints: latency-svc-fzz9r [1.611716343s] Mar 16 13:33:55.617: INFO: Created: latency-svc-vgm2k Mar 16 13:33:55.748: INFO: Got endpoints: latency-svc-vgm2k [1.730947685s] Mar 16 13:33:55.842: INFO: Created: latency-svc-mh6pv Mar 16 13:33:56.042: INFO: Got endpoints: latency-svc-mh6pv [1.974848062s] Mar 16 13:33:56.060: INFO: Created: latency-svc-4mldq Mar 16 13:33:56.109: INFO: Got endpoints: latency-svc-4mldq [1.927779199s] Mar 16 13:33:56.204: INFO: Created: latency-svc-7546k Mar 16 13:33:56.236: INFO: Got endpoints: latency-svc-7546k [1.993902485s] Mar 16 13:33:56.342: INFO: Created: latency-svc-55h5l Mar 16 13:33:56.385: INFO: Got endpoints: latency-svc-55h5l [2.051770147s] Mar 16 13:33:56.426: INFO: Created: latency-svc-4trks Mar 16 13:33:56.491: INFO: Got endpoints: latency-svc-4trks [2.098712771s] Mar 16 13:33:56.522: INFO: Created: latency-svc-z6rd7 Mar 16 13:33:56.541: INFO: Got endpoints: latency-svc-z6rd7 [2.028936017s] Mar 16 13:33:56.653: INFO: Created: latency-svc-8hb57 Mar 16 13:33:56.686: INFO: Got endpoints: latency-svc-8hb57 [2.045109334s] Mar 16 13:33:56.881: INFO: Created: latency-svc-5l5pq Mar 16 13:33:56.918: INFO: Got endpoints: latency-svc-5l5pq [2.254928028s] Mar 16 13:33:56.918: INFO: Created: latency-svc-vn4bk Mar 16 13:33:56.977: INFO: Got endpoints: latency-svc-vn4bk [2.153072718s] Mar 16 13:33:57.072: INFO: Created: latency-svc-4kbd6 Mar 16 13:33:57.075: INFO: Got endpoints: latency-svc-4kbd6 [2.051416233s] Mar 16 13:33:57.147: INFO: Created: latency-svc-l6z2q Mar 16 13:33:57.233: INFO: Got endpoints: latency-svc-l6z2q [2.126534922s] Mar 16 13:33:57.261: INFO: Created: latency-svc-5fmmk Mar 16 13:33:57.286: INFO: Got endpoints: latency-svc-5fmmk [2.028967764s] Mar 16 13:33:57.320: INFO: Created: latency-svc-tdpz2 Mar 16 13:33:57.365: INFO: Got endpoints: latency-svc-tdpz2 [1.951349012s] Mar 16 13:33:57.379: INFO: Created: latency-svc-hn242 Mar 16 13:33:57.401: INFO: Got endpoints: latency-svc-hn242 [1.841993637s] Mar 16 13:33:57.453: INFO: Created: latency-svc-p9rbv Mar 16 13:33:57.569: INFO: Got endpoints: latency-svc-p9rbv [1.820097642s] Mar 16 13:33:57.572: INFO: Created: latency-svc-qrhnk Mar 16 13:33:57.581: INFO: Got endpoints: latency-svc-qrhnk [1.539068875s] Mar 16 13:33:57.625: INFO: Created: latency-svc-dfwz7 Mar 16 13:33:57.659: INFO: Got endpoints: latency-svc-dfwz7 [1.550206131s] Mar 16 13:33:57.775: INFO: Created: latency-svc-p2hfj Mar 16 13:33:57.818: INFO: Got endpoints: latency-svc-p2hfj [1.582660154s] Mar 16 13:33:57.891: INFO: Created: latency-svc-bbh6b Mar 16 13:33:57.917: INFO: Got endpoints: latency-svc-bbh6b [1.532074264s] Mar 16 13:33:57.968: INFO: Created: 
latency-svc-fgf4q Mar 16 13:33:58.029: INFO: Got endpoints: latency-svc-fgf4q [1.538549966s] Mar 16 13:33:58.058: INFO: Created: latency-svc-hh6np Mar 16 13:33:58.079: INFO: Got endpoints: latency-svc-hh6np [1.537960322s] Mar 16 13:33:58.107: INFO: Created: latency-svc-xxhmr Mar 16 13:33:58.191: INFO: Got endpoints: latency-svc-xxhmr [1.505212384s] Mar 16 13:33:58.193: INFO: Created: latency-svc-rmtf7 Mar 16 13:33:58.212: INFO: Got endpoints: latency-svc-rmtf7 [1.293811836s] Mar 16 13:33:58.249: INFO: Created: latency-svc-zb5mf Mar 16 13:33:58.266: INFO: Got endpoints: latency-svc-zb5mf [1.289027227s] Mar 16 13:33:58.353: INFO: Created: latency-svc-kwk2s Mar 16 13:33:58.356: INFO: Got endpoints: latency-svc-kwk2s [1.280855751s] Mar 16 13:33:58.546: INFO: Created: latency-svc-6vb4v Mar 16 13:33:58.548: INFO: Got endpoints: latency-svc-6vb4v [1.314582708s] Mar 16 13:33:58.714: INFO: Created: latency-svc-jf42z Mar 16 13:33:58.722: INFO: Got endpoints: latency-svc-jf42z [1.435989183s] Mar 16 13:33:58.796: INFO: Created: latency-svc-rftzj Mar 16 13:33:58.952: INFO: Got endpoints: latency-svc-rftzj [1.587144827s] Mar 16 13:33:58.998: INFO: Created: latency-svc-ptczh Mar 16 13:33:59.210: INFO: Got endpoints: latency-svc-ptczh [1.809209493s] Mar 16 13:33:59.213: INFO: Created: latency-svc-mswtc Mar 16 13:33:59.263: INFO: Got endpoints: latency-svc-mswtc [1.69407995s] Mar 16 13:33:59.468: INFO: Created: latency-svc-w4dqv Mar 16 13:33:59.470: INFO: Got endpoints: latency-svc-w4dqv [1.889099409s] Mar 16 13:33:59.534: INFO: Created: latency-svc-cz98k Mar 16 13:33:59.701: INFO: Got endpoints: latency-svc-cz98k [2.041925353s] Mar 16 13:33:59.726: INFO: Created: latency-svc-gn6j5 Mar 16 13:33:59.767: INFO: Got endpoints: latency-svc-gn6j5 [1.948089057s] Mar 16 13:33:59.923: INFO: Created: latency-svc-xwrn8 Mar 16 13:33:59.947: INFO: Got endpoints: latency-svc-xwrn8 [2.029725375s] Mar 16 13:34:00.013: INFO: Created: latency-svc-gltpf Mar 16 13:34:00.263: INFO: Got endpoints: latency-svc-gltpf [2.233571869s] Mar 16 13:34:00.320: INFO: Created: latency-svc-wnnld Mar 16 13:34:00.415: INFO: Got endpoints: latency-svc-wnnld [2.335633074s] Mar 16 13:34:00.593: INFO: Created: latency-svc-brc7v Mar 16 13:34:00.596: INFO: Got endpoints: latency-svc-brc7v [2.405064766s] Mar 16 13:34:00.798: INFO: Created: latency-svc-nn5gl Mar 16 13:34:00.801: INFO: Got endpoints: latency-svc-nn5gl [2.589638117s] Mar 16 13:34:00.887: INFO: Created: latency-svc-p7bc5 Mar 16 13:34:00.970: INFO: Got endpoints: latency-svc-p7bc5 [2.703130127s] Mar 16 13:34:00.972: INFO: Created: latency-svc-d54bk Mar 16 13:34:00.984: INFO: Got endpoints: latency-svc-d54bk [2.627905666s] Mar 16 13:34:01.032: INFO: Created: latency-svc-h6579 Mar 16 13:34:01.044: INFO: Got endpoints: latency-svc-h6579 [2.496110625s] Mar 16 13:34:01.122: INFO: Created: latency-svc-jcc62 Mar 16 13:34:01.147: INFO: Got endpoints: latency-svc-jcc62 [2.424227898s] Mar 16 13:34:01.263: INFO: Created: latency-svc-djdbr Mar 16 13:34:01.267: INFO: Got endpoints: latency-svc-djdbr [2.314505296s] Mar 16 13:34:01.333: INFO: Created: latency-svc-2tk4d Mar 16 13:34:01.345: INFO: Got endpoints: latency-svc-2tk4d [2.134891481s] Mar 16 13:34:01.431: INFO: Created: latency-svc-wgq69 Mar 16 13:34:01.463: INFO: Got endpoints: latency-svc-wgq69 [2.200115351s] Mar 16 13:34:01.501: INFO: Created: latency-svc-dmcv6 Mar 16 13:34:01.623: INFO: Got endpoints: latency-svc-dmcv6 [2.152205373s] Mar 16 13:34:01.657: INFO: Created: latency-svc-77qk7 Mar 16 13:34:01.676: INFO: Got endpoints: 
latency-svc-77qk7 [1.974482596s] Mar 16 13:34:01.703: INFO: Created: latency-svc-z566x Mar 16 13:34:01.796: INFO: Got endpoints: latency-svc-z566x [2.029602432s] Mar 16 13:34:01.835: INFO: Created: latency-svc-84djw Mar 16 13:34:01.862: INFO: Got endpoints: latency-svc-84djw [1.915132371s] Mar 16 13:34:01.964: INFO: Created: latency-svc-9bk8b Mar 16 13:34:02.000: INFO: Got endpoints: latency-svc-9bk8b [1.737028286s] Mar 16 13:34:02.063: INFO: Created: latency-svc-7p46p Mar 16 13:34:02.149: INFO: Got endpoints: latency-svc-7p46p [1.734429933s] Mar 16 13:34:02.189: INFO: Created: latency-svc-77mll Mar 16 13:34:02.216: INFO: Got endpoints: latency-svc-77mll [1.620315681s] Mar 16 13:34:02.373: INFO: Created: latency-svc-k28k6 Mar 16 13:34:02.400: INFO: Got endpoints: latency-svc-k28k6 [1.598640898s] Mar 16 13:34:02.564: INFO: Created: latency-svc-xwnpp Mar 16 13:34:02.572: INFO: Got endpoints: latency-svc-xwnpp [1.602347881s] Mar 16 13:34:02.640: INFO: Created: latency-svc-h289n Mar 16 13:34:02.736: INFO: Got endpoints: latency-svc-h289n [1.752019247s] Mar 16 13:34:02.760: INFO: Created: latency-svc-2786g Mar 16 13:34:02.787: INFO: Got endpoints: latency-svc-2786g [1.742229163s] Mar 16 13:34:02.830: INFO: Created: latency-svc-n7c5c Mar 16 13:34:02.928: INFO: Got endpoints: latency-svc-n7c5c [1.780830725s] Mar 16 13:34:02.962: INFO: Created: latency-svc-6mqjq Mar 16 13:34:03.015: INFO: Got endpoints: latency-svc-6mqjq [1.748221s] Mar 16 13:34:03.193: INFO: Created: latency-svc-rp9rx Mar 16 13:34:03.196: INFO: Got endpoints: latency-svc-rp9rx [1.851367678s] Mar 16 13:34:03.335: INFO: Created: latency-svc-l7hds Mar 16 13:34:03.338: INFO: Got endpoints: latency-svc-l7hds [1.875062531s] Mar 16 13:34:03.515: INFO: Created: latency-svc-6c9tg Mar 16 13:34:03.525: INFO: Got endpoints: latency-svc-6c9tg [1.902454737s] Mar 16 13:34:03.573: INFO: Created: latency-svc-4f96r Mar 16 13:34:03.665: INFO: Got endpoints: latency-svc-4f96r [1.988968456s] Mar 16 13:34:03.689: INFO: Created: latency-svc-55zld Mar 16 13:34:03.707: INFO: Got endpoints: latency-svc-55zld [1.911081099s] Mar 16 13:34:03.743: INFO: Created: latency-svc-sbfkn Mar 16 13:34:03.856: INFO: Got endpoints: latency-svc-sbfkn [1.993635519s] Mar 16 13:34:03.879: INFO: Created: latency-svc-8xc5p Mar 16 13:34:03.924: INFO: Got endpoints: latency-svc-8xc5p [1.923813911s] Mar 16 13:34:04.005: INFO: Created: latency-svc-tcswn Mar 16 13:34:04.020: INFO: Got endpoints: latency-svc-tcswn [1.870584354s] Mar 16 13:34:04.055: INFO: Created: latency-svc-8rgpc Mar 16 13:34:04.191: INFO: Got endpoints: latency-svc-8rgpc [1.974873341s] Mar 16 13:34:04.217: INFO: Created: latency-svc-fh4np Mar 16 13:34:04.267: INFO: Got endpoints: latency-svc-fh4np [1.866601231s] Mar 16 13:34:04.420: INFO: Created: latency-svc-v2ssq Mar 16 13:34:04.468: INFO: Got endpoints: latency-svc-v2ssq [1.896400447s] Mar 16 13:34:04.527: INFO: Created: latency-svc-b4p9d Mar 16 13:34:04.561: INFO: Got endpoints: latency-svc-b4p9d [1.824625338s] Mar 16 13:34:04.592: INFO: Created: latency-svc-ftbdv Mar 16 13:34:04.615: INFO: Got endpoints: latency-svc-ftbdv [1.828493026s] Mar 16 13:34:04.697: INFO: Created: latency-svc-hvlhn Mar 16 13:34:04.711: INFO: Got endpoints: latency-svc-hvlhn [1.783224195s] Mar 16 13:34:04.750: INFO: Created: latency-svc-9flz5 Mar 16 13:34:04.808: INFO: Got endpoints: latency-svc-9flz5 [1.793072056s] Mar 16 13:34:04.820: INFO: Created: latency-svc-mq577 Mar 16 13:34:04.844: INFO: Got endpoints: latency-svc-mq577 [1.647237535s] Mar 16 13:34:04.892: INFO: Created: 
latency-svc-z47cv Mar 16 13:34:04.940: INFO: Got endpoints: latency-svc-z47cv [1.60150135s] Mar 16 13:34:04.988: INFO: Created: latency-svc-ngjn9 Mar 16 13:34:05.012: INFO: Got endpoints: latency-svc-ngjn9 [1.486589568s] Mar 16 13:34:05.084: INFO: Created: latency-svc-62d4v Mar 16 13:34:05.140: INFO: Got endpoints: latency-svc-62d4v [1.475593739s] Mar 16 13:34:05.222: INFO: Created: latency-svc-6pdc5 Mar 16 13:34:05.258: INFO: Got endpoints: latency-svc-6pdc5 [1.550466078s] Mar 16 13:34:05.384: INFO: Created: latency-svc-s6s6g Mar 16 13:34:05.434: INFO: Created: latency-svc-wldnm Mar 16 13:34:05.434: INFO: Got endpoints: latency-svc-s6s6g [1.578313756s] Mar 16 13:34:05.463: INFO: Got endpoints: latency-svc-wldnm [1.538862457s] Mar 16 13:34:05.617: INFO: Created: latency-svc-44zll Mar 16 13:34:05.620: INFO: Got endpoints: latency-svc-44zll [1.599622935s] Mar 16 13:34:05.761: INFO: Created: latency-svc-j5hcx Mar 16 13:34:05.764: INFO: Got endpoints: latency-svc-j5hcx [1.572221794s] Mar 16 13:34:05.830: INFO: Created: latency-svc-54n27 Mar 16 13:34:05.928: INFO: Got endpoints: latency-svc-54n27 [1.661105031s] Mar 16 13:34:05.937: INFO: Created: latency-svc-v989d Mar 16 13:34:05.986: INFO: Got endpoints: latency-svc-v989d [1.517043059s] Mar 16 13:34:06.025: INFO: Created: latency-svc-jbmjc Mar 16 13:34:06.149: INFO: Got endpoints: latency-svc-jbmjc [1.588375558s] Mar 16 13:34:06.163: INFO: Created: latency-svc-8bqsj Mar 16 13:34:06.184: INFO: Got endpoints: latency-svc-8bqsj [1.568633036s] Mar 16 13:34:06.352: INFO: Created: latency-svc-4hb9l Mar 16 13:34:06.376: INFO: Got endpoints: latency-svc-4hb9l [1.664840803s] Mar 16 13:34:06.422: INFO: Created: latency-svc-8mj9r Mar 16 13:34:06.581: INFO: Got endpoints: latency-svc-8mj9r [1.773310418s] Mar 16 13:34:06.584: INFO: Created: latency-svc-l5zt5 Mar 16 13:34:06.592: INFO: Got endpoints: latency-svc-l5zt5 [1.747981998s] Mar 16 13:34:06.645: INFO: Created: latency-svc-cpxck Mar 16 13:34:06.664: INFO: Got endpoints: latency-svc-cpxck [1.724187731s] Mar 16 13:34:06.737: INFO: Created: latency-svc-ldrqk Mar 16 13:34:06.754: INFO: Got endpoints: latency-svc-ldrqk [1.742122202s] Mar 16 13:34:06.905: INFO: Created: latency-svc-bp5q8 Mar 16 13:34:06.935: INFO: Got endpoints: latency-svc-bp5q8 [1.79415256s] Mar 16 13:34:07.108: INFO: Created: latency-svc-qb8pr Mar 16 13:34:07.140: INFO: Got endpoints: latency-svc-qb8pr [1.882059185s] Mar 16 13:34:07.200: INFO: Created: latency-svc-5hdlb Mar 16 13:34:07.347: INFO: Got endpoints: latency-svc-5hdlb [1.912675284s] Mar 16 13:34:07.382: INFO: Created: latency-svc-q2z2d Mar 16 13:34:07.426: INFO: Got endpoints: latency-svc-q2z2d [1.963343579s] Mar 16 13:34:07.575: INFO: Created: latency-svc-hh458 Mar 16 13:34:07.613: INFO: Got endpoints: latency-svc-hh458 [1.993201334s] Mar 16 13:34:07.736: INFO: Created: latency-svc-mtmz7 Mar 16 13:34:07.756: INFO: Got endpoints: latency-svc-mtmz7 [1.99257896s] Mar 16 13:34:07.790: INFO: Created: latency-svc-8xvrx Mar 16 13:34:07.823: INFO: Got endpoints: latency-svc-8xvrx [1.895366946s] Mar 16 13:34:07.892: INFO: Created: latency-svc-hnmzr Mar 16 13:34:07.919: INFO: Got endpoints: latency-svc-hnmzr [1.933761896s] Mar 16 13:34:07.961: INFO: Created: latency-svc-pzg8d Mar 16 13:34:08.048: INFO: Got endpoints: latency-svc-pzg8d [1.898110046s] Mar 16 13:34:08.058: INFO: Created: latency-svc-jdjwq Mar 16 13:34:08.076: INFO: Got endpoints: latency-svc-jdjwq [1.891720601s] Mar 16 13:34:08.131: INFO: Created: latency-svc-lnght Mar 16 13:34:08.179: INFO: Got endpoints: 
latency-svc-lnght [1.803452867s] Mar 16 13:34:08.197: INFO: Created: latency-svc-zw777 Mar 16 13:34:08.220: INFO: Got endpoints: latency-svc-zw777 [1.638364387s] Mar 16 13:34:08.273: INFO: Created: latency-svc-ztpw6 Mar 16 13:34:08.335: INFO: Got endpoints: latency-svc-ztpw6 [1.742982715s] Mar 16 13:34:08.363: INFO: Created: latency-svc-jg478 Mar 16 13:34:08.412: INFO: Got endpoints: latency-svc-jg478 [1.748391937s] Mar 16 13:34:08.462: INFO: Created: latency-svc-rn7l5 Mar 16 13:34:08.465: INFO: Got endpoints: latency-svc-rn7l5 [1.711010167s] Mar 16 13:34:08.511: INFO: Created: latency-svc-mhlnk Mar 16 13:34:08.527: INFO: Got endpoints: latency-svc-mhlnk [1.592174296s] Mar 16 13:34:08.561: INFO: Created: latency-svc-kk4fc Mar 16 13:34:08.686: INFO: Created: latency-svc-2c7st Mar 16 13:34:08.720: INFO: Got endpoints: latency-svc-kk4fc [1.579502347s] Mar 16 13:34:08.720: INFO: Got endpoints: latency-svc-2c7st [1.372695232s] Mar 16 13:34:08.767: INFO: Created: latency-svc-zsndt Mar 16 13:34:08.826: INFO: Got endpoints: latency-svc-zsndt [1.399767352s] Mar 16 13:34:08.842: INFO: Created: latency-svc-qtlfp Mar 16 13:34:08.863: INFO: Got endpoints: latency-svc-qtlfp [1.250344565s] Mar 16 13:34:08.904: INFO: Created: latency-svc-mtwm5 Mar 16 13:34:09.006: INFO: Got endpoints: latency-svc-mtwm5 [1.24975527s] Mar 16 13:34:09.043: INFO: Created: latency-svc-7t579 Mar 16 13:34:09.056: INFO: Got endpoints: latency-svc-7t579 [1.232464118s] Mar 16 13:34:09.180: INFO: Created: latency-svc-78bdz Mar 16 13:34:09.200: INFO: Got endpoints: latency-svc-78bdz [1.280888541s] Mar 16 13:34:09.245: INFO: Created: latency-svc-w5zrb Mar 16 13:34:09.261: INFO: Got endpoints: latency-svc-w5zrb [1.212819259s] Mar 16 13:34:09.354: INFO: Created: latency-svc-7qwlf Mar 16 13:34:09.358: INFO: Got endpoints: latency-svc-7qwlf [1.28201608s] Mar 16 13:34:09.444: INFO: Created: latency-svc-h7gdd Mar 16 13:34:09.527: INFO: Got endpoints: latency-svc-h7gdd [1.347435005s] Mar 16 13:34:09.580: INFO: Created: latency-svc-lj9fr Mar 16 13:34:09.682: INFO: Got endpoints: latency-svc-lj9fr [1.462411453s] Mar 16 13:34:09.720: INFO: Created: latency-svc-28p92 Mar 16 13:34:09.753: INFO: Got endpoints: latency-svc-28p92 [1.418496337s] Mar 16 13:34:09.820: INFO: Created: latency-svc-zz6d7 Mar 16 13:34:09.906: INFO: Got endpoints: latency-svc-zz6d7 [1.493684244s] Mar 16 13:34:09.994: INFO: Created: latency-svc-28jkb Mar 16 13:34:09.998: INFO: Got endpoints: latency-svc-28jkb [1.532731986s] Mar 16 13:34:10.090: INFO: Created: latency-svc-r779b Mar 16 13:34:10.150: INFO: Got endpoints: latency-svc-r779b [1.62273172s] Mar 16 13:34:10.189: INFO: Created: latency-svc-cqc7w Mar 16 13:34:10.359: INFO: Got endpoints: latency-svc-cqc7w [1.6395671s] Mar 16 13:34:10.362: INFO: Created: latency-svc-p9ll7 Mar 16 13:34:10.393: INFO: Got endpoints: latency-svc-p9ll7 [1.673233s] Mar 16 13:34:10.426: INFO: Created: latency-svc-6k7cm Mar 16 13:34:10.545: INFO: Got endpoints: latency-svc-6k7cm [1.718698438s] Mar 16 13:34:10.554: INFO: Created: latency-svc-zmxqm Mar 16 13:34:10.635: INFO: Got endpoints: latency-svc-zmxqm [1.77183887s] Mar 16 13:34:10.734: INFO: Created: latency-svc-k675m Mar 16 13:34:10.783: INFO: Got endpoints: latency-svc-k675m [1.777077992s] Mar 16 13:34:10.880: INFO: Created: latency-svc-5z4fq Mar 16 13:34:10.897: INFO: Got endpoints: latency-svc-5z4fq [1.841374613s] Mar 16 13:34:10.954: INFO: Created: latency-svc-nhgwv Mar 16 13:34:10.970: INFO: Got endpoints: latency-svc-nhgwv [1.769133585s] Mar 16 13:34:10.970: INFO: Latencies: 
[161.701992ms 293.387713ms 342.799928ms 491.67247ms 670.678758ms 768.919597ms 854.022386ms 993.814212ms 1.00804453s 1.074933918s 1.21261948s 1.212819259s 1.232464118s 1.245303722s 1.24975527s 1.250344565s 1.276956181s 1.280855751s 1.280888541s 1.28201608s 1.289027227s 1.290206218s 1.293811836s 1.310943724s 1.314582708s 1.321211811s 1.335572946s 1.340742936s 1.34089036s 1.347435005s 1.354407972s 1.357957797s 1.358305537s 1.358696476s 1.359696652s 1.372695232s 1.376320803s 1.381038596s 1.387584208s 1.399767352s 1.402121172s 1.406691431s 1.418496337s 1.435989183s 1.455026167s 1.462411453s 1.473323176s 1.475593739s 1.486589568s 1.493684244s 1.501940522s 1.505212384s 1.514445786s 1.514757895s 1.517043059s 1.532074264s 1.532731986s 1.537960322s 1.538549966s 1.538862457s 1.539068875s 1.550206131s 1.550466078s 1.554423882s 1.562993182s 1.568633036s 1.572221794s 1.578313756s 1.579502347s 1.582660154s 1.585417114s 1.587144827s 1.588375558s 1.592174296s 1.598640898s 1.599622935s 1.60150135s 1.602347881s 1.611716343s 1.620315681s 1.62273172s 1.626722213s 1.638364387s 1.6395671s 1.647237535s 1.661105031s 1.664840803s 1.673233s 1.69407995s 1.711010167s 1.718698438s 1.724187731s 1.730947685s 1.734429933s 1.737028286s 1.742122202s 1.742229163s 1.742982715s 1.747981998s 1.748221s 1.748391937s 1.752019247s 1.759945074s 1.769133585s 1.77183887s 1.773310418s 1.775356086s 1.777077992s 1.780830725s 1.782199838s 1.783224195s 1.793072056s 1.79415256s 1.803452867s 1.809209493s 1.820097642s 1.824625338s 1.828493026s 1.841374613s 1.841993637s 1.849793364s 1.850526227s 1.851367678s 1.85715694s 1.861813684s 1.866601231s 1.870584354s 1.871312378s 1.872483097s 1.873679675s 1.875062531s 1.878362402s 1.882059185s 1.889099409s 1.891720601s 1.895366946s 1.896069666s 1.896400447s 1.898110046s 1.902454737s 1.903504493s 1.911081099s 1.912675284s 1.915132371s 1.918764596s 1.923002825s 1.923813911s 1.927779199s 1.933761896s 1.940522966s 1.941801997s 1.946808026s 1.948089057s 1.951349012s 1.962946215s 1.963343579s 1.974482596s 1.974848062s 1.974873341s 1.985438574s 1.988968456s 1.99257896s 1.993201334s 1.993635519s 1.993902485s 2.013955004s 2.017643515s 2.024044971s 2.028936017s 2.028967764s 2.029602432s 2.029725375s 2.041925353s 2.045109334s 2.051416233s 2.051540566s 2.051770147s 2.053599578s 2.090977064s 2.098712771s 2.103466895s 2.103589564s 2.126534922s 2.131760381s 2.134891481s 2.149149803s 2.152205373s 2.153072718s 2.173557663s 2.200115351s 2.233571869s 2.254928028s 2.314505296s 2.335633074s 2.405064766s 2.424227898s 2.496110625s 2.589638117s 2.627905666s 2.703130127s] Mar 16 13:34:10.970: INFO: 50 %ile: 1.748391937s Mar 16 13:34:10.970: INFO: 90 %ile: 2.103466895s Mar 16 13:34:10.970: INFO: 99 %ile: 2.627905666s Mar 16 13:34:10.970: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:34:10.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-662" for this suite. 
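The %ile lines above come from sorting the 200 per-service endpoint-propagation latencies and indexing into the sorted slice. A sketch of that arithmetic (the suite's exact rounding rule may differ slightly):

package latencysketch

import (
	"sort"
	"time"
)

// percentile returns the p-th percentile of a latency sample by sorting and
// indexing; e.g. p=50 over 200 samples selects around the 100th element.
func percentile(lat []time.Duration, p float64) time.Duration {
	if len(lat) == 0 {
		return 0
	}
	sort.Slice(lat, func(i, j int) bool { return lat[i] < lat[j] })
	idx := int(float64(len(lat)) * p / 100.0)
	if idx >= len(lat) {
		idx = len(lat) - 1
	}
	return lat[idx]
}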
Mar 16 13:35:25.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:35:25.123: INFO: namespace svc-latency-662 deletion completed in 1m14.087123487s • [SLOW TEST:103.291 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:35:25.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-4211 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 16 13:35:25.298: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 16 13:35:53.582: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.175 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4211 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:35:53.582: INFO: >>> kubeConfig: /root/.kube/config I0316 13:35:53.614548 6 log.go:172] (0xc00115a370) (0xc001a36500) Create stream I0316 13:35:53.614573 6 log.go:172] (0xc00115a370) (0xc001a36500) Stream added, broadcasting: 1 I0316 13:35:53.616406 6 log.go:172] (0xc00115a370) Reply frame received for 1 I0316 13:35:53.616456 6 log.go:172] (0xc00115a370) (0xc0013a2460) Create stream I0316 13:35:53.616473 6 log.go:172] (0xc00115a370) (0xc0013a2460) Stream added, broadcasting: 3 I0316 13:35:53.617727 6 log.go:172] (0xc00115a370) Reply frame received for 3 I0316 13:35:53.617768 6 log.go:172] (0xc00115a370) (0xc0013a2500) Create stream I0316 13:35:53.617782 6 log.go:172] (0xc00115a370) (0xc0013a2500) Stream added, broadcasting: 5 I0316 13:35:53.618672 6 log.go:172] (0xc00115a370) Reply frame received for 5 I0316 13:35:54.676427 6 log.go:172] (0xc00115a370) Data frame received for 3 I0316 13:35:54.676477 6 log.go:172] (0xc00115a370) Data frame received for 5 I0316 13:35:54.676520 6 log.go:172] (0xc0013a2500) (5) Data frame handling I0316 13:35:54.676555 6 log.go:172] (0xc0013a2460) (3) Data frame handling I0316 13:35:54.676689 6 log.go:172] (0xc0013a2460) (3) Data frame sent I0316 13:35:54.676709 6 log.go:172] (0xc00115a370) Data frame received for 3 I0316 13:35:54.676723 6 log.go:172] (0xc0013a2460) (3) Data frame handling I0316 13:35:54.679314 6 log.go:172] (0xc00115a370) Data frame received for 1 I0316 13:35:54.679337 6 log.go:172] (0xc001a36500) (1) Data frame handling I0316 13:35:54.679356 6 log.go:172] (0xc001a36500) (1) Data frame 
sent I0316 13:35:54.679374 6 log.go:172] (0xc00115a370) (0xc001a36500) Stream removed, broadcasting: 1 I0316 13:35:54.679474 6 log.go:172] (0xc00115a370) (0xc001a36500) Stream removed, broadcasting: 1 I0316 13:35:54.679490 6 log.go:172] (0xc00115a370) (0xc0013a2460) Stream removed, broadcasting: 3 I0316 13:35:54.679505 6 log.go:172] (0xc00115a370) (0xc0013a2500) Stream removed, broadcasting: 5 I0316 13:35:54.679526 6 log.go:172] (0xc00115a370) Go away received Mar 16 13:35:54.679: INFO: Found all expected endpoints: [netserver-0] Mar 16 13:35:54.682: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.183 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4211 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:35:54.682: INFO: >>> kubeConfig: /root/.kube/config I0316 13:35:54.708974 6 log.go:172] (0xc000cd2a50) (0xc002b25180) Create stream I0316 13:35:54.709018 6 log.go:172] (0xc000cd2a50) (0xc002b25180) Stream added, broadcasting: 1 I0316 13:35:54.711205 6 log.go:172] (0xc000cd2a50) Reply frame received for 1 I0316 13:35:54.711233 6 log.go:172] (0xc000cd2a50) (0xc0019aa0a0) Create stream I0316 13:35:54.711246 6 log.go:172] (0xc000cd2a50) (0xc0019aa0a0) Stream added, broadcasting: 3 I0316 13:35:54.711943 6 log.go:172] (0xc000cd2a50) Reply frame received for 3 I0316 13:35:54.711961 6 log.go:172] (0xc000cd2a50) (0xc002b25220) Create stream I0316 13:35:54.711970 6 log.go:172] (0xc000cd2a50) (0xc002b25220) Stream added, broadcasting: 5 I0316 13:35:54.712856 6 log.go:172] (0xc000cd2a50) Reply frame received for 5 I0316 13:35:55.797271 6 log.go:172] (0xc000cd2a50) Data frame received for 3 I0316 13:35:55.797365 6 log.go:172] (0xc000cd2a50) Data frame received for 5 I0316 13:35:55.797401 6 log.go:172] (0xc002b25220) (5) Data frame handling I0316 13:35:55.797423 6 log.go:172] (0xc0019aa0a0) (3) Data frame handling I0316 13:35:55.797457 6 log.go:172] (0xc0019aa0a0) (3) Data frame sent I0316 13:35:55.797481 6 log.go:172] (0xc000cd2a50) Data frame received for 3 I0316 13:35:55.797498 6 log.go:172] (0xc0019aa0a0) (3) Data frame handling I0316 13:35:55.799258 6 log.go:172] (0xc000cd2a50) Data frame received for 1 I0316 13:35:55.799276 6 log.go:172] (0xc002b25180) (1) Data frame handling I0316 13:35:55.799282 6 log.go:172] (0xc002b25180) (1) Data frame sent I0316 13:35:55.799292 6 log.go:172] (0xc000cd2a50) (0xc002b25180) Stream removed, broadcasting: 1 I0316 13:35:55.799299 6 log.go:172] (0xc000cd2a50) Go away received I0316 13:35:55.799499 6 log.go:172] (0xc000cd2a50) (0xc002b25180) Stream removed, broadcasting: 1 I0316 13:35:55.799537 6 log.go:172] (0xc000cd2a50) (0xc0019aa0a0) Stream removed, broadcasting: 3 I0316 13:35:55.799554 6 log.go:172] (0xc000cd2a50) (0xc002b25220) Stream removed, broadcasting: 5 Mar 16 13:35:55.799: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:35:55.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4211" for this suite. 
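The exec'd command above -- echo hostName | nc -w 1 -u <pod-ip> 8081 -- is the whole check: send "hostName" over UDP to each netserver pod and expect its hostname back, proving node-to-pod UDP reachability. The same probe expressed in Go, for illustration:

package netsketch

import (
	"fmt"
	"net"
	"time"
)

// udpProbe sends "hostName" to ip:port over UDP and returns the reply,
// mirroring the nc-based exec the test runs from host-test-container-pod.
func udpProbe(ip string, port int) (string, error) {
	conn, err := net.DialTimeout("udp", fmt.Sprintf("%s:%d", ip, port), time.Second)
	if err != nil {
		return "", err
	}
	defer conn.Close()
	if err := conn.SetDeadline(time.Now().Add(time.Second)); err != nil {
		return "", err
	}
	if _, err := conn.Write([]byte("hostName")); err != nil {
		return "", err
	}
	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		return "", err
	}
	return string(buf[:n]), nil // expected: the responding pod's hostname
}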
Mar 16 13:36:21.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:36:21.993: INFO: namespace pod-network-test-4211 deletion completed in 26.18918384s • [SLOW TEST:56.869 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:36:21.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 16 13:36:22.155: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1422,SelfLink:/api/v1/namespaces/watch-1422/configmaps/e2e-watch-test-watch-closed,UID:c02c11df-a194-478a-ae4b-3049704fe675,ResourceVersion:162899,Generation:0,CreationTimestamp:2020-03-16 13:36:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 16 13:36:22.155: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1422,SelfLink:/api/v1/namespaces/watch-1422/configmaps/e2e-watch-test-watch-closed,UID:c02c11df-a194-478a-ae4b-3049704fe675,ResourceVersion:162900,Generation:0,CreationTimestamp:2020-03-16 13:36:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 16 13:36:22.184: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1422,SelfLink:/api/v1/namespaces/watch-1422/configmaps/e2e-watch-test-watch-closed,UID:c02c11df-a194-478a-ae4b-3049704fe675,ResourceVersion:162901,Generation:0,CreationTimestamp:2020-03-16 13:36:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 16 13:36:22.184: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1422,SelfLink:/api/v1/namespaces/watch-1422/configmaps/e2e-watch-test-watch-closed,UID:c02c11df-a194-478a-ae4b-3049704fe675,ResourceVersion:162902,Generation:0,CreationTimestamp:2020-03-16 13:36:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:36:22.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1422" for this suite. Mar 16 13:36:28.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:36:28.374: INFO: namespace watch-1422 deletion completed in 6.172302866s • [SLOW TEST:6.380 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:36:28.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 16 13:36:28.723: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 6.027375ms)
Mar 16 13:36:28.726: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.15694ms)
Mar 16 13:36:28.730: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.296682ms)
Mar 16 13:36:28.733: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.469629ms)
Mar 16 13:36:28.736: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.218833ms)
Mar 16 13:36:28.740: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.123362ms)
Mar 16 13:36:28.742: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.860673ms)
Mar 16 13:36:28.745: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.022739ms)
Mar 16 13:36:28.749: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.044705ms)
Mar 16 13:36:28.774: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 25.954469ms)
Mar 16 13:36:28.777: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.842311ms)
Mar 16 13:36:28.780: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.837245ms)
Mar 16 13:36:28.783: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.728959ms)
Mar 16 13:36:28.786: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.83582ms)
Mar 16 13:36:28.788: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.58677ms)
Mar 16 13:36:28.791: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.842099ms)
Mar 16 13:36:28.794: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.763876ms)
Mar 16 13:36:28.797: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.872779ms)
Mar 16 13:36:28.801: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.543641ms)
Mar 16 13:36:28.804: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/
(200; 3.076346ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:36:28.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-969" for this suite. Mar 16 13:36:34.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:36:34.887: INFO: namespace proxy-969 deletion completed in 6.079835847s • [SLOW TEST:6.513 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:36:34.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0316 13:36:47.039203 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 16 13:36:47.039: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:36:47.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5101" for this suite. 
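The interesting mechanics in the garbage-collector test above are the second ownerReference added to half of the pods. A hedged sketch of that setup with kubectl; the RC names follow the log, while $POD is a placeholder for one of the pods created by the doomed RC, and the patch approach is my reconstruction rather than the framework's exact code:

    #!/bin/sh
    # Look up the UID of the RC that must keep its dependents alive.
    STAY_UID="$(kubectl get rc simpletest-rc-to-stay -o jsonpath='{.metadata.uid}')"
    # Append a second owner to a pod created by simpletest-rc-to-be-deleted.
    kubectl patch pod "$POD" --type=json -p '[
      {"op": "add", "path": "/metadata/ownerReferences/-",
       "value": {"apiVersion": "v1", "kind": "ReplicationController",
                 "name": "simpletest-rc-to-stay", "uid": "'"$STAY_UID"'"}}
    ]'

Once simpletest-rc-to-be-deleted is removed, the garbage collector may only delete pods whose sole owner is going away; pods that also reference the surviving RC must be left alone, which is exactly what the test asserts.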
Mar 16 13:36:57.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:36:57.814: INFO: namespace gc-5101 deletion completed in 10.718947225s • [SLOW TEST:22.927 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:36:57.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 16 13:36:59.491: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6958,SelfLink:/api/v1/namespaces/watch-6958/configmaps/e2e-watch-test-resource-version,UID:7a3a3248-f78e-4aac-a182-0d7bc4be6155,ResourceVersion:163166,Generation:0,CreationTimestamp:2020-03-16 13:36:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 16 13:36:59.491: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6958,SelfLink:/api/v1/namespaces/watch-6958/configmaps/e2e-watch-test-resource-version,UID:7a3a3248-f78e-4aac-a182-0d7bc4be6155,ResourceVersion:163167,Generation:0,CreationTimestamp:2020-03-16 13:36:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:36:59.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6958" for this suite. 
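The same watch-from-a-version behaviour can be observed with plain HTTP through kubectl proxy. A sketch under stated assumptions: the label selector and namespace are taken from the log, the resourceVersion 163165 is inferred from the RVs shown above (one less than the first replayed event), and the proxy port is illustrative:

    #!/bin/sh
    kubectl proxy --port=8001 &
    PROXY_PID=$!
    sleep 1
    # Watching from an old resourceVersion replays every change after it, so
    # the MODIFIED (mutation: 2) and DELETED events stream out in order.
    curl -Ns "http://127.0.0.1:8001/api/v1/namespaces/watch-6958/configmaps?watch=1&resourceVersion=163165&labelSelector=watch-this-configmap%3Dfrom-resource-version"
    kill "$PROXY_PID"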
Mar 16 13:37:06.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:37:06.444: INFO: namespace watch-6958 deletion completed in 6.716591387s • [SLOW TEST:8.630 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:37:06.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-7eea0030-26f8-471f-9191-db82dd939b4c STEP: Creating a pod to test consume configMaps Mar 16 13:37:06.697: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-522741a4-1b41-485e-8073-d22e7ffe8a65" in namespace "projected-8670" to be "success or failure" Mar 16 13:37:06.788: INFO: Pod "pod-projected-configmaps-522741a4-1b41-485e-8073-d22e7ffe8a65": Phase="Pending", Reason="", readiness=false. Elapsed: 90.89781ms Mar 16 13:37:08.829: INFO: Pod "pod-projected-configmaps-522741a4-1b41-485e-8073-d22e7ffe8a65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132331256s Mar 16 13:37:10.833: INFO: Pod "pod-projected-configmaps-522741a4-1b41-485e-8073-d22e7ffe8a65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13619335s Mar 16 13:37:12.837: INFO: Pod "pod-projected-configmaps-522741a4-1b41-485e-8073-d22e7ffe8a65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.140525396s STEP: Saw pod success Mar 16 13:37:12.837: INFO: Pod "pod-projected-configmaps-522741a4-1b41-485e-8073-d22e7ffe8a65" satisfied condition "success or failure" Mar 16 13:37:12.840: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-522741a4-1b41-485e-8073-d22e7ffe8a65 container projected-configmap-volume-test: STEP: delete the pod Mar 16 13:37:12.874: INFO: Waiting for pod pod-projected-configmaps-522741a4-1b41-485e-8073-d22e7ffe8a65 to disappear Mar 16 13:37:12.891: INFO: Pod pod-projected-configmaps-522741a4-1b41-485e-8073-d22e7ffe8a65 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:37:12.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8670" for this suite. 
Mar 16 13:37:20.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:37:21.009: INFO: namespace projected-8670 deletion completed in 8.115219838s • [SLOW TEST:14.564 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:37:21.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-4d4001e7-1123-4d8c-a163-00dd347651b5 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:37:30.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2602" for this suite. 
Mar 16 13:37:54.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:37:54.856: INFO: namespace configmap-2602 deletion completed in 24.185582154s • [SLOW TEST:33.847 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:37:54.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-e325bca1-2092-4b9b-825b-766ba96b4019 STEP: Creating a pod to test consume secrets Mar 16 13:37:55.191: INFO: Waiting up to 5m0s for pod "pod-secrets-e1e4268c-bf49-487a-b7e0-187ee00962b3" in namespace "secrets-1258" to be "success or failure" Mar 16 13:37:55.197: INFO: Pod "pod-secrets-e1e4268c-bf49-487a-b7e0-187ee00962b3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.719047ms Mar 16 13:37:57.201: INFO: Pod "pod-secrets-e1e4268c-bf49-487a-b7e0-187ee00962b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009520068s Mar 16 13:37:59.206: INFO: Pod "pod-secrets-e1e4268c-bf49-487a-b7e0-187ee00962b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01463972s Mar 16 13:38:01.506: INFO: Pod "pod-secrets-e1e4268c-bf49-487a-b7e0-187ee00962b3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.314946053s Mar 16 13:38:03.611: INFO: Pod "pod-secrets-e1e4268c-bf49-487a-b7e0-187ee00962b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.419793906s STEP: Saw pod success Mar 16 13:38:03.611: INFO: Pod "pod-secrets-e1e4268c-bf49-487a-b7e0-187ee00962b3" satisfied condition "success or failure" Mar 16 13:38:03.946: INFO: Trying to get logs from node iruya-worker pod pod-secrets-e1e4268c-bf49-487a-b7e0-187ee00962b3 container secret-volume-test: STEP: delete the pod Mar 16 13:38:03.964: INFO: Waiting for pod pod-secrets-e1e4268c-bf49-487a-b7e0-187ee00962b3 to disappear Mar 16 13:38:03.987: INFO: Pod pod-secrets-e1e4268c-bf49-487a-b7e0-187ee00962b3 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:38:03.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1258" for this suite. 
Mar 16 13:38:12.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:38:12.329: INFO: namespace secrets-1258 deletion completed in 8.265051267s • [SLOW TEST:17.472 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:38:12.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Mar 16 13:38:13.669: INFO: created pod pod-service-account-defaultsa Mar 16 13:38:13.669: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 16 13:38:13.695: INFO: created pod pod-service-account-mountsa Mar 16 13:38:13.695: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 16 13:38:13.890: INFO: created pod pod-service-account-nomountsa Mar 16 13:38:13.890: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 16 13:38:13.905: INFO: created pod pod-service-account-defaultsa-mountspec Mar 16 13:38:13.905: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 16 13:38:14.771: INFO: created pod pod-service-account-mountsa-mountspec Mar 16 13:38:14.771: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 16 13:38:15.338: INFO: created pod pod-service-account-nomountsa-mountspec Mar 16 13:38:15.338: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 16 13:38:15.531: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 16 13:38:15.531: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 16 13:38:15.794: INFO: created pod pod-service-account-mountsa-nomountspec Mar 16 13:38:15.794: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 16 13:38:15.885: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 16 13:38:15.885: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:38:15.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7316" for this suite. 
Mar 16 13:38:53.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:38:53.516: INFO: namespace svcaccounts-7316 deletion completed in 36.941064357s • [SLOW TEST:41.187 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:38:53.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8117.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8117.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 16 13:39:05.801: INFO: DNS probes using dns-8117/dns-test-239a5d9e-499c-4454-9f6d-6aea70c2f10f succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:39:05.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8117" for this suite. 
Mar 16 13:39:14.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:39:14.594: INFO: namespace dns-8117 deletion completed in 8.60440537s • [SLOW TEST:21.077 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:39:14.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:39:20.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-92" for this suite. Mar 16 13:39:26.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:39:26.438: INFO: namespace watch-92 deletion completed in 6.164592637s • [SLOW TEST:11.845 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:39:26.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 16 13:39:27.002: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cdc921c0-bf3d-484e-be2f-e069d0e26e40" in namespace "projected-5033" to be "success or failure" Mar 16 13:39:27.026: INFO: Pod 
"downwardapi-volume-cdc921c0-bf3d-484e-be2f-e069d0e26e40": Phase="Pending", Reason="", readiness=false. Elapsed: 23.82805ms Mar 16 13:39:29.215: INFO: Pod "downwardapi-volume-cdc921c0-bf3d-484e-be2f-e069d0e26e40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212483212s Mar 16 13:39:31.219: INFO: Pod "downwardapi-volume-cdc921c0-bf3d-484e-be2f-e069d0e26e40": Phase="Running", Reason="", readiness=true. Elapsed: 4.216702459s Mar 16 13:39:33.223: INFO: Pod "downwardapi-volume-cdc921c0-bf3d-484e-be2f-e069d0e26e40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.220844296s STEP: Saw pod success Mar 16 13:39:33.223: INFO: Pod "downwardapi-volume-cdc921c0-bf3d-484e-be2f-e069d0e26e40" satisfied condition "success or failure" Mar 16 13:39:33.226: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-cdc921c0-bf3d-484e-be2f-e069d0e26e40 container client-container: STEP: delete the pod Mar 16 13:39:33.288: INFO: Waiting for pod downwardapi-volume-cdc921c0-bf3d-484e-be2f-e069d0e26e40 to disappear Mar 16 13:39:33.372: INFO: Pod downwardapi-volume-cdc921c0-bf3d-484e-be2f-e069d0e26e40 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:39:33.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5033" for this suite. Mar 16 13:39:41.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:39:41.552: INFO: namespace projected-5033 deletion completed in 8.17626076s • [SLOW TEST:15.113 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:39:41.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-6d44feeb-b60b-41b4-abd2-2c494c2074c1 STEP: Creating a pod to test consume secrets Mar 16 13:39:41.965: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c2c4738f-8c5b-4ff4-be1b-f0b23af1719c" in namespace "projected-7681" to be "success or failure" Mar 16 13:39:41.995: INFO: Pod "pod-projected-secrets-c2c4738f-8c5b-4ff4-be1b-f0b23af1719c": Phase="Pending", Reason="", readiness=false. Elapsed: 29.524555ms Mar 16 13:39:43.999: INFO: Pod "pod-projected-secrets-c2c4738f-8c5b-4ff4-be1b-f0b23af1719c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.033285984s Mar 16 13:39:46.003: INFO: Pod "pod-projected-secrets-c2c4738f-8c5b-4ff4-be1b-f0b23af1719c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03738495s Mar 16 13:39:48.006: INFO: Pod "pod-projected-secrets-c2c4738f-8c5b-4ff4-be1b-f0b23af1719c": Phase="Running", Reason="", readiness=true. Elapsed: 6.040804479s Mar 16 13:39:50.209: INFO: Pod "pod-projected-secrets-c2c4738f-8c5b-4ff4-be1b-f0b23af1719c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.243774273s STEP: Saw pod success Mar 16 13:39:50.209: INFO: Pod "pod-projected-secrets-c2c4738f-8c5b-4ff4-be1b-f0b23af1719c" satisfied condition "success or failure" Mar 16 13:39:50.211: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-c2c4738f-8c5b-4ff4-be1b-f0b23af1719c container projected-secret-volume-test: STEP: delete the pod Mar 16 13:39:50.415: INFO: Waiting for pod pod-projected-secrets-c2c4738f-8c5b-4ff4-be1b-f0b23af1719c to disappear Mar 16 13:39:50.730: INFO: Pod pod-projected-secrets-c2c4738f-8c5b-4ff4-be1b-f0b23af1719c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:39:50.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7681" for this suite. Mar 16 13:39:57.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:39:57.165: INFO: namespace projected-7681 deletion completed in 6.348742359s • [SLOW TEST:15.613 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:39:57.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 16 13:39:57.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 16 13:39:57.672: INFO: stderr: "" Mar 16 13:39:57.672: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.10\", GitCommit:\"1bea6c00a7055edef03f1d4bb58b773fa8917f11\", GitTreeState:\"clean\", BuildDate:\"2020-03-09T11:07:06Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", 
GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:39:57.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9456" for this suite. Mar 16 13:40:03.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:40:03.792: INFO: namespace kubectl-9456 deletion completed in 6.115573848s • [SLOW TEST:6.626 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:40:03.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 16 13:40:03.880: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.297091ms)
Mar 16 13:40:03.883: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.362103ms)
Mar 16 13:40:03.889: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 6.115829ms)
Mar 16 13:40:03.892: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.537252ms)
Mar 16 13:40:03.894: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.219634ms)
Mar 16 13:40:03.896: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.193941ms)
Mar 16 13:40:03.898: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.101386ms)
Mar 16 13:40:03.900: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 1.942496ms)
Mar 16 13:40:03.902: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.054877ms)
Mar 16 13:40:03.905: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.173448ms)
Mar 16 13:40:03.907: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.287981ms)
Mar 16 13:40:03.909: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.195516ms)
Mar 16 13:40:03.911: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.356896ms)
Mar 16 13:40:03.914: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.569679ms)
Mar 16 13:40:03.917: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.706593ms)
Mar 16 13:40:03.919: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.579725ms)
Mar 16 13:40:03.922: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.605821ms)
Mar 16 13:40:03.925: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.914791ms)
Mar 16 13:40:03.928: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.769307ms)
Mar 16 13:40:03.931: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/
(200; 2.936638ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:40:03.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5315" for this suite. Mar 16 13:40:10.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:40:10.094: INFO: namespace proxy-5315 deletion completed in 6.123465314s • [SLOW TEST:6.301 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:40:10.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-c23938d3-c582-43f3-9a8e-f54f1f20b2cd STEP: Creating a pod to test consume configMaps Mar 16 13:40:10.280: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1b39680c-947d-4f4f-9253-7c056b5d3780" in namespace "projected-2468" to be "success or failure" Mar 16 13:40:10.377: INFO: Pod "pod-projected-configmaps-1b39680c-947d-4f4f-9253-7c056b5d3780": Phase="Pending", Reason="", readiness=false. Elapsed: 96.387089ms Mar 16 13:40:12.381: INFO: Pod "pod-projected-configmaps-1b39680c-947d-4f4f-9253-7c056b5d3780": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100590019s Mar 16 13:40:14.460: INFO: Pod "pod-projected-configmaps-1b39680c-947d-4f4f-9253-7c056b5d3780": Phase="Pending", Reason="", readiness=false. Elapsed: 4.179785954s Mar 16 13:40:16.465: INFO: Pod "pod-projected-configmaps-1b39680c-947d-4f4f-9253-7c056b5d3780": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.184256629s STEP: Saw pod success Mar 16 13:40:16.465: INFO: Pod "pod-projected-configmaps-1b39680c-947d-4f4f-9253-7c056b5d3780" satisfied condition "success or failure" Mar 16 13:40:16.468: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-1b39680c-947d-4f4f-9253-7c056b5d3780 container projected-configmap-volume-test: STEP: delete the pod Mar 16 13:40:16.494: INFO: Waiting for pod pod-projected-configmaps-1b39680c-947d-4f4f-9253-7c056b5d3780 to disappear Mar 16 13:40:16.505: INFO: Pod pod-projected-configmaps-1b39680c-947d-4f4f-9253-7c056b5d3780 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:40:16.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2468" for this suite. Mar 16 13:40:22.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:40:22.693: INFO: namespace projected-2468 deletion completed in 6.185828268s • [SLOW TEST:12.600 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:40:22.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 16 13:40:22.910: INFO: Waiting up to 5m0s for pod "pod-0ee8410f-75ef-4bbb-97b2-cc6f25d4d0f5" in namespace "emptydir-8116" to be "success or failure" Mar 16 13:40:22.988: INFO: Pod "pod-0ee8410f-75ef-4bbb-97b2-cc6f25d4d0f5": Phase="Pending", Reason="", readiness=false. Elapsed: 77.496399ms Mar 16 13:40:25.186: INFO: Pod "pod-0ee8410f-75ef-4bbb-97b2-cc6f25d4d0f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.275525151s Mar 16 13:40:27.190: INFO: Pod "pod-0ee8410f-75ef-4bbb-97b2-cc6f25d4d0f5": Phase="Running", Reason="", readiness=true. Elapsed: 4.279895462s Mar 16 13:40:29.194: INFO: Pod "pod-0ee8410f-75ef-4bbb-97b2-cc6f25d4d0f5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.284090909s STEP: Saw pod success Mar 16 13:40:29.194: INFO: Pod "pod-0ee8410f-75ef-4bbb-97b2-cc6f25d4d0f5" satisfied condition "success or failure" Mar 16 13:40:29.197: INFO: Trying to get logs from node iruya-worker2 pod pod-0ee8410f-75ef-4bbb-97b2-cc6f25d4d0f5 container test-container: STEP: delete the pod Mar 16 13:40:29.380: INFO: Waiting for pod pod-0ee8410f-75ef-4bbb-97b2-cc6f25d4d0f5 to disappear Mar 16 13:40:29.409: INFO: Pod pod-0ee8410f-75ef-4bbb-97b2-cc6f25d4d0f5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:40:29.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8116" for this suite. Mar 16 13:40:37.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:40:37.638: INFO: namespace emptydir-8116 deletion completed in 8.223204082s • [SLOW TEST:14.943 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:40:37.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 16 13:40:38.554: INFO: Waiting up to 5m0s for pod "pod-87058adc-9eed-4751-bd7e-375773765997" in namespace "emptydir-7102" to be "success or failure" Mar 16 13:40:38.953: INFO: Pod "pod-87058adc-9eed-4751-bd7e-375773765997": Phase="Pending", Reason="", readiness=false. Elapsed: 399.295396ms Mar 16 13:40:41.060: INFO: Pod "pod-87058adc-9eed-4751-bd7e-375773765997": Phase="Pending", Reason="", readiness=false. Elapsed: 2.506173488s Mar 16 13:40:43.064: INFO: Pod "pod-87058adc-9eed-4751-bd7e-375773765997": Phase="Pending", Reason="", readiness=false. Elapsed: 4.510010953s Mar 16 13:40:45.068: INFO: Pod "pod-87058adc-9eed-4751-bd7e-375773765997": Phase="Pending", Reason="", readiness=false. Elapsed: 6.513929835s Mar 16 13:40:47.179: INFO: Pod "pod-87058adc-9eed-4751-bd7e-375773765997": Phase="Running", Reason="", readiness=true. Elapsed: 8.625792063s Mar 16 13:40:49.183: INFO: Pod "pod-87058adc-9eed-4751-bd7e-375773765997": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.629840717s STEP: Saw pod success Mar 16 13:40:49.183: INFO: Pod "pod-87058adc-9eed-4751-bd7e-375773765997" satisfied condition "success or failure" Mar 16 13:40:49.186: INFO: Trying to get logs from node iruya-worker2 pod pod-87058adc-9eed-4751-bd7e-375773765997 container test-container: STEP: delete the pod Mar 16 13:40:49.213: INFO: Waiting for pod pod-87058adc-9eed-4751-bd7e-375773765997 to disappear Mar 16 13:40:49.275: INFO: Pod pod-87058adc-9eed-4751-bd7e-375773765997 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:40:49.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7102" for this suite. Mar 16 13:40:57.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:40:57.438: INFO: namespace emptydir-7102 deletion completed in 8.15976912s • [SLOW TEST:19.800 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:40:57.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 16 13:40:58.152: INFO: Waiting up to 5m0s for pod "pod-fdf4198e-3259-42e2-bde5-3682b62ad865" in namespace "emptydir-6386" to be "success or failure" Mar 16 13:40:58.207: INFO: Pod "pod-fdf4198e-3259-42e2-bde5-3682b62ad865": Phase="Pending", Reason="", readiness=false. Elapsed: 55.156228ms Mar 16 13:41:00.210: INFO: Pod "pod-fdf4198e-3259-42e2-bde5-3682b62ad865": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058460048s Mar 16 13:41:02.215: INFO: Pod "pod-fdf4198e-3259-42e2-bde5-3682b62ad865": Phase="Running", Reason="", readiness=true. Elapsed: 4.062755343s Mar 16 13:41:04.384: INFO: Pod "pod-fdf4198e-3259-42e2-bde5-3682b62ad865": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.231924414s STEP: Saw pod success Mar 16 13:41:04.384: INFO: Pod "pod-fdf4198e-3259-42e2-bde5-3682b62ad865" satisfied condition "success or failure" Mar 16 13:41:04.386: INFO: Trying to get logs from node iruya-worker2 pod pod-fdf4198e-3259-42e2-bde5-3682b62ad865 container test-container: STEP: delete the pod Mar 16 13:41:04.537: INFO: Waiting for pod pod-fdf4198e-3259-42e2-bde5-3682b62ad865 to disappear Mar 16 13:41:04.566: INFO: Pod pod-fdf4198e-3259-42e2-bde5-3682b62ad865 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:41:04.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6386" for this suite. Mar 16 13:41:10.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:41:10.666: INFO: namespace emptydir-6386 deletion completed in 6.096595671s • [SLOW TEST:13.227 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:41:10.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 16 13:41:10.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7428' Mar 16 13:41:16.036: INFO: stderr: "" Mar 16 13:41:16.036: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Mar 16 13:41:16.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-7428' Mar 16 13:41:22.169: INFO: stderr: "" Mar 16 13:41:22.169: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:41:22.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7428" for this suite. 
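For context on the kubectl step above: with --restart=Never the run-pod/v1 generator emits a bare Pod rather than a Deployment. A minimal Go sketch of that object using the k8s.io/api types this suite is built on; the pod name and image mirror the log, everything else is illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The bare Pod that `kubectl run --restart=Never` creates: no controller
	// owns it, and RestartPolicy Never means the kubelet will not restart
	// the container when it exits.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "e2e-test-nginx-pod",
				Image: "docker.io/library/nginx:1.14-alpine",
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ") // print the wire form; no cluster needed
	fmt.Println(string(b))
}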
Mar 16 13:41:30.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:41:30.312: INFO: namespace kubectl-7428 deletion completed in 8.126712042s • [SLOW TEST:19.646 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:41:30.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-458e2bb0-c5a0-40cc-b755-cb626a0ab095 in namespace container-probe-1837 Mar 16 13:41:36.424: INFO: Started pod busybox-458e2bb0-c5a0-40cc-b755-cb626a0ab095 in namespace container-probe-1837 STEP: checking the pod's current state and verifying that restartCount is present Mar 16 13:41:36.426: INFO: Initial restart count of pod busybox-458e2bb0-c5a0-40cc-b755-cb626a0ab095 is 0 Mar 16 13:42:22.609: INFO: Restart count of pod container-probe-1837/busybox-458e2bb0-c5a0-40cc-b755-cb626a0ab095 is now 1 (46.183389249s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:42:22.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1837" for this suite. 
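The restart observed above (restartCount going from 0 to 1 after ~46s) is driven by an exec probe whose command fails once the health file disappears. A hedged sketch of a pod with that shape; the image and sleep timings are illustrative, not the exact e2e spec, and note that the probe's handler field is named Handler in the v1.15-era API of this run (later client versions rename it ProbeHandler).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// Create the health file, remove it later: the probe passes
				// at first, then starts failing, and the kubelet restarts
				// the container.
				Command: []string{"/bin/sh", "-c",
					"touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    1, // a single failed `cat` triggers the restart
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}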
Mar 16 13:42:28.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:42:28.744: INFO: namespace container-probe-1837 deletion completed in 6.091810135s • [SLOW TEST:58.431 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:42:28.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:42:55.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5488" for this suite. Mar 16 13:43:01.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:43:01.129: INFO: namespace namespaces-5488 deletion completed in 6.091371419s STEP: Destroying namespace "nsdeletetest-4481" for this suite. Mar 16 13:43:01.132: INFO: Namespace nsdeletetest-4481 was already deleted STEP: Destroying namespace "nsdeletetest-9900" for this suite. 
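What the steps above exercise is the namespace controller's cascade: deleting a Namespace finalizes and removes everything inside it before the Namespace object itself disappears. A hedged client-go sketch of the create/delete cycle, written against the context-free v1.15-era method signatures matching this run (newer client-go adds a ctx argument and options structs); the kubeconfig path mirrors the log.

package main

import (
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Create a throwaway namespace and a pod inside it.
	ns, err := cs.CoreV1().Namespaces().Create(&corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "nsdeletetest-"},
	})
	if err != nil {
		log.Fatal(err)
	}
	_, err = cs.CoreV1().Pods(ns.Name).Create(&corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name: "nginx", Image: "docker.io/library/nginx:1.14-alpine",
		}}},
	})
	if err != nil {
		log.Fatal(err)
	}
	// Deleting the namespace removes the pod with it; the namespace
	// controller finalizes the contents before the Namespace goes away.
	if err := cs.CoreV1().Namespaces().Delete(ns.Name, nil); err != nil {
		log.Fatal(err)
	}
	fmt.Println("deleted namespace", ns.Name)
}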
Mar 16 13:43:07.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:43:07.215: INFO: namespace nsdeletetest-9900 deletion completed in 6.083111528s • [SLOW TEST:38.471 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:43:07.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 16 13:43:07.337: INFO: Waiting up to 5m0s for pod "downwardapi-volume-495d567b-ed7e-43d8-962f-7aefccefa0e1" in namespace "downward-api-6521" to be "success or failure" Mar 16 13:43:07.346: INFO: Pod "downwardapi-volume-495d567b-ed7e-43d8-962f-7aefccefa0e1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.673145ms Mar 16 13:43:09.385: INFO: Pod "downwardapi-volume-495d567b-ed7e-43d8-962f-7aefccefa0e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048066073s Mar 16 13:43:11.390: INFO: Pod "downwardapi-volume-495d567b-ed7e-43d8-962f-7aefccefa0e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052523865s STEP: Saw pod success Mar 16 13:43:11.390: INFO: Pod "downwardapi-volume-495d567b-ed7e-43d8-962f-7aefccefa0e1" satisfied condition "success or failure" Mar 16 13:43:11.393: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-495d567b-ed7e-43d8-962f-7aefccefa0e1 container client-container: STEP: delete the pod Mar 16 13:43:11.415: INFO: Waiting for pod downwardapi-volume-495d567b-ed7e-43d8-962f-7aefccefa0e1 to disappear Mar 16 13:43:11.453: INFO: Pod downwardapi-volume-495d567b-ed7e-43d8-962f-7aefccefa0e1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:43:11.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6521" for this suite. 
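The point of this test is the fallback: the downwardAPI volume item asks for limits.cpu, and because the container declares no CPU limit the kubelet projects the node's allocatable CPU into the file instead. A sketch with illustrative names (the container name must match the ResourceFieldRef's ContainerName).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				// No Resources.Limits set: limits.cpu falls back to the
				// node's allocatable CPU, which is what the test asserts.
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}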
Mar 16 13:43:17.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:43:17.558: INFO: namespace downward-api-6521 deletion completed in 6.101214187s • [SLOW TEST:10.343 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:43:17.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 16 13:43:17.676: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 16 13:43:22.681: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 16 13:43:22.681: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 16 13:43:22.707: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-1843,SelfLink:/apis/apps/v1/namespaces/deployment-1843/deployments/test-cleanup-deployment,UID:f900fad4-ad25-4d65-8063-f53d5efca979,ResourceVersion:164530,Generation:1,CreationTimestamp:2020-03-16 13:43:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false 
false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Mar 16 13:43:22.713: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-1843,SelfLink:/apis/apps/v1/namespaces/deployment-1843/replicasets/test-cleanup-deployment-55bbcbc84c,UID:8c4b84ad-33cd-4161-8257-186ee9e2e01a,ResourceVersion:164532,Generation:1,CreationTimestamp:2020-03-16 13:43:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment f900fad4-ad25-4d65-8063-f53d5efca979 0xc0019e1647 0xc0019e1648}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 16 13:43:22.713: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 16 13:43:22.713: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-1843,SelfLink:/apis/apps/v1/namespaces/deployment-1843/replicasets/test-cleanup-controller,UID:f96107b3-cbf0-47e1-b8a0-e3a39afa410b,ResourceVersion:164531,Generation:1,CreationTimestamp:2020-03-16 13:43:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment f900fad4-ad25-4d65-8063-f53d5efca979 0xc0019e1557 0xc0019e1558}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 16 13:43:22.730: INFO: Pod "test-cleanup-controller-fq8x7" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-fq8x7,GenerateName:test-cleanup-controller-,Namespace:deployment-1843,SelfLink:/api/v1/namespaces/deployment-1843/pods/test-cleanup-controller-fq8x7,UID:bfc1ef68-d4c2-4bd0-b6ba-4b1519ea0005,ResourceVersion:164524,Generation:0,CreationTimestamp:2020-03-16 13:43:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller f96107b3-cbf0-47e1-b8a0-e3a39afa410b 0xc002ed6287 0xc002ed6288}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pxvxv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pxvxv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pxvxv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ed6300} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ed6320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:43:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:43:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:43:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:43:17 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.192,StartTime:2020-03-16 13:43:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-16 13:43:19 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5c9e60fb68fe77a35a9f311731118e405b345fb9c7889ef428d9f74e2d6eb335}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 13:43:22.730: INFO: Pod "test-cleanup-deployment-55bbcbc84c-n78bp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-n78bp,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-1843,SelfLink:/api/v1/namespaces/deployment-1843/pods/test-cleanup-deployment-55bbcbc84c-n78bp,UID:d28cb61b-73dc-4017-aad1-9e0ec955cc4f,ResourceVersion:164536,Generation:0,CreationTimestamp:2020-03-16 13:43:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 
55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 8c4b84ad-33cd-4161-8257-186ee9e2e01a 0xc002ed63f7 0xc002ed63f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pxvxv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pxvxv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-pxvxv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ed6470} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ed6490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:43:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:43:22.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1843" for this suite. 
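The object dumps above show the mechanism under test directly: the Deployment carries RevisionHistoryLimit:*0, so once the rollover from the nginx-backed replica set to the redis-backed one completes, the controller deletes the superseded ReplicaSet instead of retaining it for rollback. A minimal sketch of a deployment with that setting; the name, labels, and image mirror the log, the rest is illustrative.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	history := int32(0) // keep zero old ReplicaSets: superseded ones are deleted
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             &replicas,
			RevisionHistoryLimit: &history,
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"name": "cleanup-pod"},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "cleanup-pod"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(dep, "", "  ")
	fmt.Println(string(b))
}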
Mar 16 13:43:28.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:43:28.897: INFO: namespace deployment-1843 deletion completed in 6.110549281s • [SLOW TEST:11.338 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:43:28.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0316 13:43:38.979916 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 16 13:43:38.980: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:43:38.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4971" for this suite. 
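A hedged sketch of the deletion mode this GC test relies on: the pods carry an ownerReference to their ReplicationController, so deleting the RC with a non-orphaning propagation policy lets the garbage collector remove them. Only the options struct is built here; no cluster is needed to see its wire form.

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Background propagation deletes the owner immediately and lets the GC
	// remove dependents afterwards. (Foreground would hold the owner until
	// dependents are gone; Orphan would keep the pods, which is exactly
	// what this test asserts does NOT happen.)
	policy := metav1.DeletePropagationBackground
	opts := &metav1.DeleteOptions{PropagationPolicy: &policy}
	b, _ := json.MarshalIndent(opts, "", "  ")
	fmt.Println(string(b))
}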
Mar 16 13:43:45.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:43:45.235: INFO: namespace gc-4971 deletion completed in 6.251461794s • [SLOW TEST:16.338 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:43:45.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 16 13:43:45.355: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:43:54.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7155" for this suite. 
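For orientation, the pod shape this init-container test submits: init containers run sequentially, each to completion, before the app container starts, and with RestartPolicy Never a failing init container fails the whole pod. Images and commands below are illustrative, not the exact e2e spec.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Init containers run one at a time, in order, before any
			// regular container starts.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox", Command: []string{"/bin/true"}},
			},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}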
Mar 16 13:44:00.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:44:00.963: INFO: namespace init-container-7155 deletion completed in 6.096074396s • [SLOW TEST:15.728 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:44:00.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 16 13:44:01.029: INFO: Waiting up to 5m0s for pod "pod-6fc98a9f-7c37-4bbe-b9da-a13174a31130" in namespace "emptydir-8103" to be "success or failure" Mar 16 13:44:01.033: INFO: Pod "pod-6fc98a9f-7c37-4bbe-b9da-a13174a31130": Phase="Pending", Reason="", readiness=false. Elapsed: 3.776333ms Mar 16 13:44:03.037: INFO: Pod "pod-6fc98a9f-7c37-4bbe-b9da-a13174a31130": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007822516s Mar 16 13:44:05.041: INFO: Pod "pod-6fc98a9f-7c37-4bbe-b9da-a13174a31130": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011822244s STEP: Saw pod success Mar 16 13:44:05.041: INFO: Pod "pod-6fc98a9f-7c37-4bbe-b9da-a13174a31130" satisfied condition "success or failure" Mar 16 13:44:05.044: INFO: Trying to get logs from node iruya-worker2 pod pod-6fc98a9f-7c37-4bbe-b9da-a13174a31130 container test-container: STEP: delete the pod Mar 16 13:44:05.099: INFO: Waiting for pod pod-6fc98a9f-7c37-4bbe-b9da-a13174a31130 to disappear Mar 16 13:44:05.107: INFO: Pod pod-6fc98a9f-7c37-4bbe-b9da-a13174a31130 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:44:05.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8103" for this suite. 
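All of the emptyDir permutations in this run share one pod shape; only the storage medium, the user, and the file mode vary. A sketch assuming a plain busybox image and shell command (the real tests use the e2e mounttest image): an empty Medium backs the volume with node storage, while StorageMediumMemory selects tmpfs.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Write a file with the permission bits under test, then
				// list the mount so the result shows up in the pod logs.
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// StorageMediumDefault ("") uses node storage;
					// corev1.StorageMediumMemory selects tmpfs instead.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}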
Mar 16 13:44:11.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:44:11.225: INFO: namespace emptydir-8103 deletion completed in 6.113780467s • [SLOW TEST:10.261 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:44:11.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 16 13:44:19.403: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 13:44:19.408: INFO: Pod pod-with-poststart-exec-hook still exists Mar 16 13:44:21.408: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 13:44:21.412: INFO: Pod pod-with-poststart-exec-hook still exists Mar 16 13:44:23.408: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 13:44:23.413: INFO: Pod pod-with-poststart-exec-hook still exists Mar 16 13:44:25.408: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 13:44:25.413: INFO: Pod pod-with-poststart-exec-hook still exists Mar 16 13:44:27.408: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 13:44:27.411: INFO: Pod pod-with-poststart-exec-hook still exists Mar 16 13:44:29.408: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 13:44:29.413: INFO: Pod pod-with-poststart-exec-hook still exists Mar 16 13:44:31.408: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 13:44:31.412: INFO: Pod pod-with-poststart-exec-hook still exists Mar 16 13:44:33.408: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 13:44:33.413: INFO: Pod pod-with-poststart-exec-hook still exists Mar 16 13:44:35.408: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 13:44:35.412: INFO: Pod pod-with-poststart-exec-hook still exists Mar 16 13:44:37.408: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 13:44:37.412: INFO: Pod pod-with-poststart-exec-hook still exists Mar 16 13:44:39.408: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 13:44:39.412: INFO: Pod pod-with-poststart-exec-hook still exists Mar 16 13:44:41.408: INFO: Waiting for pod 
pod-with-poststart-exec-hook to disappear Mar 16 13:44:41.413: INFO: Pod pod-with-poststart-exec-hook still exists Mar 16 13:44:43.408: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 13:44:43.412: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:44:43.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2508" for this suite. Mar 16 13:45:05.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:45:05.514: INFO: namespace container-lifecycle-hook-2508 deletion completed in 22.09754198s • [SLOW TEST:54.288 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:45:05.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-4debd6b6-4516-4531-af0c-60e519e1da4d [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:45:05.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8116" for this suite. 
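The object behind the ConfigMap step above, sketched: construction succeeds client-side, and the test's point is that the API server's validation rejects the empty key at create time. The name is illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
		// "" is not a valid ConfigMap key; the apiserver returns an
		// Invalid error on create, which is exactly what the test expects.
		Data: map[string]string{"": "value-1"},
	}
	b, _ := json.MarshalIndent(cm, "", "  ")
	fmt.Println(string(b))
}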
Mar 16 13:45:11.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:45:11.654: INFO: namespace configmap-8116 deletion completed in 6.090081492s • [SLOW TEST:6.140 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:45:11.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Mar 16 13:45:11.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1821' Mar 16 13:45:12.120: INFO: stderr: "" Mar 16 13:45:12.120: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Mar 16 13:45:13.125: INFO: Selector matched 1 pods for map[app:redis] Mar 16 13:45:13.125: INFO: Found 0 / 1 Mar 16 13:45:14.126: INFO: Selector matched 1 pods for map[app:redis] Mar 16 13:45:14.126: INFO: Found 0 / 1 Mar 16 13:45:15.210: INFO: Selector matched 1 pods for map[app:redis] Mar 16 13:45:15.210: INFO: Found 0 / 1 Mar 16 13:45:16.126: INFO: Selector matched 1 pods for map[app:redis] Mar 16 13:45:16.126: INFO: Found 0 / 1 Mar 16 13:45:17.126: INFO: Selector matched 1 pods for map[app:redis] Mar 16 13:45:17.126: INFO: Found 1 / 1 Mar 16 13:45:17.126: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 16 13:45:17.129: INFO: Selector matched 1 pods for map[app:redis] Mar 16 13:45:17.129: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Mar 16 13:45:17.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-hbz4n redis-master --namespace=kubectl-1821' Mar 16 13:45:17.232: INFO: stderr: "" Mar 16 13:45:17.232: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 16 Mar 13:45:16.200 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 16 Mar 13:45:16.200 # Server started, Redis version 3.2.12\n1:M 16 Mar 13:45:16.200 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 16 Mar 13:45:16.200 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Mar 16 13:45:17.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-hbz4n redis-master --namespace=kubectl-1821 --tail=1' Mar 16 13:45:17.350: INFO: stderr: "" Mar 16 13:45:17.350: INFO: stdout: "1:M 16 Mar 13:45:16.200 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Mar 16 13:45:17.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-hbz4n redis-master --namespace=kubectl-1821 --limit-bytes=1' Mar 16 13:45:17.447: INFO: stderr: "" Mar 16 13:45:17.447: INFO: stdout: " " STEP: exposing timestamps Mar 16 13:45:17.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-hbz4n redis-master --namespace=kubectl-1821 --tail=1 --timestamps' Mar 16 13:45:17.571: INFO: stderr: "" Mar 16 13:45:17.571: INFO: stdout: "2020-03-16T13:45:16.200580152Z 1:M 16 Mar 13:45:16.200 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Mar 16 13:45:20.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-hbz4n redis-master --namespace=kubectl-1821 --since=1s' Mar 16 13:45:20.170: INFO: stderr: "" Mar 16 13:45:20.170: INFO: stdout: "" Mar 16 13:45:20.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-hbz4n redis-master --namespace=kubectl-1821 --since=24h' Mar 16 13:45:20.275: INFO: stderr: "" Mar 16 13:45:20.275: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 16 Mar 13:45:16.200 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 16 Mar 13:45:16.200 # Server started, Redis version 3.2.12\n1:M 16 Mar 13:45:16.200 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 16 Mar 13:45:16.200 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Mar 16 13:45:20.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1821' Mar 16 13:45:20.399: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 16 13:45:20.399: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Mar 16 13:45:20.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-1821' Mar 16 13:45:20.494: INFO: stderr: "No resources found.\n" Mar 16 13:45:20.494: INFO: stdout: "" Mar 16 13:45:20.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-1821 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 16 13:45:20.601: INFO: stderr: "" Mar 16 13:45:20.601: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:45:20.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1821" for this suite. 
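The four kubectl flags exercised above (--tail, --limit-bytes, --timestamps, --since) map onto fields of the PodLogOptions API object, so sketching that object shows what the CLI actually sends to the apiserver. The values mirror the commands in the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	tail := int64(1)      // --tail=1
	limit := int64(1)     // --limit-bytes=1
	since := int64(86400) // --since=24h
	opts := &corev1.PodLogOptions{
		Container:    "redis-master",
		TailLines:    &tail,
		LimitBytes:   &limit,
		Timestamps:   true, // --timestamps
		SinceSeconds: &since,
	}
	b, _ := json.MarshalIndent(opts, "", "  ")
	fmt.Println(string(b))
}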
Mar 16 13:45:42.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:45:42.692: INFO: namespace kubectl-1821 deletion completed in 22.087535982s • [SLOW TEST:31.038 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:45:42.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 16 13:45:42.820: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7e7f6491-5a71-401c-adfd-dfe2cc6871a8" in namespace "downward-api-4425" to be "success or failure" Mar 16 13:45:42.879: INFO: Pod "downwardapi-volume-7e7f6491-5a71-401c-adfd-dfe2cc6871a8": Phase="Pending", Reason="", readiness=false. Elapsed: 58.725678ms Mar 16 13:45:44.883: INFO: Pod "downwardapi-volume-7e7f6491-5a71-401c-adfd-dfe2cc6871a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062872491s Mar 16 13:45:46.887: INFO: Pod "downwardapi-volume-7e7f6491-5a71-401c-adfd-dfe2cc6871a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067156115s STEP: Saw pod success Mar 16 13:45:46.887: INFO: Pod "downwardapi-volume-7e7f6491-5a71-401c-adfd-dfe2cc6871a8" satisfied condition "success or failure" Mar 16 13:45:46.890: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-7e7f6491-5a71-401c-adfd-dfe2cc6871a8 container client-container: STEP: delete the pod Mar 16 13:45:46.925: INFO: Waiting for pod downwardapi-volume-7e7f6491-5a71-401c-adfd-dfe2cc6871a8 to disappear Mar 16 13:45:46.943: INFO: Pod downwardapi-volume-7e7f6491-5a71-401c-adfd-dfe2cc6871a8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:45:46.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4425" for this suite. 
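Companion to the earlier downward API sketch: this test projects metadata.name through fieldRef rather than resourceFieldRef. Only the volume source differs, so just that piece is built and printed here; the file name mirrors the test's intent.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	src := corev1.VolumeSource{
		DownwardAPI: &corev1.DownwardAPIVolumeSource{
			Items: []corev1.DownwardAPIVolumeFile{{
				Path: "podname",
				// fieldRef reads pod metadata; the kubelet writes the pod's
				// own name into the mounted file.
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
			}},
		},
	}
	b, _ := json.MarshalIndent(src, "", "  ")
	fmt.Println(string(b))
}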
Mar 16 13:45:52.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:45:53.038: INFO: namespace downward-api-4425 deletion completed in 6.091560075s • [SLOW TEST:10.345 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:45:53.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 16 13:45:56.110: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:45:56.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8279" for this suite. 
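A sketch of the container shape behind this case, with an illustrative image and command: under FallbackToLogsOnError the log tail substitutes for the termination message only when the container fails without writing one. This container succeeds and writes nothing, so the message stays empty, which is the `Expected: &{} to match` assertion above.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "termination-message-container",
		Image:   "busybox",
		Command: []string{"/bin/true"}, // exits 0 and writes no message
		// Defaults shown explicitly: the kubelet reads this path, and the
		// fallback to logs applies only on a failed, message-less exit.
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}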
Mar 16 13:46:02.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:46:02.824: INFO: namespace container-runtime-8279 deletion completed in 6.283423246s • [SLOW TEST:9.786 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:46:02.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-c5fd6412-8331-4236-96a7-a5ae2ee0a0f1 STEP: Creating a pod to test consume secrets Mar 16 13:46:03.421: INFO: Waiting up to 5m0s for pod "pod-secrets-04fd4d4a-4d91-42cf-a48e-0476e3c1a614" in namespace "secrets-8319" to be "success or failure" Mar 16 13:46:03.432: INFO: Pod "pod-secrets-04fd4d4a-4d91-42cf-a48e-0476e3c1a614": Phase="Pending", Reason="", readiness=false. Elapsed: 11.266314ms Mar 16 13:46:05.436: INFO: Pod "pod-secrets-04fd4d4a-4d91-42cf-a48e-0476e3c1a614": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015300491s Mar 16 13:46:07.439: INFO: Pod "pod-secrets-04fd4d4a-4d91-42cf-a48e-0476e3c1a614": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018604528s STEP: Saw pod success Mar 16 13:46:07.439: INFO: Pod "pod-secrets-04fd4d4a-4d91-42cf-a48e-0476e3c1a614" satisfied condition "success or failure" Mar 16 13:46:07.442: INFO: Trying to get logs from node iruya-worker pod pod-secrets-04fd4d4a-4d91-42cf-a48e-0476e3c1a614 container secret-volume-test: STEP: delete the pod Mar 16 13:46:07.482: INFO: Waiting for pod pod-secrets-04fd4d4a-4d91-42cf-a48e-0476e3c1a614 to disappear Mar 16 13:46:07.494: INFO: Pod pod-secrets-04fd4d4a-4d91-42cf-a48e-0476e3c1a614 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:46:07.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8319" for this suite. 
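The second namespace torn down just after this is the other half of the test: it plants an identically named Secret in a separate namespace and shows the pod's volume still resolves the Secret in its own namespace, because secret volume references are namespace-local. A sketch with illustrative names and values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The Secret in the pod's own namespace; a same-named Secret in another
	// namespace is invisible to the volume reference below.
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Namespace: "secrets-demo", Name: "secret-test"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Namespace: "secrets-demo", Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					// SecretName resolves only within the pod's namespace.
					Secret: &corev1.SecretVolumeSource{SecretName: secret.Name},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}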
Mar 16 13:46:13.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:46:13.588: INFO: namespace secrets-8319 deletion completed in 6.089769916s STEP: Destroying namespace "secret-namespace-5269" for this suite. Mar 16 13:46:19.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:46:19.688: INFO: namespace secret-namespace-5269 deletion completed in 6.100218815s • [SLOW TEST:16.863 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:46:19.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 16 13:46:20.454: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 16 13:46:25.459: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 16 13:46:25.459: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 16 13:46:27.463: INFO: Creating deployment "test-rollover-deployment" Mar 16 13:46:27.495: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 16 13:46:29.523: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 16 13:46:29.574: INFO: Ensure that both replica sets have 1 created replica Mar 16 13:46:29.578: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 16 13:46:29.583: INFO: Updating deployment test-rollover-deployment Mar 16 13:46:29.583: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 16 13:46:31.768: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 16 13:46:31.773: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 16 13:46:31.935: INFO: all replica sets need to contain the pod-template-hash label Mar 16 13:46:31.935: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963187, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963187, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963190, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963187, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:46:33.945: INFO: all replica sets need to contain the pod-template-hash label Mar 16 13:46:33.945: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963187, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963187, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963190, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963187, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:46:36.049: INFO: all replica sets need to contain the pod-template-hash label Mar 16 13:46:36.049: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963187, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963187, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963194, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963187, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:46:37.945: INFO: all replica sets need to contain the pod-template-hash label Mar 16 13:46:37.945: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963187, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963187, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963194, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963187, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 
13:46:40.263: INFO: all replica sets need to contain the pod-template-hash label Mar 16 13:46:40.263: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963187, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963187, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963194, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963187, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:46:41.943: INFO: all replica sets need to contain the pod-template-hash label Mar 16 13:46:41.943: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963187, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963187, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963194, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963187, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:46:43.944: INFO: all replica sets need to contain the pod-template-hash label Mar 16 13:46:43.944: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963187, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963187, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963194, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963187, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:46:45.943: INFO: Mar 16 13:46:45.943: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 16 13:46:45.951: INFO: Deployment "test-rollover-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-1917,SelfLink:/apis/apps/v1/namespaces/deployment-1917/deployments/test-rollover-deployment,UID:afd65a61-4b37-4a40-9344-a1d74fe38b3f,ResourceVersion:165320,Generation:2,CreationTimestamp:2020-03-16 13:46:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-16 13:46:27 +0000 UTC 2020-03-16 13:46:27 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-16 13:46:44 +0000 UTC 2020-03-16 13:46:27 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 16 13:46:45.954: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-1917,SelfLink:/apis/apps/v1/namespaces/deployment-1917/replicasets/test-rollover-deployment-854595fc44,UID:c585f711-aaf5-4520-aa11-867800bfba24,ResourceVersion:165308,Generation:2,CreationTimestamp:2020-03-16 13:46:29 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment afd65a61-4b37-4a40-9344-a1d74fe38b3f 0xc002936387 0xc002936388}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 16 13:46:45.954: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 16 13:46:45.955: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-1917,SelfLink:/apis/apps/v1/namespaces/deployment-1917/replicasets/test-rollover-controller,UID:69e8385d-d899-46d9-9149-5a0f7d4a6731,ResourceVersion:165319,Generation:2,CreationTimestamp:2020-03-16 13:46:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment afd65a61-4b37-4a40-9344-a1d74fe38b3f 0xc0029362b7 0xc0029362b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 16 13:46:45.955: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-1917,SelfLink:/apis/apps/v1/namespaces/deployment-1917/replicasets/test-rollover-deployment-9b8b997cf,UID:d41ed746-0996-4c2f-b172-05b5dc32d811,ResourceVersion:165264,Generation:2,CreationTimestamp:2020-03-16 13:46:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment afd65a61-4b37-4a40-9344-a1d74fe38b3f 0xc002936450 0xc002936451}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 16 13:46:45.958: INFO: Pod "test-rollover-deployment-854595fc44-zzfkw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-zzfkw,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-1917,SelfLink:/api/v1/namespaces/deployment-1917/pods/test-rollover-deployment-854595fc44-zzfkw,UID:650ec413-b938-45ad-a757-51a63221a00b,ResourceVersion:165286,Generation:0,CreationTimestamp:2020-03-16 13:46:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 c585f711-aaf5-4520-aa11-867800bfba24 0xc002ca44b7 0xc002ca44b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jqx9q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jqx9q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-jqx9q true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ca4530} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ca4550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:46:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:46:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 
+0000 UTC 2020-03-16 13:46:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:46:30 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.199,StartTime:2020-03-16 13:46:30 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-16 13:46:33 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://0c1cdf385e9ae4b3d574f54b0a832e8667072df301aa8991ad4b845e87751859}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:46:45.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1917" for this suite. Mar 16 13:46:54.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:46:54.130: INFO: namespace deployment-1917 deletion completed in 8.16886103s • [SLOW TEST:34.442 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:46:54.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 16 13:46:54.178: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:46:55.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1592" for this suite. 
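A 1.15-era suite registers its test CRD through the apiextensions.k8s.io/v1beta1 API, which is presumably why this spec loads the kubeConfig a second time (to build a separate apiextensions client). A sketch of the definition object such a test creates and deletes; the group and names are hypothetical:

package main

import (
	"fmt"

	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	crd := &apiextv1beta1.CustomResourceDefinition{
		// A CRD's object name must be <plural>.<group>.
		ObjectMeta: metav1.ObjectMeta{Name: "testcrds.example.com"},
		Spec: apiextv1beta1.CustomResourceDefinitionSpec{
			Group:   "example.com",
			Version: "v1",
			Scope:   apiextv1beta1.NamespaceScoped,
			Names: apiextv1beta1.CustomResourceDefinitionNames{
				Plural:   "testcrds",
				Singular: "testcrd",
				Kind:     "TestCrd",
				ListKind: "TestCrdList",
			},
		},
	}
	// Creating and deleting it goes through the apiextensions clientset
	// (ApiextensionsV1beta1().CustomResourceDefinitions()).
	fmt.Println(crd.Name)
}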
Mar 16 13:47:01.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:47:01.541: INFO: namespace custom-resource-definition-1592 deletion completed in 6.226565719s • [SLOW TEST:7.411 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:47:01.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 16 13:47:01.628: INFO: Waiting up to 5m0s for pod "downward-api-3292b42d-bcea-4051-ae59-2d02c7440a23" in namespace "downward-api-4706" to be "success or failure" Mar 16 13:47:01.631: INFO: Pod "downward-api-3292b42d-bcea-4051-ae59-2d02c7440a23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.643507ms Mar 16 13:47:03.658: INFO: Pod "downward-api-3292b42d-bcea-4051-ae59-2d02c7440a23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030261748s Mar 16 13:47:05.663: INFO: Pod "downward-api-3292b42d-bcea-4051-ae59-2d02c7440a23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034617474s STEP: Saw pod success Mar 16 13:47:05.663: INFO: Pod "downward-api-3292b42d-bcea-4051-ae59-2d02c7440a23" satisfied condition "success or failure" Mar 16 13:47:05.666: INFO: Trying to get logs from node iruya-worker2 pod downward-api-3292b42d-bcea-4051-ae59-2d02c7440a23 container dapi-container: STEP: delete the pod Mar 16 13:47:05.686: INFO: Waiting for pod downward-api-3292b42d-bcea-4051-ae59-2d02c7440a23 to disappear Mar 16 13:47:05.690: INFO: Pod downward-api-3292b42d-bcea-4051-ae59-2d02c7440a23 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:47:05.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4706" for this suite. 
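The dapi-container above receives its pod name, namespace, and IP through downward API env vars backed by fieldRef selectors. A sketch of that wiring, with illustrative variable names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// fieldEnv builds an env var whose value is resolved from a pod field.
	fieldEnv := func(name, path string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
			},
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					fieldEnv("POD_NAME", "metadata.name"),
					fieldEnv("POD_NAMESPACE", "metadata.namespace"),
					fieldEnv("POD_IP", "status.podIP"),
				},
			}},
		},
	}
	fmt.Println(len(pod.Spec.Containers[0].Env), "downward API env vars")
}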
Mar 16 13:47:11.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:47:11.798: INFO: namespace downward-api-4706 deletion completed in 6.104845539s • [SLOW TEST:10.256 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:47:11.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-0e8934d6-7cc2-47e2-a933-5b0397cdf2f6 STEP: Creating a pod to test consume configMaps Mar 16 13:47:11.860: INFO: Waiting up to 5m0s for pod "pod-configmaps-44471c5f-74e3-457b-ad28-f3093ae3007f" in namespace "configmap-5149" to be "success or failure" Mar 16 13:47:11.864: INFO: Pod "pod-configmaps-44471c5f-74e3-457b-ad28-f3093ae3007f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.939471ms Mar 16 13:47:13.869: INFO: Pod "pod-configmaps-44471c5f-74e3-457b-ad28-f3093ae3007f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008066191s Mar 16 13:47:15.873: INFO: Pod "pod-configmaps-44471c5f-74e3-457b-ad28-f3093ae3007f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012855195s STEP: Saw pod success Mar 16 13:47:15.873: INFO: Pod "pod-configmaps-44471c5f-74e3-457b-ad28-f3093ae3007f" satisfied condition "success or failure" Mar 16 13:47:15.876: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-44471c5f-74e3-457b-ad28-f3093ae3007f container configmap-volume-test: STEP: delete the pod Mar 16 13:47:15.938: INFO: Waiting for pod pod-configmaps-44471c5f-74e3-457b-ad28-f3093ae3007f to disappear Mar 16 13:47:16.023: INFO: Pod pod-configmaps-44471c5f-74e3-457b-ad28-f3093ae3007f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:47:16.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5149" for this suite. 
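"With mappings" refers to ConfigMapVolumeSource.Items, which remaps a ConfigMap key to a chosen file path, and "as non-root" to a pod-level RunAsUser. A sketch under those assumptions (the UID, key, and path are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // illustrative non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						// The "mapping": expose key data-1 under a different path.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	fmt.Println(*pod.Spec.SecurityContext.RunAsUser)
}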
Mar 16 13:47:22.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:47:22.119: INFO: namespace configmap-5149 deletion completed in 6.09108163s • [SLOW TEST:10.321 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:47:22.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 16 13:47:22.277: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:47:26.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6135" for this suite. 
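This spec fetches container logs over a websocket rather than a plain HTTP stream; both transports hit the same /api/v1/namespaces/{ns}/pods/{pod}/log endpoint. A sketch of the plain-streaming equivalent via client-go, assuming hypothetical pod and namespace names and the 1.15-era Stream() signature (newer client-go releases take a context):

package main

import (
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// GetLogs builds a request against the pod-log endpoint; the conformance
	// test reaches the same endpoint with a websocket upgrade instead.
	req := clientset.CoreV1().Pods("default").GetLogs("my-pod", &corev1.PodLogOptions{})
	stream, err := req.Stream() // 1.15-era signature
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	io.Copy(os.Stdout, stream)
}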
Mar 16 13:48:04.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:48:04.449: INFO: namespace pods-6135 deletion completed in 38.097248878s • [SLOW TEST:42.330 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:48:04.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-de1f9049-e3f4-40f0-ac53-9a1fc7d6bef7 STEP: Creating a pod to test consume configMaps Mar 16 13:48:04.526: INFO: Waiting up to 5m0s for pod "pod-configmaps-f4fb3644-226a-49e7-a318-7f723ec81458" in namespace "configmap-8780" to be "success or failure" Mar 16 13:48:04.530: INFO: Pod "pod-configmaps-f4fb3644-226a-49e7-a318-7f723ec81458": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222027ms Mar 16 13:48:06.534: INFO: Pod "pod-configmaps-f4fb3644-226a-49e7-a318-7f723ec81458": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007752623s Mar 16 13:48:08.538: INFO: Pod "pod-configmaps-f4fb3644-226a-49e7-a318-7f723ec81458": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012232876s STEP: Saw pod success Mar 16 13:48:08.538: INFO: Pod "pod-configmaps-f4fb3644-226a-49e7-a318-7f723ec81458" satisfied condition "success or failure" Mar 16 13:48:08.541: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-f4fb3644-226a-49e7-a318-7f723ec81458 container configmap-volume-test: STEP: delete the pod Mar 16 13:48:08.578: INFO: Waiting for pod pod-configmaps-f4fb3644-226a-49e7-a318-7f723ec81458 to disappear Mar 16 13:48:08.605: INFO: Pod pod-configmaps-f4fb3644-226a-49e7-a318-7f723ec81458 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:48:08.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8780" for this suite. 
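The `Waiting up to 5m0s for pod ... to be "success or failure"` and `Phase="Pending" ... Elapsed:` lines recurring through these specs come from a phase-polling loop. A sketch of that pattern with wait.PollImmediate; the interval is illustrative and the Get signature matches 1.15-era client-go:

package podwait

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPodSuccessOrFailure polls a pod's phase until it terminates,
// mirroring the "success or failure" wait seen throughout this log.
func WaitForPodSuccessOrFailure(c kubernetes.Interface, ns, name string) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{}) // 1.15-era signature
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %v\n", name, pod.Status.Phase, time.Since(start))
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil // the "success" arm
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %q failed", name) // the "failure" arm
		}
		return false, nil // still Pending/Running; keep polling
	})
}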
Mar 16 13:48:14.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:48:14.715: INFO: namespace configmap-8780 deletion completed in 6.107279802s • [SLOW TEST:10.266 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:48:14.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-b1d8b01c-bc7b-4d69-803f-f62dd2b81237 STEP: Creating configMap with name cm-test-opt-upd-6521b942-6ea1-4ccb-85f7-0c9cc40543f7 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-b1d8b01c-bc7b-4d69-803f-f62dd2b81237 STEP: Updating configmap cm-test-opt-upd-6521b942-6ea1-4ccb-85f7-0c9cc40543f7 STEP: Creating configMap with name cm-test-opt-create-a6cd7b47-266d-4b99-8581-0662464c96e2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:48:22.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2131" for this suite. 
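The opt-del/opt-upd ConfigMaps above are projected into one volume and marked Optional, so the pod tolerates the deletion while the kubelet's periodic sync re-projects the update and the newly created map; that is what the `waiting to observe update in volume` step waits for. A sketch of such a volume (names abbreviated from the log, the Optional wiring assumed):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
						Optional:             &optional, // pod keeps running if this map is deleted
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-upd"},
						Optional:             &optional, // updates are re-projected by the kubelet sync loop
					}},
				},
			},
		},
	}
	fmt.Println(len(vol.Projected.Sources), "projected sources")
}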
Mar 16 13:48:45.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:48:45.084: INFO: namespace projected-2131 deletion completed in 22.087543819s • [SLOW TEST:30.368 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:48:45.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-f24e052e-594d-4bde-b4f7-36415ba9e151 STEP: Creating a pod to test consume configMaps Mar 16 13:48:45.147: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-66b89b32-eccd-47c7-a49c-bc14e4e073ce" in namespace "projected-55" to be "success or failure" Mar 16 13:48:45.154: INFO: Pod "pod-projected-configmaps-66b89b32-eccd-47c7-a49c-bc14e4e073ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.821705ms Mar 16 13:48:47.158: INFO: Pod "pod-projected-configmaps-66b89b32-eccd-47c7-a49c-bc14e4e073ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010857901s Mar 16 13:48:49.162: INFO: Pod "pod-projected-configmaps-66b89b32-eccd-47c7-a49c-bc14e4e073ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015079748s STEP: Saw pod success Mar 16 13:48:49.162: INFO: Pod "pod-projected-configmaps-66b89b32-eccd-47c7-a49c-bc14e4e073ce" satisfied condition "success or failure" Mar 16 13:48:49.166: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-66b89b32-eccd-47c7-a49c-bc14e4e073ce container projected-configmap-volume-test: STEP: delete the pod Mar 16 13:48:49.191: INFO: Waiting for pod pod-projected-configmaps-66b89b32-eccd-47c7-a49c-bc14e4e073ce to disappear Mar 16 13:48:49.195: INFO: Pod pod-projected-configmaps-66b89b32-eccd-47c7-a49c-bc14e4e073ce no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:48:49.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-55" for this suite. 
Mar 16 13:48:55.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:48:55.298: INFO: namespace projected-55 deletion completed in 6.098879109s • [SLOW TEST:10.213 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:48:55.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Mar 16 13:48:55.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2739' Mar 16 13:48:55.589: INFO: stderr: "" Mar 16 13:48:55.589: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 16 13:48:55.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2739' Mar 16 13:48:55.737: INFO: stderr: "" Mar 16 13:48:55.737: INFO: stdout: "update-demo-nautilus-dnm22 update-demo-nautilus-rvpdl " Mar 16 13:48:55.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dnm22 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2739' Mar 16 13:48:55.844: INFO: stderr: "" Mar 16 13:48:55.844: INFO: stdout: "" Mar 16 13:48:55.844: INFO: update-demo-nautilus-dnm22 is created but not running Mar 16 13:49:00.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2739' Mar 16 13:49:00.939: INFO: stderr: "" Mar 16 13:49:00.939: INFO: stdout: "update-demo-nautilus-dnm22 update-demo-nautilus-rvpdl " Mar 16 13:49:00.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dnm22 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2739' Mar 16 13:49:01.043: INFO: stderr: "" Mar 16 13:49:01.043: INFO: stdout: "true" Mar 16 13:49:01.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dnm22 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2739' Mar 16 13:49:01.136: INFO: stderr: "" Mar 16 13:49:01.136: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 16 13:49:01.136: INFO: validating pod update-demo-nautilus-dnm22 Mar 16 13:49:01.141: INFO: got data: { "image": "nautilus.jpg" } Mar 16 13:49:01.141: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 16 13:49:01.141: INFO: update-demo-nautilus-dnm22 is verified up and running Mar 16 13:49:01.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rvpdl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2739' Mar 16 13:49:01.233: INFO: stderr: "" Mar 16 13:49:01.233: INFO: stdout: "true" Mar 16 13:49:01.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rvpdl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2739' Mar 16 13:49:01.326: INFO: stderr: "" Mar 16 13:49:01.326: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 16 13:49:01.326: INFO: validating pod update-demo-nautilus-rvpdl Mar 16 13:49:01.330: INFO: got data: { "image": "nautilus.jpg" } Mar 16 13:49:01.330: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 16 13:49:01.330: INFO: update-demo-nautilus-rvpdl is verified up and running STEP: using delete to clean up resources Mar 16 13:49:01.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2739' Mar 16 13:49:01.435: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 16 13:49:01.435: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 16 13:49:01.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2739' Mar 16 13:49:01.535: INFO: stderr: "No resources found.\n" Mar 16 13:49:01.535: INFO: stdout: "" Mar 16 13:49:01.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2739 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 16 13:49:01.632: INFO: stderr: "" Mar 16 13:49:01.632: INFO: stdout: "update-demo-nautilus-dnm22\nupdate-demo-nautilus-rvpdl\n" Mar 16 13:49:02.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2739' Mar 16 13:49:02.225: INFO: stderr: "No resources found.\n" Mar 16 13:49:02.225: INFO: stdout: "" Mar 16 13:49:02.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2739 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 16 13:49:02.447: INFO: stderr: "" Mar 16 13:49:02.447: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:49:02.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2739" for this suite. Mar 16 13:49:08.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:49:08.627: INFO: namespace kubectl-2739 deletion completed in 6.176100704s • [SLOW TEST:13.329 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:49:08.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Mar 16 13:49:08.930: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2948" to be "success or failure" Mar 16 13:49:08.935: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.220932ms Mar 16 13:49:11.001: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070836206s Mar 16 13:49:13.037: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106968354s Mar 16 13:49:15.041: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.110547666s STEP: Saw pod success Mar 16 13:49:15.041: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 16 13:49:15.044: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 16 13:49:15.065: INFO: Waiting for pod pod-host-path-test to disappear Mar 16 13:49:15.076: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:49:15.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-2948" for this suite. Mar 16 13:49:21.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:49:21.190: INFO: namespace hostpath-2948 deletion completed in 6.112376892s • [SLOW TEST:12.562 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:49:21.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 16 13:49:29.273: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 16 13:49:29.280: INFO: Pod pod-with-prestop-http-hook still exists Mar 16 13:49:31.281: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 16 13:49:31.284: INFO: Pod pod-with-prestop-http-hook still exists Mar 16 13:49:33.281: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 16 13:49:33.285: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:49:33.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-337" for this suite. 
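pod-with-prestop-http-hook registers an HTTP GET preStop handler pointed at the helper pod created in BeforeEach, and the final `check prestop hook` step verifies that the request arrived during deletion. A sketch of the hook wiring; the path, port, and host are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox",
				Lifecycle: &corev1.Lifecycle{
					// corev1.Handler matches the 1.15-era API this suite uses;
					// newer releases call the same type LifecycleHandler.
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=prestop",
							Port: intstr.FromInt(8080),
							Host: "10.244.2.2", // illustrative: the handler pod's IP
						},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Lifecycle.PreStop.HTTPGet.Path)
}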
Mar 16 13:49:45.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:49:45.384: INFO: namespace container-lifecycle-hook-337 deletion completed in 12.089329687s • [SLOW TEST:24.194 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:49:45.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 16 13:49:45.436: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f26612fd-65aa-4355-9017-32a0dcdcc693" in namespace "projected-4153" to be "success or failure" Mar 16 13:49:45.439: INFO: Pod "downwardapi-volume-f26612fd-65aa-4355-9017-32a0dcdcc693": Phase="Pending", Reason="", readiness=false. Elapsed: 2.982489ms Mar 16 13:49:47.442: INFO: Pod "downwardapi-volume-f26612fd-65aa-4355-9017-32a0dcdcc693": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006424678s Mar 16 13:49:49.446: INFO: Pod "downwardapi-volume-f26612fd-65aa-4355-9017-32a0dcdcc693": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010214089s STEP: Saw pod success Mar 16 13:49:49.446: INFO: Pod "downwardapi-volume-f26612fd-65aa-4355-9017-32a0dcdcc693" satisfied condition "success or failure" Mar 16 13:49:49.449: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f26612fd-65aa-4355-9017-32a0dcdcc693 container client-container: STEP: delete the pod Mar 16 13:49:49.484: INFO: Waiting for pod downwardapi-volume-f26612fd-65aa-4355-9017-32a0dcdcc693 to disappear Mar 16 13:49:49.498: INFO: Pod downwardapi-volume-f26612fd-65aa-4355-9017-32a0dcdcc693 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:49:49.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4153" for this suite. 
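"Podname only" means the projected downward API volume exposes a single file backed by metadata.name, which the client-container above then prints. A sketch of that volume:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							// One file, carrying only the pod's name.
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				}},
			},
		},
	}
	fmt.Println(vol.Projected.Sources[0].DownwardAPI.Items[0].Path)
}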
Mar 16 13:49:55.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:49:55.597: INFO: namespace projected-4153 deletion completed in 6.095607348s • [SLOW TEST:10.212 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:49:55.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Mar 16 13:49:55.654: INFO: Waiting up to 5m0s for pod "var-expansion-b4024633-76fa-4076-b532-a683e2b8b16d" in namespace "var-expansion-8758" to be "success or failure" Mar 16 13:49:55.690: INFO: Pod "var-expansion-b4024633-76fa-4076-b532-a683e2b8b16d": Phase="Pending", Reason="", readiness=false. Elapsed: 35.723675ms Mar 16 13:49:57.694: INFO: Pod "var-expansion-b4024633-76fa-4076-b532-a683e2b8b16d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040133069s Mar 16 13:49:59.699: INFO: Pod "var-expansion-b4024633-76fa-4076-b532-a683e2b8b16d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044752658s STEP: Saw pod success Mar 16 13:49:59.699: INFO: Pod "var-expansion-b4024633-76fa-4076-b532-a683e2b8b16d" satisfied condition "success or failure" Mar 16 13:49:59.702: INFO: Trying to get logs from node iruya-worker pod var-expansion-b4024633-76fa-4076-b532-a683e2b8b16d container dapi-container: STEP: delete the pod Mar 16 13:49:59.764: INFO: Waiting for pod var-expansion-b4024633-76fa-4076-b532-a683e2b8b16d to disappear Mar 16 13:49:59.767: INFO: Pod var-expansion-b4024633-76fa-4076-b532-a683e2b8b16d no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:49:59.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8758" for this suite. 
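The Variable Expansion spec composes one environment variable out of others using the $(VAR) syntax the kubelet expands at container start. A sketch of such a pod; the names, image, and the exact composed value are assumptions, since the log only records the pod reaching Succeeded:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container                 # container name as logged above
    image: busybox                       # assumed image
    command: ["sh", "-c", "echo $(FOOBAR)"]   # $(VAR) is also expanded in command/args
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"            # composed from the two variables defined above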
Mar 16 13:50:05.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:50:05.863: INFO: namespace var-expansion-8758 deletion completed in 6.093387722s • [SLOW TEST:10.266 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:50:05.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-3b289d3b-425f-4bf1-9e5c-df9c6d34e792 STEP: Creating a pod to test consume secrets Mar 16 13:50:05.959: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5d0c6160-7989-4073-9e2e-b74e3b151079" in namespace "projected-6925" to be "success or failure" Mar 16 13:50:05.983: INFO: Pod "pod-projected-secrets-5d0c6160-7989-4073-9e2e-b74e3b151079": Phase="Pending", Reason="", readiness=false. Elapsed: 23.139865ms Mar 16 13:50:07.988: INFO: Pod "pod-projected-secrets-5d0c6160-7989-4073-9e2e-b74e3b151079": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028130553s Mar 16 13:50:09.992: INFO: Pod "pod-projected-secrets-5d0c6160-7989-4073-9e2e-b74e3b151079": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032387072s STEP: Saw pod success Mar 16 13:50:09.992: INFO: Pod "pod-projected-secrets-5d0c6160-7989-4073-9e2e-b74e3b151079" satisfied condition "success or failure" Mar 16 13:50:09.995: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-5d0c6160-7989-4073-9e2e-b74e3b151079 container secret-volume-test: STEP: delete the pod Mar 16 13:50:10.035: INFO: Waiting for pod pod-projected-secrets-5d0c6160-7989-4073-9e2e-b74e3b151079 to disappear Mar 16 13:50:10.041: INFO: Pod pod-projected-secrets-5d0c6160-7989-4073-9e2e-b74e3b151079 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:50:10.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6925" for this suite. 
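The projected-secret spec above mounts a single secret through two separate volumes in the same pod. A sketch under the assumption of a busybox test image and arbitrary mount paths, neither of which appears in the log:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test             # container name as logged above
    image: busybox                       # assumed image
    command: ["sh", "-c", "cat /etc/projected-secret-volume-1/* /etc/projected-secret-volume-2/*"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/projected-secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/projected-secret-volume-2
      readOnly: true
  volumes:                               # both volumes project the same secret
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test    # the run's secret names carry UUID suffixes
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test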
Mar 16 13:50:16.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:50:16.155: INFO: namespace projected-6925 deletion completed in 6.093255072s • [SLOW TEST:10.291 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:50:16.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 16 13:50:16.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3624' Mar 16 13:50:16.436: INFO: stderr: "" Mar 16 13:50:16.436: INFO: stdout: "replicationcontroller/redis-master created\n" Mar 16 13:50:16.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3624' Mar 16 13:50:16.727: INFO: stderr: "" Mar 16 13:50:16.727: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Mar 16 13:50:17.745: INFO: Selector matched 1 pods for map[app:redis] Mar 16 13:50:17.745: INFO: Found 0 / 1 Mar 16 13:50:18.731: INFO: Selector matched 1 pods for map[app:redis] Mar 16 13:50:18.731: INFO: Found 0 / 1 Mar 16 13:50:19.731: INFO: Selector matched 1 pods for map[app:redis] Mar 16 13:50:19.731: INFO: Found 1 / 1 Mar 16 13:50:19.731: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 16 13:50:19.733: INFO: Selector matched 1 pods for map[app:redis] Mar 16 13:50:19.733: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 16 13:50:19.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-f44rs --namespace=kubectl-3624' Mar 16 13:50:19.848: INFO: stderr: "" Mar 16 13:50:19.848: INFO: stdout: "Name: redis-master-f44rs\nNamespace: kubectl-3624\nPriority: 0\nNode: iruya-worker2/172.17.0.5\nStart Time: Mon, 16 Mar 2020 13:50:16 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.216\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://53de9937c56b2e046a0dd26333b3603f1af650b7ad2c954a81c4c2b6469e2eda\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 16 Mar 2020 13:50:18 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-gsf9b (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-gsf9b:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-gsf9b\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-3624/redis-master-f44rs to iruya-worker2\n Normal Pulled 2s kubelet, iruya-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, iruya-worker2 Created container redis-master\n Normal Started 1s kubelet, iruya-worker2 Started container redis-master\n" Mar 16 13:50:19.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-3624' Mar 16 13:50:19.967: INFO: stderr: "" Mar 16 13:50:19.967: INFO: stdout: "Name: redis-master\nNamespace: kubectl-3624\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: redis-master-f44rs\n" Mar 16 13:50:19.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-3624' Mar 16 13:50:20.066: INFO: stderr: "" Mar 16 13:50:20.066: INFO: stdout: "Name: redis-master\nNamespace: kubectl-3624\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.96.80.28\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.216:6379\nSession Affinity: None\nEvents: \n" Mar 16 13:50:20.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Mar 16 13:50:20.195: INFO: stderr: "" Mar 16 13:50:20.195: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n 
kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 16 Mar 2020 13:49:47 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 16 Mar 2020 13:49:47 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 16 Mar 2020 13:49:47 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 16 Mar 2020 13:49:47 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19h\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 19h\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 19h\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 19h\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19h\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 19h\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Mar 16 13:50:20.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-3624' Mar 16 13:50:20.297: INFO: stderr: "" Mar 16 13:50:20.297: INFO: stdout: "Name: kubectl-3624\nLabels: e2e-framework=kubectl\n e2e-run=f7b00008-4236-4f7c-ae18-1b0ba274c8a3\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:50:20.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3624" for this suite. 
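The two `kubectl create -f -` calls at the start of this spec read their manifests from stdin, so they never appear in the log, but the describe output above pins down their shape: image gcr.io/kubernetes-e2e-test-images/redis:1.0, selector app=redis,role=master, and a Service targeting the named port redis-server. A reconstruction consistent with that output:

apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - name: redis-server             # inferred from "TargetPort: redis-server/TCP" above
          containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  selector:
    app: redis
    role: master
  ports:
  - port: 6379
    targetPort: redis-server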
Mar 16 13:50:42.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:50:42.398: INFO: namespace kubectl-3624 deletion completed in 22.091830613s • [SLOW TEST:26.241 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:50:42.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-cdf98047-998c-450a-ab1c-d2839d6c3352 STEP: Creating a pod to test consume configMaps Mar 16 13:50:42.505: INFO: Waiting up to 5m0s for pod "pod-configmaps-31ceb8a5-29d7-4d74-9c3f-484ddd244c30" in namespace "configmap-9209" to be "success or failure" Mar 16 13:50:42.565: INFO: Pod "pod-configmaps-31ceb8a5-29d7-4d74-9c3f-484ddd244c30": Phase="Pending", Reason="", readiness=false. Elapsed: 60.191507ms Mar 16 13:50:44.570: INFO: Pod "pod-configmaps-31ceb8a5-29d7-4d74-9c3f-484ddd244c30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064498118s Mar 16 13:50:46.573: INFO: Pod "pod-configmaps-31ceb8a5-29d7-4d74-9c3f-484ddd244c30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068226226s STEP: Saw pod success Mar 16 13:50:46.573: INFO: Pod "pod-configmaps-31ceb8a5-29d7-4d74-9c3f-484ddd244c30" satisfied condition "success or failure" Mar 16 13:50:46.577: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-31ceb8a5-29d7-4d74-9c3f-484ddd244c30 container configmap-volume-test: STEP: delete the pod Mar 16 13:50:46.594: INFO: Waiting for pod pod-configmaps-31ceb8a5-29d7-4d74-9c3f-484ddd244c30 to disappear Mar 16 13:50:46.598: INFO: Pod pod-configmaps-31ceb8a5-29d7-4d74-9c3f-484ddd244c30 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:50:46.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9209" for this suite. 
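The defaultMode spec above checks the permission bits on files materialized from a ConfigMap volume. The mode under test is not printed; a sketch with 0400 standing in for it:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example           # the run's pods carry generated UUID suffixes
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test          # container name as logged above
    image: busybox                       # assumed image
    command: ["sh", "-c", "ls -l /etc/configmap-volume"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume        # UUID suffix omitted
      defaultMode: 0400                  # assumed mode; applied to every projected file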
Mar 16 13:50:52.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:50:52.693: INFO: namespace configmap-9209 deletion completed in 6.091485361s • [SLOW TEST:10.295 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:50:52.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 16 13:51:00.837: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 16 13:51:00.842: INFO: Pod pod-with-poststart-http-hook still exists Mar 16 13:51:02.842: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 16 13:51:02.846: INFO: Pod pod-with-poststart-http-hook still exists Mar 16 13:51:04.842: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 16 13:51:04.846: INFO: Pod pod-with-poststart-http-hook still exists Mar 16 13:51:06.842: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 16 13:51:06.847: INFO: Pod pod-with-poststart-http-hook still exists Mar 16 13:51:08.842: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 16 13:51:08.847: INFO: Pod pod-with-poststart-http-hook still exists Mar 16 13:51:10.842: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 16 13:51:10.846: INFO: Pod pod-with-poststart-http-hook still exists Mar 16 13:51:12.842: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 16 13:51:12.846: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:51:12.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4732" for this suite. 
Mar 16 13:51:34.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:51:34.945: INFO: namespace container-lifecycle-hook-4732 deletion completed in 22.095185521s • [SLOW TEST:42.252 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:51:34.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Mar 16 13:51:39.049: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 16 13:51:54.137: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:51:54.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9510" for this suite. 
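In the Delete Grace Period spec above, the grace period travels with the delete request issued through the `kubectl proxy` the framework starts, so no manifest field for it is visible in the log. The corresponding knob in a pod spec is terminationGracePeriodSeconds, sketched here with assumed values:

apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove-example
spec:
  terminationGracePeriodSeconds: 30      # assumed; a delete request can override this default
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine   # image used elsewhere in this run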
Mar 16 13:52:00.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:52:00.239: INFO: namespace pods-9510 deletion completed in 6.093875028s • [SLOW TEST:25.293 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:52:00.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Mar 16 13:52:00.302: INFO: namespace kubectl-7824 Mar 16 13:52:00.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7824' Mar 16 13:52:02.773: INFO: stderr: "" Mar 16 13:52:02.773: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Mar 16 13:52:03.818: INFO: Selector matched 1 pods for map[app:redis] Mar 16 13:52:03.818: INFO: Found 0 / 1 Mar 16 13:52:04.800: INFO: Selector matched 1 pods for map[app:redis] Mar 16 13:52:04.800: INFO: Found 0 / 1 Mar 16 13:52:05.777: INFO: Selector matched 1 pods for map[app:redis] Mar 16 13:52:05.777: INFO: Found 1 / 1 Mar 16 13:52:05.777: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 16 13:52:05.781: INFO: Selector matched 1 pods for map[app:redis] Mar 16 13:52:05.781: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 16 13:52:05.781: INFO: wait on redis-master startup in kubectl-7824 Mar 16 13:52:05.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-rm7jq redis-master --namespace=kubectl-7824' Mar 16 13:52:05.880: INFO: stderr: "" Mar 16 13:52:05.880: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 16 Mar 13:52:05.215 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 16 Mar 13:52:05.215 # Server started, Redis version 3.2.12\n1:M 16 Mar 13:52:05.215 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 16 Mar 13:52:05.215 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Mar 16 13:52:05.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7824' Mar 16 13:52:06.087: INFO: stderr: "" Mar 16 13:52:06.087: INFO: stdout: "service/rm2 exposed\n" Mar 16 13:52:06.091: INFO: Service rm2 in namespace kubectl-7824 found. STEP: exposing service Mar 16 13:52:08.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7824' Mar 16 13:52:08.222: INFO: stderr: "" Mar 16 13:52:08.222: INFO: stdout: "service/rm3 exposed\n" Mar 16 13:52:08.229: INFO: Service rm3 in namespace kubectl-7824 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:52:10.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7824" for this suite. 
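`kubectl expose` synthesizes a Service from the selector of its target, so the two commands above amount to creating something like the following; the selector is an assumption, taken from the app=redis label this spec's pod lookups match on:

apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: redis                           # assumed: copied from the redis-master RC
  ports:
  - port: 1234
    targetPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: rm3                              # exposing rm2 reuses rm2's selector
spec:
  selector:
    app: redis
  ports:
  - port: 2345
    targetPort: 6379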
Mar 16 13:52:32.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:52:32.342: INFO: namespace kubectl-7824 deletion completed in 22.100838575s • [SLOW TEST:32.102 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:52:32.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-7dbb6751-e0b3-46ad-a408-78532aa903ff STEP: Creating secret with name s-test-opt-upd-fa5b0bec-ce13-45f0-a6f8-b9dfbf55e695 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-7dbb6751-e0b3-46ad-a408-78532aa903ff STEP: Updating secret s-test-opt-upd-fa5b0bec-ce13-45f0-a6f8-b9dfbf55e695 STEP: Creating secret with name s-test-opt-create-1b85830e-a38d-4fd0-81ca-f44124024565 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:52:40.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3677" for this suite. 
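The optional-updates spec above can delete one of its secrets without breaking the pod because the secret volumes are marked optional. A sketch of the pod, with mount paths, image, and container name assumed:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-optional-example
spec:
  containers:
  - name: secret-volume-test             # assumed container name
    image: busybox                       # assumed image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: del
      mountPath: /etc/secret-volumes/delete
    - name: upd
      mountPath: /etc/secret-volumes/update
    - name: create
      mountPath: /etc/secret-volumes/create
  volumes:
  - name: del
    secret:
      secretName: s-test-opt-del         # UUID suffixes omitted throughout
      optional: true                     # pod keeps running after this secret is deleted
  - name: upd
    secret:
      secretName: s-test-opt-upd
      optional: true                     # volume contents refresh when the secret is updated
  - name: create
    secret:
      secretName: s-test-opt-create
      optional: true                     # volume populates once the secret is created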
Mar 16 13:53:02.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:53:02.603: INFO: namespace secrets-3677 deletion completed in 22.099259271s • [SLOW TEST:30.261 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:53:02.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 16 13:53:02.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-2485' Mar 16 13:53:02.789: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 16 13:53:02.790: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: rolling-update to same image controller Mar 16 13:53:02.797: INFO: scanned /root for discovery docs: Mar 16 13:53:02.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2485' Mar 16 13:53:18.845: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 16 13:53:18.845: INFO: stdout: "Created e2e-test-nginx-rc-0bcf57d0dcb3c09782b1c0062851f7a0\nScaling up e2e-test-nginx-rc-0bcf57d0dcb3c09782b1c0062851f7a0 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-0bcf57d0dcb3c09782b1c0062851f7a0 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-0bcf57d0dcb3c09782b1c0062851f7a0 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Mar 16 13:53:18.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2485' Mar 16 13:53:18.970: INFO: stderr: "" Mar 16 13:53:18.970: INFO: stdout: "e2e-test-nginx-rc-0bcf57d0dcb3c09782b1c0062851f7a0-gtb5b " Mar 16 13:53:18.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-0bcf57d0dcb3c09782b1c0062851f7a0-gtb5b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2485' Mar 16 13:53:19.055: INFO: stderr: "" Mar 16 13:53:19.055: INFO: stdout: "true" Mar 16 13:53:19.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-0bcf57d0dcb3c09782b1c0062851f7a0-gtb5b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2485' Mar 16 13:53:19.150: INFO: stderr: "" Mar 16 13:53:19.150: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Mar 16 13:53:19.150: INFO: e2e-test-nginx-rc-0bcf57d0dcb3c09782b1c0062851f7a0-gtb5b is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Mar 16 13:53:19.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-2485' Mar 16 13:53:19.249: INFO: stderr: "" Mar 16 13:53:19.249: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:53:19.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2485" for this suite. 
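`kubectl run --generator=run/v1` (deprecated even at this release, as the stderr above notes) creates a bare ReplicationController. Reconstructed from the run=e2e-test-nginx-rc label and the container name that the template queries above rely on:

apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
  labels:
    run: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc               # matches the -l run=e2e-test-nginx-rc lookups above
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc          # the name the {{eq .name ...}} templates check
        image: docker.io/library/nginx:1.14-alpine

rolling-update then clones this RC under a hashed name, scales the two copies in opposite directions one pod at a time, and renames the survivor back, exactly as the stdout above narrates.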
Mar 16 13:53:41.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:53:41.395: INFO: namespace kubectl-2485 deletion completed in 22.111472671s • [SLOW TEST:38.791 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:53:41.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-c810f582-13f1-4efb-bf13-56823c558931 STEP: Creating a pod to test consume secrets Mar 16 13:53:41.482: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-49b05ace-f773-4b0e-8105-f4e257c113a8" in namespace "projected-2423" to be "success or failure" Mar 16 13:53:41.494: INFO: Pod "pod-projected-secrets-49b05ace-f773-4b0e-8105-f4e257c113a8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.103434ms Mar 16 13:53:43.727: INFO: Pod "pod-projected-secrets-49b05ace-f773-4b0e-8105-f4e257c113a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.244673839s Mar 16 13:53:45.730: INFO: Pod "pod-projected-secrets-49b05ace-f773-4b0e-8105-f4e257c113a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.247966827s STEP: Saw pod success Mar 16 13:53:45.730: INFO: Pod "pod-projected-secrets-49b05ace-f773-4b0e-8105-f4e257c113a8" satisfied condition "success or failure" Mar 16 13:53:45.732: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-49b05ace-f773-4b0e-8105-f4e257c113a8 container projected-secret-volume-test: STEP: delete the pod Mar 16 13:53:45.944: INFO: Waiting for pod pod-projected-secrets-49b05ace-f773-4b0e-8105-f4e257c113a8 to disappear Mar 16 13:53:46.065: INFO: Pod pod-projected-secrets-49b05ace-f773-4b0e-8105-f4e257c113a8 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:53:46.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2423" for this suite. 
Mar 16 13:53:52.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:53:52.154: INFO: namespace projected-2423 deletion completed in 6.085516241s • [SLOW TEST:10.758 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:53:52.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 16 13:53:57.548: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:53:57.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3932" for this suite. 
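The container-runtime spec above asserts that the termination message "OK" is read back from the file named by terminationMessagePath. A sketch; the image and exact command are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox                       # assumed image
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]   # writes the 'OK' matched above
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError   # falls back to container logs only on error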
Mar 16 13:54:03.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:54:03.740: INFO: namespace container-runtime-3932 deletion completed in 6.152619649s • [SLOW TEST:11.585 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:54:03.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-948fd35a-0c61-40bf-b51e-ae23a9132842 STEP: Creating a pod to test consume configMaps Mar 16 13:54:03.819: INFO: Waiting up to 5m0s for pod "pod-configmaps-173b9c56-c6aa-404b-8190-809660f09eab" in namespace "configmap-5784" to be "success or failure" Mar 16 13:54:03.830: INFO: Pod "pod-configmaps-173b9c56-c6aa-404b-8190-809660f09eab": Phase="Pending", Reason="", readiness=false. Elapsed: 10.910945ms Mar 16 13:54:05.834: INFO: Pod "pod-configmaps-173b9c56-c6aa-404b-8190-809660f09eab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014625812s Mar 16 13:54:07.839: INFO: Pod "pod-configmaps-173b9c56-c6aa-404b-8190-809660f09eab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019476708s Mar 16 13:54:09.844: INFO: Pod "pod-configmaps-173b9c56-c6aa-404b-8190-809660f09eab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024383465s STEP: Saw pod success Mar 16 13:54:09.844: INFO: Pod "pod-configmaps-173b9c56-c6aa-404b-8190-809660f09eab" satisfied condition "success or failure" Mar 16 13:54:09.848: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-173b9c56-c6aa-404b-8190-809660f09eab container configmap-volume-test: STEP: delete the pod Mar 16 13:54:09.878: INFO: Waiting for pod pod-configmaps-173b9c56-c6aa-404b-8190-809660f09eab to disappear Mar 16 13:54:09.889: INFO: Pod pod-configmaps-173b9c56-c6aa-404b-8190-809660f09eab no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:54:09.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5784" for this suite. 
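"With mappings" means the ConfigMap volume uses an explicit items list to map keys onto chosen file paths instead of projecting every key verbatim. Only the volume stanza differs from the defaultMode sketch earlier; the key/path pair below is an assumption, since neither is printed:

  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map    # UUID suffix omitted
      items:
      - key: data-1                      # assumed key
        path: path/to/data-2             # file appears at <mountPath>/path/to/data-2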
Mar 16 13:54:15.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:54:16.030: INFO: namespace configmap-5784 deletion completed in 6.138470437s • [SLOW TEST:12.289 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:54:16.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-746f3845-e2bb-4ca7-a4dc-b8b531edaf4a STEP: Creating secret with name secret-projected-all-test-volume-c04754a1-5c5a-4410-8ad9-6beee35085bd STEP: Creating a pod to test Check all projections for projected volume plugin Mar 16 13:54:16.113: INFO: Waiting up to 5m0s for pod "projected-volume-818e0cc7-aea4-45d2-b43b-fd40a199201d" in namespace "projected-1143" to be "success or failure" Mar 16 13:54:16.130: INFO: Pod "projected-volume-818e0cc7-aea4-45d2-b43b-fd40a199201d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.993144ms Mar 16 13:54:18.149: INFO: Pod "projected-volume-818e0cc7-aea4-45d2-b43b-fd40a199201d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035645779s Mar 16 13:54:20.153: INFO: Pod "projected-volume-818e0cc7-aea4-45d2-b43b-fd40a199201d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039558073s STEP: Saw pod success Mar 16 13:54:20.153: INFO: Pod "projected-volume-818e0cc7-aea4-45d2-b43b-fd40a199201d" satisfied condition "success or failure" Mar 16 13:54:20.156: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-818e0cc7-aea4-45d2-b43b-fd40a199201d container projected-all-volume-test: STEP: delete the pod Mar 16 13:54:20.210: INFO: Waiting for pod projected-volume-818e0cc7-aea4-45d2-b43b-fd40a199201d to disappear Mar 16 13:54:20.220: INFO: Pod projected-volume-818e0cc7-aea4-45d2-b43b-fd40a199201d no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:54:20.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1143" for this suite. 
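The projected-combined spec above folds a ConfigMap, a Secret, and downward API fields into a single projected volume. A sketch with assumed paths and image:

apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test      # container name as logged above
    image: busybox                       # assumed image
    command: ["sh", "-c", "cat /all/podname /all/*"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:                           # the three projection sources the spec combines
      - configMap:
          name: configmap-projected-all-test-volume   # UUID suffix omitted
      - secret:
          name: secret-projected-all-test-volume      # UUID suffix omitted
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name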
Mar 16 13:54:26.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:54:26.317: INFO: namespace projected-1143 deletion completed in 6.094283961s • [SLOW TEST:10.287 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:54:26.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 16 13:54:26.395: INFO: Waiting up to 5m0s for pod "pod-9fd4af18-f861-4146-86bc-1bc47e36fe85" in namespace "emptydir-5014" to be "success or failure" Mar 16 13:54:26.397: INFO: Pod "pod-9fd4af18-f861-4146-86bc-1bc47e36fe85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.563397ms Mar 16 13:54:28.527: INFO: Pod "pod-9fd4af18-f861-4146-86bc-1bc47e36fe85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132290792s Mar 16 13:54:30.531: INFO: Pod "pod-9fd4af18-f861-4146-86bc-1bc47e36fe85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136154078s Mar 16 13:54:32.569: INFO: Pod "pod-9fd4af18-f861-4146-86bc-1bc47e36fe85": Phase="Running", Reason="", readiness=true. Elapsed: 6.174891253s Mar 16 13:54:34.623: INFO: Pod "pod-9fd4af18-f861-4146-86bc-1bc47e36fe85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.227925589s STEP: Saw pod success Mar 16 13:54:34.623: INFO: Pod "pod-9fd4af18-f861-4146-86bc-1bc47e36fe85" satisfied condition "success or failure" Mar 16 13:54:34.625: INFO: Trying to get logs from node iruya-worker2 pod pod-9fd4af18-f861-4146-86bc-1bc47e36fe85 container test-container: STEP: delete the pod Mar 16 13:54:35.072: INFO: Waiting for pod pod-9fd4af18-f861-4146-86bc-1bc47e36fe85 to disappear Mar 16 13:54:35.197: INFO: Pod pod-9fd4af18-f861-4146-86bc-1bc47e36fe85 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:54:35.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5014" for this suite. 
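The (non-root,0644,tmpfs) triple in the EmptyDir spec name encodes its test matrix: run as a non-root UID, expect mode 0644 on the created file, back the volume with memory. A sketch; the UID, image, and file contents are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                      # assumed non-root UID
  containers:
  - name: test-container                 # container name as logged above
    image: busybox                       # assumed; the real test image sets the file mode explicitly
    command: ["sh", "-c", "echo content > /test-volume/file && ls -l /test-volume/file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                     # tmpfs backing, per the spec name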
Mar 16 13:54:41.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:54:41.355: INFO: namespace emptydir-5014 deletion completed in 6.155737503s • [SLOW TEST:15.038 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:54:41.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3693 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Mar 16 13:54:41.473: INFO: Found 0 stateful pods, waiting for 3 Mar 16 13:54:51.478: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 16 13:54:51.478: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 16 13:54:51.478: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Mar 16 13:55:01.479: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 16 13:55:01.479: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 16 13:55:01.479: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 16 13:55:01.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3693 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 16 13:55:01.724: INFO: stderr: "I0316 13:55:01.616878 2584 log.go:172] (0xc000a08630) (0xc000602be0) Create stream\nI0316 13:55:01.616941 2584 log.go:172] (0xc000a08630) (0xc000602be0) Stream added, broadcasting: 1\nI0316 13:55:01.619923 2584 log.go:172] (0xc000a08630) Reply frame received for 1\nI0316 13:55:01.619972 2584 log.go:172] (0xc000a08630) (0xc000602c80) Create stream\nI0316 13:55:01.619987 2584 log.go:172] (0xc000a08630) (0xc000602c80) Stream added, broadcasting: 3\nI0316 13:55:01.621259 2584 log.go:172] (0xc000a08630) Reply frame received for 3\nI0316 13:55:01.621316 2584 log.go:172] (0xc000a08630) (0xc000804000) Create stream\nI0316 13:55:01.621337 2584 log.go:172] (0xc000a08630) (0xc000804000) Stream added, broadcasting: 5\nI0316 13:55:01.622389 2584 log.go:172] 
(0xc000a08630) Reply frame received for 5\nI0316 13:55:01.691940 2584 log.go:172] (0xc000a08630) Data frame received for 5\nI0316 13:55:01.691968 2584 log.go:172] (0xc000804000) (5) Data frame handling\nI0316 13:55:01.691990 2584 log.go:172] (0xc000804000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0316 13:55:01.717273 2584 log.go:172] (0xc000a08630) Data frame received for 3\nI0316 13:55:01.717297 2584 log.go:172] (0xc000602c80) (3) Data frame handling\nI0316 13:55:01.717306 2584 log.go:172] (0xc000602c80) (3) Data frame sent\nI0316 13:55:01.717311 2584 log.go:172] (0xc000a08630) Data frame received for 3\nI0316 13:55:01.717315 2584 log.go:172] (0xc000602c80) (3) Data frame handling\nI0316 13:55:01.717462 2584 log.go:172] (0xc000a08630) Data frame received for 5\nI0316 13:55:01.717473 2584 log.go:172] (0xc000804000) (5) Data frame handling\nI0316 13:55:01.719977 2584 log.go:172] (0xc000a08630) Data frame received for 1\nI0316 13:55:01.720121 2584 log.go:172] (0xc000602be0) (1) Data frame handling\nI0316 13:55:01.720218 2584 log.go:172] (0xc000602be0) (1) Data frame sent\nI0316 13:55:01.720322 2584 log.go:172] (0xc000a08630) (0xc000602be0) Stream removed, broadcasting: 1\nI0316 13:55:01.720443 2584 log.go:172] (0xc000a08630) Go away received\nI0316 13:55:01.721321 2584 log.go:172] (0xc000a08630) (0xc000602be0) Stream removed, broadcasting: 1\nI0316 13:55:01.721348 2584 log.go:172] (0xc000a08630) (0xc000602c80) Stream removed, broadcasting: 3\nI0316 13:55:01.721358 2584 log.go:172] (0xc000a08630) (0xc000804000) Stream removed, broadcasting: 5\n" Mar 16 13:55:01.725: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 16 13:55:01.725: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 16 13:55:11.757: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 16 13:55:21.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3693 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 13:55:21.997: INFO: stderr: "I0316 13:55:21.901485 2604 log.go:172] (0xc00097e370) (0xc000974640) Create stream\nI0316 13:55:21.901541 2604 log.go:172] (0xc00097e370) (0xc000974640) Stream added, broadcasting: 1\nI0316 13:55:21.903617 2604 log.go:172] (0xc00097e370) Reply frame received for 1\nI0316 13:55:21.903661 2604 log.go:172] (0xc00097e370) (0xc000916000) Create stream\nI0316 13:55:21.903678 2604 log.go:172] (0xc00097e370) (0xc000916000) Stream added, broadcasting: 3\nI0316 13:55:21.904796 2604 log.go:172] (0xc00097e370) Reply frame received for 3\nI0316 13:55:21.904843 2604 log.go:172] (0xc00097e370) (0xc0009746e0) Create stream\nI0316 13:55:21.904858 2604 log.go:172] (0xc00097e370) (0xc0009746e0) Stream added, broadcasting: 5\nI0316 13:55:21.906102 2604 log.go:172] (0xc00097e370) Reply frame received for 5\nI0316 13:55:21.992598 2604 log.go:172] (0xc00097e370) Data frame received for 5\nI0316 13:55:21.992633 2604 log.go:172] (0xc0009746e0) (5) Data frame handling\nI0316 13:55:21.992644 2604 log.go:172] (0xc0009746e0) (5) Data frame sent\nI0316 13:55:21.992652 2604 log.go:172] (0xc00097e370) Data frame received for 5\nI0316 13:55:21.992667 2604 log.go:172] (0xc0009746e0) (5) Data frame handling\n+ mv -v 
/tmp/index.html /usr/share/nginx/html/\nI0316 13:55:21.992700 2604 log.go:172] (0xc00097e370) Data frame received for 3\nI0316 13:55:21.992722 2604 log.go:172] (0xc000916000) (3) Data frame handling\nI0316 13:55:21.992740 2604 log.go:172] (0xc000916000) (3) Data frame sent\nI0316 13:55:21.992757 2604 log.go:172] (0xc00097e370) Data frame received for 3\nI0316 13:55:21.992769 2604 log.go:172] (0xc000916000) (3) Data frame handling\nI0316 13:55:21.994200 2604 log.go:172] (0xc00097e370) Data frame received for 1\nI0316 13:55:21.994225 2604 log.go:172] (0xc000974640) (1) Data frame handling\nI0316 13:55:21.994236 2604 log.go:172] (0xc000974640) (1) Data frame sent\nI0316 13:55:21.994247 2604 log.go:172] (0xc00097e370) (0xc000974640) Stream removed, broadcasting: 1\nI0316 13:55:21.994269 2604 log.go:172] (0xc00097e370) Go away received\nI0316 13:55:21.994575 2604 log.go:172] (0xc00097e370) (0xc000974640) Stream removed, broadcasting: 1\nI0316 13:55:21.994596 2604 log.go:172] (0xc00097e370) (0xc000916000) Stream removed, broadcasting: 3\nI0316 13:55:21.994604 2604 log.go:172] (0xc00097e370) (0xc0009746e0) Stream removed, broadcasting: 5\n" Mar 16 13:55:21.997: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 16 13:55:21.997: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 16 13:55:42.016: INFO: Waiting for StatefulSet statefulset-3693/ss2 to complete update STEP: Rolling back to a previous revision Mar 16 13:55:52.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3693 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 16 13:55:52.236: INFO: stderr: "I0316 13:55:52.156876 2625 log.go:172] (0xc00084a000) (0xc000954000) Create stream\nI0316 13:55:52.156947 2625 log.go:172] (0xc00084a000) (0xc000954000) Stream added, broadcasting: 1\nI0316 13:55:52.159293 2625 log.go:172] (0xc00084a000) Reply frame received for 1\nI0316 13:55:52.159329 2625 log.go:172] (0xc00084a000) (0xc000954140) Create stream\nI0316 13:55:52.159339 2625 log.go:172] (0xc00084a000) (0xc000954140) Stream added, broadcasting: 3\nI0316 13:55:52.160295 2625 log.go:172] (0xc00084a000) Reply frame received for 3\nI0316 13:55:52.160322 2625 log.go:172] (0xc00084a000) (0xc0005d8280) Create stream\nI0316 13:55:52.160331 2625 log.go:172] (0xc00084a000) (0xc0005d8280) Stream added, broadcasting: 5\nI0316 13:55:52.161068 2625 log.go:172] (0xc00084a000) Reply frame received for 5\nI0316 13:55:52.201529 2625 log.go:172] (0xc00084a000) Data frame received for 5\nI0316 13:55:52.201551 2625 log.go:172] (0xc0005d8280) (5) Data frame handling\nI0316 13:55:52.201566 2625 log.go:172] (0xc0005d8280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0316 13:55:52.229787 2625 log.go:172] (0xc00084a000) Data frame received for 3\nI0316 13:55:52.229818 2625 log.go:172] (0xc000954140) (3) Data frame handling\nI0316 13:55:52.229831 2625 log.go:172] (0xc000954140) (3) Data frame sent\nI0316 13:55:52.229840 2625 log.go:172] (0xc00084a000) Data frame received for 3\nI0316 13:55:52.229848 2625 log.go:172] (0xc000954140) (3) Data frame handling\nI0316 13:55:52.229887 2625 log.go:172] (0xc00084a000) Data frame received for 5\nI0316 13:55:52.229932 2625 log.go:172] (0xc0005d8280) (5) Data frame handling\nI0316 13:55:52.232006 2625 log.go:172] (0xc00084a000) Data frame received for 1\nI0316 13:55:52.232046 2625 log.go:172] (0xc000954000) (1) Data 
frame handling\nI0316 13:55:52.232067 2625 log.go:172] (0xc000954000) (1) Data frame sent\nI0316 13:55:52.232082 2625 log.go:172] (0xc00084a000) (0xc000954000) Stream removed, broadcasting: 1\nI0316 13:55:52.232348 2625 log.go:172] (0xc00084a000) Go away received\nI0316 13:55:52.232646 2625 log.go:172] (0xc00084a000) (0xc000954000) Stream removed, broadcasting: 1\nI0316 13:55:52.232673 2625 log.go:172] (0xc00084a000) (0xc000954140) Stream removed, broadcasting: 3\nI0316 13:55:52.232687 2625 log.go:172] (0xc00084a000) (0xc0005d8280) Stream removed, broadcasting: 5\n" Mar 16 13:55:52.236: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 16 13:55:52.236: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 16 13:56:02.292: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 16 13:56:12.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3693 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 13:56:12.531: INFO: stderr: "I0316 13:56:12.440985 2645 log.go:172] (0xc0001166e0) (0xc000810640) Create stream\nI0316 13:56:12.441038 2645 log.go:172] (0xc0001166e0) (0xc000810640) Stream added, broadcasting: 1\nI0316 13:56:12.445285 2645 log.go:172] (0xc0001166e0) Reply frame received for 1\nI0316 13:56:12.445351 2645 log.go:172] (0xc0001166e0) (0xc000932000) Create stream\nI0316 13:56:12.445367 2645 log.go:172] (0xc0001166e0) (0xc000932000) Stream added, broadcasting: 3\nI0316 13:56:12.447328 2645 log.go:172] (0xc0001166e0) Reply frame received for 3\nI0316 13:56:12.447365 2645 log.go:172] (0xc0001166e0) (0xc0009320a0) Create stream\nI0316 13:56:12.447376 2645 log.go:172] (0xc0001166e0) (0xc0009320a0) Stream added, broadcasting: 5\nI0316 13:56:12.448417 2645 log.go:172] (0xc0001166e0) Reply frame received for 5\nI0316 13:56:12.524621 2645 log.go:172] (0xc0001166e0) Data frame received for 3\nI0316 13:56:12.524673 2645 log.go:172] (0xc000932000) (3) Data frame handling\nI0316 13:56:12.524712 2645 log.go:172] (0xc000932000) (3) Data frame sent\nI0316 13:56:12.524732 2645 log.go:172] (0xc0001166e0) Data frame received for 3\nI0316 13:56:12.524744 2645 log.go:172] (0xc000932000) (3) Data frame handling\nI0316 13:56:12.524827 2645 log.go:172] (0xc0001166e0) Data frame received for 5\nI0316 13:56:12.524917 2645 log.go:172] (0xc0009320a0) (5) Data frame handling\nI0316 13:56:12.524946 2645 log.go:172] (0xc0009320a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0316 13:56:12.524971 2645 log.go:172] (0xc0001166e0) Data frame received for 5\nI0316 13:56:12.524998 2645 log.go:172] (0xc0009320a0) (5) Data frame handling\nI0316 13:56:12.526792 2645 log.go:172] (0xc0001166e0) Data frame received for 1\nI0316 13:56:12.526821 2645 log.go:172] (0xc000810640) (1) Data frame handling\nI0316 13:56:12.526843 2645 log.go:172] (0xc000810640) (1) Data frame sent\nI0316 13:56:12.526903 2645 log.go:172] (0xc0001166e0) (0xc000810640) Stream removed, broadcasting: 1\nI0316 13:56:12.526943 2645 log.go:172] (0xc0001166e0) Go away received\nI0316 13:56:12.527366 2645 log.go:172] (0xc0001166e0) (0xc000810640) Stream removed, broadcasting: 1\nI0316 13:56:12.527390 2645 log.go:172] (0xc0001166e0) (0xc000932000) Stream removed, broadcasting: 3\nI0316 13:56:12.527412 2645 log.go:172] (0xc0001166e0) (0xc0009320a0) Stream removed, broadcasting: 5\n" Mar 16 13:56:12.531: INFO: stdout: 
"'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 16 13:56:12.531: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 16 13:56:32.552: INFO: Deleting all statefulset in ns statefulset-3693 Mar 16 13:56:32.554: INFO: Scaling statefulset ss2 to 0 Mar 16 13:56:52.586: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 13:56:52.590: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:56:52.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3693" for this suite. Mar 16 13:56:58.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:56:58.700: INFO: namespace statefulset-3693 deletion completed in 6.095242486s • [SLOW TEST:137.344 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:56:58.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-16d17862-78b0-44f2-ac09-96a6f07677b7 STEP: Creating a pod to test consume secrets Mar 16 13:56:58.769: INFO: Waiting up to 5m0s for pod "pod-secrets-a6419fe4-5792-4ce6-a9bd-e770f3083ab8" in namespace "secrets-6158" to be "success or failure" Mar 16 13:56:58.773: INFO: Pod "pod-secrets-a6419fe4-5792-4ce6-a9bd-e770f3083ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127377ms Mar 16 13:57:00.778: INFO: Pod "pod-secrets-a6419fe4-5792-4ce6-a9bd-e770f3083ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00833236s Mar 16 13:57:02.782: INFO: Pod "pod-secrets-a6419fe4-5792-4ce6-a9bd-e770f3083ab8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012420994s STEP: Saw pod success Mar 16 13:57:02.782: INFO: Pod "pod-secrets-a6419fe4-5792-4ce6-a9bd-e770f3083ab8" satisfied condition "success or failure" Mar 16 13:57:02.785: INFO: Trying to get logs from node iruya-worker pod pod-secrets-a6419fe4-5792-4ce6-a9bd-e770f3083ab8 container secret-volume-test: STEP: delete the pod Mar 16 13:57:02.840: INFO: Waiting for pod pod-secrets-a6419fe4-5792-4ce6-a9bd-e770f3083ab8 to disappear Mar 16 13:57:02.857: INFO: Pod pod-secrets-a6419fe4-5792-4ce6-a9bd-e770f3083ab8 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:57:02.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6158" for this suite. Mar 16 13:57:08.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:57:08.962: INFO: namespace secrets-6158 deletion completed in 6.102152163s • [SLOW TEST:10.262 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:57:08.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-d2b67ed6-cb3f-47ad-8132-e906b658ce6e in namespace container-probe-2791 Mar 16 13:57:13.029: INFO: Started pod liveness-d2b67ed6-cb3f-47ad-8132-e906b658ce6e in namespace container-probe-2791 STEP: checking the pod's current state and verifying that restartCount is present Mar 16 13:57:13.032: INFO: Initial restart count of pod liveness-d2b67ed6-cb3f-47ad-8132-e906b658ce6e is 0 Mar 16 13:57:25.060: INFO: Restart count of pod container-probe-2791/liveness-d2b67ed6-cb3f-47ad-8132-e906b658ce6e is now 1 (12.027665088s elapsed) Mar 16 13:57:45.102: INFO: Restart count of pod container-probe-2791/liveness-d2b67ed6-cb3f-47ad-8132-e906b658ce6e is now 2 (32.069704298s elapsed) Mar 16 13:58:05.143: INFO: Restart count of pod container-probe-2791/liveness-d2b67ed6-cb3f-47ad-8132-e906b658ce6e is now 3 (52.111314212s elapsed) Mar 16 13:58:25.190: INFO: Restart count of pod container-probe-2791/liveness-d2b67ed6-cb3f-47ad-8132-e906b658ce6e is now 4 (1m12.157880078s elapsed) Mar 16 13:59:37.519: INFO: Restart count of pod container-probe-2791/liveness-d2b67ed6-cb3f-47ad-8132-e906b658ce6e is now 5 (2m24.486920768s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing 
container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:59:37.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2791" for this suite. Mar 16 13:59:43.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:59:43.656: INFO: namespace container-probe-2791 deletion completed in 6.094117981s • [SLOW TEST:154.693 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:59:43.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-0b2e90ad-7c21-490c-8d05-7222c3bf7b22 STEP: Creating a pod to test consume configMaps Mar 16 13:59:43.736: INFO: Waiting up to 5m0s for pod "pod-configmaps-3320e9fd-a553-4d89-b33c-91d2e7b7558d" in namespace "configmap-5016" to be "success or failure" Mar 16 13:59:43.749: INFO: Pod "pod-configmaps-3320e9fd-a553-4d89-b33c-91d2e7b7558d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.347285ms Mar 16 13:59:45.753: INFO: Pod "pod-configmaps-3320e9fd-a553-4d89-b33c-91d2e7b7558d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017256168s Mar 16 13:59:47.757: INFO: Pod "pod-configmaps-3320e9fd-a553-4d89-b33c-91d2e7b7558d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021385414s STEP: Saw pod success Mar 16 13:59:47.757: INFO: Pod "pod-configmaps-3320e9fd-a553-4d89-b33c-91d2e7b7558d" satisfied condition "success or failure" Mar 16 13:59:47.760: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-3320e9fd-a553-4d89-b33c-91d2e7b7558d container configmap-volume-test: STEP: delete the pod Mar 16 13:59:47.795: INFO: Waiting for pod pod-configmaps-3320e9fd-a553-4d89-b33c-91d2e7b7558d to disappear Mar 16 13:59:47.806: INFO: Pod pod-configmaps-3320e9fd-a553-4d89-b33c-91d2e7b7558d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 13:59:47.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5016" for this suite. 
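[Editor's aside] The ConfigMap test that just finished mounts a single ConfigMap into two separate volumes of one pod. A minimal sketch of that shape follows; all names are illustrative (the test generates UUID-suffixed names), and it assumes a working kubeconfig.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config            # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-two-volumes
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    # Read the same key through both mounts, as the test's
    # configmap-volume-test container does.
    command: ["sh", "-c", "cat /etc/cm-a/data-1 /etc/cm-b/data-1"]
    volumeMounts:
    - { name: cm-a, mountPath: /etc/cm-a }
    - { name: cm-b, mountPath: /etc/cm-b }
  volumes:
  - name: cm-a
    configMap: { name: demo-config }
  - name: cm-b
    configMap: { name: demo-config }
EOF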
Mar 16 13:59:53.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 13:59:53.910: INFO: namespace configmap-5016 deletion completed in 6.099467477s • [SLOW TEST:10.254 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 13:59:53.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-a77d2dc9-7153-41ed-ade7-d7baddfcc04c STEP: Creating configMap with name cm-test-opt-upd-1b433b95-5aa6-40d4-b383-69ae2373dc95 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-a77d2dc9-7153-41ed-ade7-d7baddfcc04c STEP: Updating configmap cm-test-opt-upd-1b433b95-5aa6-40d4-b383-69ae2373dc95 STEP: Creating configMap with name cm-test-opt-create-daf71035-d87e-4e9e-95c7-6b67eb071440 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:00:04.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5479" for this suite. 
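[Editor's aside] The "optional updates" test that follows in the log relies on optional ConfigMap volumes: the pod starts even if the referenced ConfigMap is missing, and the kubelet later reflects creations, updates, and deletions into the mounts. A minimal sketch of that pattern, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-cm-demo
spec:
  containers:
  - name: test
    image: busybox:1.29
    command: ["sh", "-c", "while true; do ls /etc/cm-del /etc/cm-create 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - { name: cm-del, mountPath: /etc/cm-del }
    - { name: cm-create, mountPath: /etc/cm-create }
  volumes:
  - name: cm-del
    configMap:
      name: cm-test-opt-del      # deleted after pod start; pod keeps running
      optional: true
  - name: cm-create
    configMap:
      name: cm-test-opt-create   # created after pod start; kubelet populates the mount
      optional: true
EOF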
Mar 16 14:00:26.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:00:26.247: INFO: namespace configmap-5479 deletion completed in 22.120363779s • [SLOW TEST:32.336 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:00:26.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-b5c10a51-827f-4256-bf78-5b2eeed594c5 Mar 16 14:00:26.345: INFO: Pod name my-hostname-basic-b5c10a51-827f-4256-bf78-5b2eeed594c5: Found 0 pods out of 1 Mar 16 14:00:31.349: INFO: Pod name my-hostname-basic-b5c10a51-827f-4256-bf78-5b2eeed594c5: Found 1 pods out of 1 Mar 16 14:00:31.349: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-b5c10a51-827f-4256-bf78-5b2eeed594c5" are running Mar 16 14:00:31.352: INFO: Pod "my-hostname-basic-b5c10a51-827f-4256-bf78-5b2eeed594c5-6xqc8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-16 14:00:26 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-16 14:00:28 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-16 14:00:28 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-16 14:00:26 +0000 UTC Reason: Message:}]) Mar 16 14:00:31.353: INFO: Trying to dial the pod Mar 16 14:00:36.364: INFO: Controller my-hostname-basic-b5c10a51-827f-4256-bf78-5b2eeed594c5: Got expected result from replica 1 [my-hostname-basic-b5c10a51-827f-4256-bf78-5b2eeed594c5-6xqc8]: "my-hostname-basic-b5c10a51-827f-4256-bf78-5b2eeed594c5-6xqc8", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:00:36.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9097" for this suite. 
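[Editor's aside] The ReplicationController test above creates an RC whose single replica serves its own hostname, then dials each replica and checks the response. A minimal equivalent manifest, with an illustrative name (the test's name carries a UUID suffix); serve-hostname listens on 9376 by default:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: serve-hostname
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376
EOF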
Mar 16 14:00:44.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:00:44.544: INFO: namespace replication-controller-9097 deletion completed in 8.176792676s • [SLOW TEST:18.296 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:00:44.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-1969 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-1969 STEP: Deleting pre-stop pod Mar 16 14:01:00.232: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:01:00.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-1969" for this suite. 
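[Editor's aside] The PreStop test above verifies that a pod's preStop lifecycle hook runs when the pod is deleted; the tester pod's "prestop": 1 counter in the log is incremented by exactly such a callback. A minimal sketch of a pod with a preStop hook follows; the callback URL is purely illustrative (the real test wires up its own server and tester pods):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      preStop:
        exec:
          # Illustrative hook: notify a peer before the container is killed.
          command: ["/bin/sh", "-c", "wget -q -O- http://tester:8080/prestop || true"]
EOF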
Mar 16 14:01:38.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:01:38.420: INFO: namespace prestop-1969 deletion completed in 38.163037592s • [SLOW TEST:53.876 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:01:38.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 16 14:01:42.706: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-40f85974-6008-4a68-9704-9f257d0bc2d3,GenerateName:,Namespace:events-2640,SelfLink:/api/v1/namespaces/events-2640/pods/send-events-40f85974-6008-4a68-9704-9f257d0bc2d3,UID:c6b8ca33-2786-4faf-81d1-245328217e6f,ResourceVersion:168481,Generation:0,CreationTimestamp:2020-03-16 14:01:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 661713866,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nbvc9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbvc9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-nbvc9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002049690} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0020496b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:01:38 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:01:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:01:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:01:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.234,StartTime:2020-03-16 14:01:38 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-03-16 14:01:40 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://1ae0d1b8df94a83ac35a3e5991045e0a11f48f449ee16daa3937c7bc5edc87ec}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Mar 16 14:01:44.712: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 16 14:01:46.716: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:01:46.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2640" for this suite. Mar 16 14:02:24.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:02:24.943: INFO: namespace events-2640 deletion completed in 38.102393711s • [SLOW TEST:46.522 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:02:24.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-f609e4a4-e3dd-49b6-a02e-cecc9d1e6448 STEP: Creating a pod to test consume secrets Mar 16 14:02:25.057: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-090fa416-4edd-4bbf-b41b-39ec8c876b62" in namespace "projected-6134" to be "success or failure" Mar 16 14:02:25.061: INFO: Pod "pod-projected-secrets-090fa416-4edd-4bbf-b41b-39ec8c876b62": Phase="Pending", Reason="", readiness=false. 
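[Editor's aside] The projected-secret test now under way mounts secret keys at remapped paths inside a projected volume. A minimal sketch of that shape, with illustrative names (the test's secret name carries a UUID suffix):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    # The key data-1 appears at the remapped path, not at its own name.
    command: ["cat", "/etc/projected/new-path-data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: demo-secret
          items:
          - key: data-1
            path: new-path-data-1
EOF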
Elapsed: 3.967165ms Mar 16 14:02:27.065: INFO: Pod "pod-projected-secrets-090fa416-4edd-4bbf-b41b-39ec8c876b62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00784944s Mar 16 14:02:29.069: INFO: Pod "pod-projected-secrets-090fa416-4edd-4bbf-b41b-39ec8c876b62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012017197s STEP: Saw pod success Mar 16 14:02:29.069: INFO: Pod "pod-projected-secrets-090fa416-4edd-4bbf-b41b-39ec8c876b62" satisfied condition "success or failure" Mar 16 14:02:29.072: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-090fa416-4edd-4bbf-b41b-39ec8c876b62 container projected-secret-volume-test: STEP: delete the pod Mar 16 14:02:29.097: INFO: Waiting for pod pod-projected-secrets-090fa416-4edd-4bbf-b41b-39ec8c876b62 to disappear Mar 16 14:02:29.099: INFO: Pod pod-projected-secrets-090fa416-4edd-4bbf-b41b-39ec8c876b62 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:02:29.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6134" for this suite. Mar 16 14:02:35.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:02:35.196: INFO: namespace projected-6134 deletion completed in 6.094052547s • [SLOW TEST:10.252 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:02:35.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 16 14:02:35.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-800' Mar 16 14:02:37.652: INFO: stderr: "" Mar 16 14:02:37.652: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Mar 16 14:02:42.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-800 -o json' Mar 16 14:02:42.797: INFO: stderr: "" Mar 16 14:02:42.797: INFO: 
stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-16T14:02:37Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-800\",\n \"resourceVersion\": \"168647\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-800/pods/e2e-test-nginx-pod\",\n \"uid\": \"ce114668-116c-4f61-9fe8-98d048287dc7\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-qwvpm\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-qwvpm\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-qwvpm\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-16T14:02:37Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-16T14:02:40Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-16T14:02:40Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-16T14:02:37Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://c36ddc817860c9e8f41028ae08b9c8fa6ae59067ffb59561d666092c1a6163b6\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-16T14:02:39Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.236\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-16T14:02:37Z\"\n }\n}\n" STEP: replace the image in the pod Mar 16 14:02:42.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-800' Mar 16 14:02:43.049: INFO: stderr: "" Mar 16 14:02:43.049: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Mar 16 14:02:43.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods 
e2e-test-nginx-pod --namespace=kubectl-800' Mar 16 14:02:46.119: INFO: stderr: "" Mar 16 14:02:46.119: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:02:46.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-800" for this suite. Mar 16 14:02:52.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:02:52.219: INFO: namespace kubectl-800 deletion completed in 6.086526672s • [SLOW TEST:17.023 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:02:52.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-5285bdf1-5cf5-4b66-a17d-9af1351c08e9 STEP: Creating a pod to test consume configMaps Mar 16 14:02:52.303: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-00248f65-4c15-40b6-b044-87b9826b8f3c" in namespace "projected-8318" to be "success or failure" Mar 16 14:02:52.326: INFO: Pod "pod-projected-configmaps-00248f65-4c15-40b6-b044-87b9826b8f3c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.289432ms Mar 16 14:02:54.330: INFO: Pod "pod-projected-configmaps-00248f65-4c15-40b6-b044-87b9826b8f3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026476703s Mar 16 14:02:56.334: INFO: Pod "pod-projected-configmaps-00248f65-4c15-40b6-b044-87b9826b8f3c": Phase="Succeeded", Reason="", readiness=false. 
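[Editor's aside] The "Kubectl replace" test above fetches the live pod as JSON, swaps the container image, and feeds the result back through kubectl replace -f -. A rough shell equivalent of that round trip follows; the sed-based image swap is a simplification of what the test does in Go, and it assumes the pod and namespace from the log still exist:

kubectl -n kubectl-800 get pod e2e-test-nginx-pod -o json \
  | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
  | kubectl -n kubectl-800 replace -f -

# Verify the swap took effect, as the test's final assertion does:
kubectl -n kubectl-800 get pod e2e-test-nginx-pod \
  -o jsonpath='{.spec.containers[0].image}'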
Elapsed: 4.030427742s STEP: Saw pod success Mar 16 14:02:56.334: INFO: Pod "pod-projected-configmaps-00248f65-4c15-40b6-b044-87b9826b8f3c" satisfied condition "success or failure" Mar 16 14:02:56.338: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-00248f65-4c15-40b6-b044-87b9826b8f3c container projected-configmap-volume-test: STEP: delete the pod Mar 16 14:02:56.433: INFO: Waiting for pod pod-projected-configmaps-00248f65-4c15-40b6-b044-87b9826b8f3c to disappear Mar 16 14:02:56.436: INFO: Pod pod-projected-configmaps-00248f65-4c15-40b6-b044-87b9826b8f3c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:02:56.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8318" for this suite. Mar 16 14:03:02.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:03:02.577: INFO: namespace projected-8318 deletion completed in 6.138005925s • [SLOW TEST:10.358 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:03:02.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0316 14:03:03.837653 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
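[Editor's aside] The garbage-collector test in progress here deletes a Deployment without orphaning, which corresponds to a cascading (background-propagation) delete: the garbage collector follows ownerReferences and removes the ReplicaSet and its Pods shortly after the Deployment itself, hence the transient "expected 0 rs, got 1 rs" in the log. A rough kubectl equivalent, with an illustrative deployment name:

kubectl create deployment simple-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl delete deployment simple-deployment --cascade=true

# Poll for the dependents to disappear, as the test's
# "wait for all rs to be garbage collected" step does:
kubectl get rs,pods -l app=simple-deployment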
Mar 16 14:03:03.837: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:03:03.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7171" for this suite. Mar 16 14:03:09.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:03:09.940: INFO: namespace gc-7171 deletion completed in 6.099369816s • [SLOW TEST:7.363 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:03:09.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 16 14:03:09.992: INFO: Waiting up to 5m0s for pod "pod-853981b1-c77c-4ccc-91d3-0a23a9882108" in namespace "emptydir-8530" to be "success or failure" Mar 16 14:03:10.002: INFO: Pod "pod-853981b1-c77c-4ccc-91d3-0a23a9882108": Phase="Pending", Reason="", readiness=false. Elapsed: 10.413036ms Mar 16 14:03:12.006: INFO: Pod "pod-853981b1-c77c-4ccc-91d3-0a23a9882108": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014424312s Mar 16 14:03:14.010: INFO: Pod "pod-853981b1-c77c-4ccc-91d3-0a23a9882108": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018489836s STEP: Saw pod success Mar 16 14:03:14.010: INFO: Pod "pod-853981b1-c77c-4ccc-91d3-0a23a9882108" satisfied condition "success or failure" Mar 16 14:03:14.013: INFO: Trying to get logs from node iruya-worker2 pod pod-853981b1-c77c-4ccc-91d3-0a23a9882108 container test-container: STEP: delete the pod Mar 16 14:03:14.034: INFO: Waiting for pod pod-853981b1-c77c-4ccc-91d3-0a23a9882108 to disappear Mar 16 14:03:14.038: INFO: Pod pod-853981b1-c77c-4ccc-91d3-0a23a9882108 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:03:14.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8530" for this suite. Mar 16 14:03:20.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:03:20.153: INFO: namespace emptydir-8530 deletion completed in 6.110604709s • [SLOW TEST:10.212 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:03:20.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0316 14:04:00.355848 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
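[Editor's aside] The orphaning test that follows is the mirror image of the cascading delete above: the ReplicationController is deleted with delete options that orphan its pods, so the pods keep running (the log waits 30 seconds to confirm the garbage collector does not remove them). With kubectl of this era, --cascade=false sets propagationPolicy=Orphan; assuming an RC like the sketch shown earlier:

kubectl delete rc my-hostname-basic --cascade=false

# The pods survive the delete, now without an ownerReference:
kubectl get pods -l name=my-hostname-basic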
Mar 16 14:04:00.355: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:04:00.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8853" for this suite. Mar 16 14:04:10.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:04:10.457: INFO: namespace gc-8853 deletion completed in 10.09798158s • [SLOW TEST:50.304 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:04:10.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 16 14:04:10.539: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
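[Editor's aside] The DaemonSet test starting here exercises the RollingUpdate update strategy. A minimal DaemonSet of that shape follows; the name matches the test's, but the labels and container spec are illustrative. Note that, as the log repeatedly points out, a pod template without a master-taint toleration is skipped on iruya-control-plane:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF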
Mar 16 14:04:10.545: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:10.564: INFO: Number of nodes with available pods: 0 Mar 16 14:04:10.564: INFO: Node iruya-worker is running more than one daemon pod Mar 16 14:04:11.569: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:11.572: INFO: Number of nodes with available pods: 0 Mar 16 14:04:11.572: INFO: Node iruya-worker is running more than one daemon pod Mar 16 14:04:12.569: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:12.572: INFO: Number of nodes with available pods: 0 Mar 16 14:04:12.572: INFO: Node iruya-worker is running more than one daemon pod Mar 16 14:04:13.574: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:13.579: INFO: Number of nodes with available pods: 0 Mar 16 14:04:13.579: INFO: Node iruya-worker is running more than one daemon pod Mar 16 14:04:14.572: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:14.575: INFO: Number of nodes with available pods: 1 Mar 16 14:04:14.575: INFO: Node iruya-worker2 is running more than one daemon pod Mar 16 14:04:15.569: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:15.573: INFO: Number of nodes with available pods: 2 Mar 16 14:04:15.573: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 16 14:04:15.633: INFO: Wrong image for pod: daemon-set-d7w7l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 16 14:04:15.633: INFO: Wrong image for pod: daemon-set-s9zfh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 16 14:04:15.639: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:16.643: INFO: Wrong image for pod: daemon-set-d7w7l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 16 14:04:16.643: INFO: Wrong image for pod: daemon-set-s9zfh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 16 14:04:16.646: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:17.644: INFO: Wrong image for pod: daemon-set-d7w7l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 16 14:04:17.644: INFO: Wrong image for pod: daemon-set-s9zfh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 16 14:04:17.648: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:18.644: INFO: Wrong image for pod: daemon-set-d7w7l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 16 14:04:18.644: INFO: Wrong image for pod: daemon-set-s9zfh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 16 14:04:18.648: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:19.644: INFO: Wrong image for pod: daemon-set-d7w7l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 16 14:04:19.644: INFO: Wrong image for pod: daemon-set-s9zfh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 16 14:04:19.644: INFO: Pod daemon-set-s9zfh is not available Mar 16 14:04:19.648: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:20.644: INFO: Wrong image for pod: daemon-set-d7w7l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 16 14:04:20.644: INFO: Wrong image for pod: daemon-set-s9zfh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 16 14:04:20.644: INFO: Pod daemon-set-s9zfh is not available Mar 16 14:04:20.648: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:21.644: INFO: Wrong image for pod: daemon-set-d7w7l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 16 14:04:21.644: INFO: Wrong image for pod: daemon-set-s9zfh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 16 14:04:21.644: INFO: Pod daemon-set-s9zfh is not available Mar 16 14:04:21.648: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:22.644: INFO: Wrong image for pod: daemon-set-d7w7l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 16 14:04:22.644: INFO: Pod daemon-set-gqw8v is not available Mar 16 14:04:22.648: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:23.644: INFO: Wrong image for pod: daemon-set-d7w7l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 16 14:04:23.644: INFO: Pod daemon-set-gqw8v is not available Mar 16 14:04:23.648: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:24.644: INFO: Wrong image for pod: daemon-set-d7w7l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 16 14:04:24.644: INFO: Pod daemon-set-gqw8v is not available Mar 16 14:04:24.648: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:25.644: INFO: Wrong image for pod: daemon-set-d7w7l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 16 14:04:25.647: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:26.661: INFO: Wrong image for pod: daemon-set-d7w7l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 16 14:04:26.661: INFO: Pod daemon-set-d7w7l is not available Mar 16 14:04:26.665: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:27.644: INFO: Wrong image for pod: daemon-set-d7w7l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 16 14:04:27.644: INFO: Pod daemon-set-d7w7l is not available Mar 16 14:04:27.648: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:28.644: INFO: Wrong image for pod: daemon-set-d7w7l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 16 14:04:28.644: INFO: Pod daemon-set-d7w7l is not available Mar 16 14:04:28.647: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:29.644: INFO: Wrong image for pod: daemon-set-d7w7l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 16 14:04:29.644: INFO: Pod daemon-set-d7w7l is not available Mar 16 14:04:29.648: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:30.643: INFO: Wrong image for pod: daemon-set-d7w7l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 16 14:04:30.643: INFO: Pod daemon-set-d7w7l is not available Mar 16 14:04:30.647: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:31.645: INFO: Wrong image for pod: daemon-set-d7w7l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 16 14:04:31.645: INFO: Pod daemon-set-d7w7l is not available Mar 16 14:04:31.648: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:32.643: INFO: Pod daemon-set-qxbr4 is not available Mar 16 14:04:32.647: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Mar 16 14:04:32.650: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:32.653: INFO: Number of nodes with available pods: 1 Mar 16 14:04:32.653: INFO: Node iruya-worker is running more than one daemon pod Mar 16 14:04:33.658: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:33.661: INFO: Number of nodes with available pods: 1 Mar 16 14:04:33.661: INFO: Node iruya-worker is running more than one daemon pod Mar 16 14:04:34.658: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:34.660: INFO: Number of nodes with available pods: 1 Mar 16 14:04:34.660: INFO: Node iruya-worker is running more than one daemon pod Mar 16 14:04:35.658: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:04:35.662: INFO: Number of nodes with available pods: 2 Mar 16 14:04:35.662: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9125, will wait for the garbage collector to delete the pods Mar 16 14:04:35.737: INFO: Deleting DaemonSet.extensions daemon-set took: 5.984123ms Mar 16 14:04:36.037: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.221297ms Mar 16 14:04:42.240: INFO: Number of nodes with available pods: 0 Mar 16 14:04:42.240: INFO: Number of running nodes: 0, number of available pods: 0 Mar 16 14:04:42.242: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9125/daemonsets","resourceVersion":"169257"},"items":null} Mar 16 14:04:42.245: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9125/pods","resourceVersion":"169257"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:04:42.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9125" for this suite. 
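The churn above is a DaemonSet rolling update: the test swaps the pod template's image from nginx:1.14-alpine to the redis test image and polls until every node runs a new, available pod. A minimal sketch of the kind of object being driven here, assuming hypothetical object and label names (the real test generates its own); it only builds and prints the spec:

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Label key/value are illustrative; the real test generates its own.
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate makes an image change in the template roll out
			// pod by pod, producing the "Wrong image"/"not available" churn
			// visible in the log until every node converges.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine", // updated to the redis test image mid-test
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}
```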
Mar 16 14:04:48.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:04:48.380: INFO: namespace daemonsets-9125 deletion completed in 6.121223249s • [SLOW TEST:37.922 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:04:48.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 16 14:04:48.439: INFO: Waiting up to 5m0s for pod "pod-18c2a2f5-4a90-4e26-8397-a0e1c830cda8" in namespace "emptydir-5580" to be "success or failure" Mar 16 14:04:48.444: INFO: Pod "pod-18c2a2f5-4a90-4e26-8397-a0e1c830cda8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.075086ms Mar 16 14:04:50.448: INFO: Pod "pod-18c2a2f5-4a90-4e26-8397-a0e1c830cda8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009303925s Mar 16 14:04:52.451: INFO: Pod "pod-18c2a2f5-4a90-4e26-8397-a0e1c830cda8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012631976s STEP: Saw pod success Mar 16 14:04:52.451: INFO: Pod "pod-18c2a2f5-4a90-4e26-8397-a0e1c830cda8" satisfied condition "success or failure" Mar 16 14:04:52.454: INFO: Trying to get logs from node iruya-worker pod pod-18c2a2f5-4a90-4e26-8397-a0e1c830cda8 container test-container: STEP: delete the pod Mar 16 14:04:52.506: INFO: Waiting for pod pod-18c2a2f5-4a90-4e26-8397-a0e1c830cda8 to disappear Mar 16 14:04:52.526: INFO: Pod pod-18c2a2f5-4a90-4e26-8397-a0e1c830cda8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:04:52.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5580" for this suite. 
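The emptyDir test above runs a pod that writes a file into a tmpfs-backed emptyDir as a non-root user and verifies the 0666 mode. A rough sketch under assumptions (busybox stand-in image, arbitrary UID, invented mount path and command); not the test's actual pod:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // any non-root UID; the exact value is an assumption
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-tmpfs"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			RestartPolicy:   corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // stand-in for the e2e mounttest image
				Command: []string{"sh", "-c",
					"touch /mnt/test/f && chmod 0666 /mnt/test/f && ls -l /mnt/test/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt/test"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```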
Mar 16 14:04:58.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:04:58.620: INFO: namespace emptydir-5580 deletion completed in 6.090371571s • [SLOW TEST:10.240 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:04:58.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-e4341ada-e1d0-4039-9575-e4a4db60da0a STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-e4341ada-e1d0-4039-9575-e4a4db60da0a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:06:27.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9041" for this suite. 
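The projected-configmap test above creates the pod, updates the ConfigMap, then waits for the kubelet to resync the projected volume so the new data appears in the mounted file, which is why this spec can take over a minute. A sketch of such a pod, with hypothetical container command, key name, and a shortened ConfigMap name (the real one carries a UUID suffix):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-watcher"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "reader",
				Image: "busybox",
				// Re-reads the projected file; updated ConfigMap data shows up
				// here once the kubelet resyncs the volume.
				Command:      []string{"sh", "-c", "while true; do cat /etc/cm/data-1; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/cm"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-configmap-test-upd", // shortened; real name carries a UUID
								},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```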
Mar 16 14:06:49.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:06:49.287: INFO: namespace projected-9041 deletion completed in 22.137286498s • [SLOW TEST:110.667 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:06:49.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 16 14:06:49.328: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51971f93-ea27-4a0c-a1cf-312ea4272eb1" in namespace "projected-2973" to be "success or failure" Mar 16 14:06:49.348: INFO: Pod "downwardapi-volume-51971f93-ea27-4a0c-a1cf-312ea4272eb1": Phase="Pending", Reason="", readiness=false. Elapsed: 19.753251ms Mar 16 14:06:51.355: INFO: Pod "downwardapi-volume-51971f93-ea27-4a0c-a1cf-312ea4272eb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026339208s Mar 16 14:06:53.359: INFO: Pod "downwardapi-volume-51971f93-ea27-4a0c-a1cf-312ea4272eb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030531306s STEP: Saw pod success Mar 16 14:06:53.359: INFO: Pod "downwardapi-volume-51971f93-ea27-4a0c-a1cf-312ea4272eb1" satisfied condition "success or failure" Mar 16 14:06:53.362: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-51971f93-ea27-4a0c-a1cf-312ea4272eb1 container client-container: STEP: delete the pod Mar 16 14:06:53.381: INFO: Waiting for pod downwardapi-volume-51971f93-ea27-4a0c-a1cf-312ea4272eb1 to disappear Mar 16 14:06:53.384: INFO: Pod downwardapi-volume-51971f93-ea27-4a0c-a1cf-312ea4272eb1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:06:53.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2973" for this suite. 
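The downward API volume above exposes limits.cpu through a file; because the container deliberately sets no CPU limit, the kubelet falls back to the node's allocatable CPU, which is what the test asserts. A sketch with invented names and paths (the real test uses its own image and file layout):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-cpu-limit"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				// Note: no resources.limits.cpu set, on purpose.
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									// With no CPU limit on the container, this file
									// falls back to the node's allocatable CPU.
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```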
Mar 16 14:06:59.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:06:59.475: INFO: namespace projected-2973 deletion completed in 6.087086758s • [SLOW TEST:10.187 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:06:59.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 16 14:07:04.129: INFO: Successfully updated pod "labelsupdatee8865f83-89f3-4a7f-9c20-57254a764f65" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:07:06.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1475" for this suite. 
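Here the downward API file tracks metadata.labels instead of a resource field: after the test patches the pod's labels ("Successfully updated pod"), it waits for the kubelet to rewrite the mounted file. A sketch of the relevant wiring, names hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-demo",
			Labels: map[string]string{"key": "value1"}, // patched to value2 mid-test
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "labels",
									// The kubelet rewrites this file when the
									// pod's labels change.
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```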
Mar 16 14:07:28.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:07:28.274: INFO: namespace projected-1475 deletion completed in 22.120571904s • [SLOW TEST:28.799 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:07:28.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 16 14:07:28.402: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5654,SelfLink:/api/v1/namespaces/watch-5654/configmaps/e2e-watch-test-label-changed,UID:0ae9beca-9295-447e-b86e-3528802831e4,ResourceVersion:169715,Generation:0,CreationTimestamp:2020-03-16 14:07:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 16 14:07:28.402: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5654,SelfLink:/api/v1/namespaces/watch-5654/configmaps/e2e-watch-test-label-changed,UID:0ae9beca-9295-447e-b86e-3528802831e4,ResourceVersion:169716,Generation:0,CreationTimestamp:2020-03-16 14:07:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 16 14:07:28.402: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5654,SelfLink:/api/v1/namespaces/watch-5654/configmaps/e2e-watch-test-label-changed,UID:0ae9beca-9295-447e-b86e-3528802831e4,ResourceVersion:169717,Generation:0,CreationTimestamp:2020-03-16 14:07:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 16 14:07:38.431: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5654,SelfLink:/api/v1/namespaces/watch-5654/configmaps/e2e-watch-test-label-changed,UID:0ae9beca-9295-447e-b86e-3528802831e4,ResourceVersion:169738,Generation:0,CreationTimestamp:2020-03-16 14:07:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 16 14:07:38.431: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5654,SelfLink:/api/v1/namespaces/watch-5654/configmaps/e2e-watch-test-label-changed,UID:0ae9beca-9295-447e-b86e-3528802831e4,ResourceVersion:169739,Generation:0,CreationTimestamp:2020-03-16 14:07:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Mar 16 14:07:38.431: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5654,SelfLink:/api/v1/namespaces/watch-5654/configmaps/e2e-watch-test-label-changed,UID:0ae9beca-9295-447e-b86e-3528802831e4,ResourceVersion:169740,Generation:0,CreationTimestamp:2020-03-16 14:07:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:07:38.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5654" for this suite.
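The watch above is opened with a label selector, so events arrive only while the ConfigMap carries the watched label: changing the label away surfaces as DELETED, restoring it as ADDED, exactly as in the dumps. A sketch using current client-go (the 1.15-era client took no context argument); the namespace and selector value here are illustrative:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Only events for configmaps currently carrying the label arrive here;
	// changing the label away yields DELETED, restoring it yields ADDED.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		fmt.Println("Got :", ev.Type)
	}
}
```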
Mar 16 14:07:44.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:07:44.541: INFO: namespace watch-5654 deletion completed in 6.105707966s • [SLOW TEST:16.267 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:07:44.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 16 14:07:49.650: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:07:50.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7437" for this suite. 
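Adoption works through the selector: a live pod labeled name=pod-adoption-release with no controller owner is adopted by the new ReplicaSet, and editing that label afterwards makes the controller release it again. A sketch of such a ReplicaSet; image and replica count are assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "pod-adoption-release"}
	rs := appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			// Any running pod that matches this selector and has no
			// controller owner gets adopted; changing that label on the
			// pod afterwards makes the controller release it.
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rs, "", "  ")
	fmt.Println(string(out))
}
```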
Mar 16 14:08:12.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:08:12.793: INFO: namespace replicaset-7437 deletion completed in 22.113045185s • [SLOW TEST:28.251 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:08:12.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0316 14:08:43.393625 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 16 14:08:43.393: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:08:43.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5776" for this suite. 
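The orphaning above comes from the delete call's propagation policy: with Orphan, the Deployment is removed but the garbage collector is told to leave its ReplicaSet (and therefore its pods) behind, which the test verifies over a 30-second window. A sketch against current client-go (older clients passed *metav1.DeleteOptions and no context); the deployment name is hypothetical:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Orphan propagation deletes the Deployment but tells the garbage
	// collector not to cascade to the ReplicaSet it created.
	orphan := metav1.DeletePropagationOrphan
	err = cs.AppsV1().Deployments("default").Delete(context.TODO(),
		"example-deployment", // hypothetical name
		metav1.DeleteOptions{PropagationPolicy: &orphan})
	if err != nil {
		panic(err)
	}
	fmt.Println("deployment deleted; replicaset orphaned")
}
```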
Mar 16 14:08:49.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:08:49.507: INFO: namespace gc-5776 deletion completed in 6.110818714s • [SLOW TEST:36.714 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:08:49.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-dd332401-e54f-475e-bbb0-b8b2f618fb36 STEP: Creating a pod to test consume configMaps Mar 16 14:08:49.576: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fb4a9b5c-0ee4-45e2-a947-06af67f4b298" in namespace "projected-2012" to be "success or failure" Mar 16 14:08:49.579: INFO: Pod "pod-projected-configmaps-fb4a9b5c-0ee4-45e2-a947-06af67f4b298": Phase="Pending", Reason="", readiness=false. Elapsed: 3.313329ms Mar 16 14:08:51.583: INFO: Pod "pod-projected-configmaps-fb4a9b5c-0ee4-45e2-a947-06af67f4b298": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007729311s Mar 16 14:08:53.588: INFO: Pod "pod-projected-configmaps-fb4a9b5c-0ee4-45e2-a947-06af67f4b298": Phase="Running", Reason="", readiness=true. Elapsed: 4.011933039s Mar 16 14:08:55.592: INFO: Pod "pod-projected-configmaps-fb4a9b5c-0ee4-45e2-a947-06af67f4b298": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016302096s STEP: Saw pod success Mar 16 14:08:55.592: INFO: Pod "pod-projected-configmaps-fb4a9b5c-0ee4-45e2-a947-06af67f4b298" satisfied condition "success or failure" Mar 16 14:08:55.595: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-fb4a9b5c-0ee4-45e2-a947-06af67f4b298 container projected-configmap-volume-test: STEP: delete the pod Mar 16 14:08:55.625: INFO: Waiting for pod pod-projected-configmaps-fb4a9b5c-0ee4-45e2-a947-06af67f4b298 to disappear Mar 16 14:08:55.639: INFO: Pod pod-projected-configmaps-fb4a9b5c-0ee4-45e2-a947-06af67f4b298 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:08:55.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2012" for this suite. 
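defaultMode on a projected volume sets the permission bits on every file it projects, and the test container simply checks the mode it sees. A sketch with an example 0400 mode and invented ConfigMap/pod names (the log does not show the exact mode the test uses):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // example; the conformance test asserts its own value
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-defaultmode"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/cm"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/cm"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						// DefaultMode applies to every file projected into the volume.
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```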
Mar 16 14:09:01.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:09:01.727: INFO: namespace projected-2012 deletion completed in 6.083198316s • [SLOW TEST:12.219 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:09:01.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Mar 16 14:09:01.835: INFO: Waiting up to 5m0s for pod "client-containers-280d94c7-ecb7-4ba2-82d8-d7511ba648ff" in namespace "containers-3018" to be "success or failure" Mar 16 14:09:01.859: INFO: Pod "client-containers-280d94c7-ecb7-4ba2-82d8-d7511ba648ff": Phase="Pending", Reason="", readiness=false. Elapsed: 24.020985ms Mar 16 14:09:03.863: INFO: Pod "client-containers-280d94c7-ecb7-4ba2-82d8-d7511ba648ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028604442s Mar 16 14:09:05.867: INFO: Pod "client-containers-280d94c7-ecb7-4ba2-82d8-d7511ba648ff": Phase="Running", Reason="", readiness=true. Elapsed: 4.032865912s Mar 16 14:09:07.871: INFO: Pod "client-containers-280d94c7-ecb7-4ba2-82d8-d7511ba648ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036579911s STEP: Saw pod success Mar 16 14:09:07.871: INFO: Pod "client-containers-280d94c7-ecb7-4ba2-82d8-d7511ba648ff" satisfied condition "success or failure" Mar 16 14:09:07.874: INFO: Trying to get logs from node iruya-worker2 pod client-containers-280d94c7-ecb7-4ba2-82d8-d7511ba648ff container test-container: STEP: delete the pod Mar 16 14:09:07.906: INFO: Waiting for pod client-containers-280d94c7-ecb7-4ba2-82d8-d7511ba648ff to disappear Mar 16 14:09:07.927: INFO: Pod client-containers-280d94c7-ecb7-4ba2-82d8-d7511ba648ff no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:09:07.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3018" for this suite. 
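Setting a container's .Command replaces the image's ENTRYPOINT, which is all this test verifies; a later spec in this log does the same for Args, which replaces the image's CMD. A minimal sketch with an invented echo command:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "override-entrypoint"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Command replaces the image's ENTRYPOINT;
				// Args (not set here) would replace its CMD.
				Command: []string{"/bin/echo", "overridden", "entrypoint"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```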
Mar 16 14:09:13.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:09:14.076: INFO: namespace containers-3018 deletion completed in 6.145248148s • [SLOW TEST:12.349 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:09:14.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:09:20.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5491" for this suite. Mar 16 14:09:26.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:09:26.569: INFO: namespace namespaces-5491 deletion completed in 6.135629185s STEP: Destroying namespace "nsdeletetest-7946" for this suite. Mar 16 14:09:26.571: INFO: Namespace nsdeletetest-7946 was already deleted STEP: Destroying namespace "nsdeletetest-7522" for this suite. 
Mar 16 14:09:32.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:09:32.675: INFO: namespace nsdeletetest-7522 deletion completed in 6.10419269s • [SLOW TEST:18.599 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:09:32.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Mar 16 14:09:32.775: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 16 14:09:32.782: INFO: Waiting for terminating namespaces to be deleted... Mar 16 14:09:32.785: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Mar 16 14:09:32.790: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 16 14:09:32.790: INFO: Container kube-proxy ready: true, restart count 0 Mar 16 14:09:32.790: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 16 14:09:32.790: INFO: Container kindnet-cni ready: true, restart count 0 Mar 16 14:09:32.790: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Mar 16 14:09:32.795: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Mar 16 14:09:32.795: INFO: Container kindnet-cni ready: true, restart count 0 Mar 16 14:09:32.795: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Mar 16 14:09:32.795: INFO: Container kube-proxy ready: true, restart count 0 Mar 16 14:09:32.795: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Mar 16 14:09:32.795: INFO: Container coredns ready: true, restart count 0 Mar 16 14:09:32.795: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Mar 16 14:09:32.795: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 Mar 16 14:09:32.856: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 Mar 16 14:09:32.856: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2 Mar 16 14:09:32.856: INFO: Pod kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker Mar 16 14:09:32.856: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 Mar 16 14:09:32.856: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker Mar 16 14:09:32.856: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-a7361184-1960-4452-88e2-93aadbd39478.15fcce185a4a2a56], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4903/filler-pod-a7361184-1960-4452-88e2-93aadbd39478 to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-a7361184-1960-4452-88e2-93aadbd39478.15fcce18d7acdedc], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-a7361184-1960-4452-88e2-93aadbd39478.15fcce191a247218], Reason = [Created], Message = [Created container filler-pod-a7361184-1960-4452-88e2-93aadbd39478] STEP: Considering event: Type = [Normal], Name = [filler-pod-a7361184-1960-4452-88e2-93aadbd39478.15fcce19337431c6], Reason = [Started], Message = [Started container filler-pod-a7361184-1960-4452-88e2-93aadbd39478] STEP: Considering event: Type = [Normal], Name = [filler-pod-e4a4b9fa-1a9f-41b5-97f7-2cced0af6f98.15fcce1857678c9d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4903/filler-pod-e4a4b9fa-1a9f-41b5-97f7-2cced0af6f98 to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-e4a4b9fa-1a9f-41b5-97f7-2cced0af6f98.15fcce18a2779bb8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e4a4b9fa-1a9f-41b5-97f7-2cced0af6f98.15fcce1904287e4a], Reason = [Created], Message = [Created container filler-pod-e4a4b9fa-1a9f-41b5-97f7-2cced0af6f98] STEP: Considering event: Type = [Normal], Name = [filler-pod-e4a4b9fa-1a9f-41b5-97f7-2cced0af6f98.15fcce1923324024], Reason = [Started], Message = [Started container filler-pod-e4a4b9fa-1a9f-41b5-97f7-2cced0af6f98] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fcce19c18fe52a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:09:40.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4903" for this suite.
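The predicate being validated sums declared CPU requests per node against allocatable capacity; the filler pods absorb nearly all of it, so the extra pod fails with the "Insufficient cpu" FailedScheduling event quoted above. A sketch of a pod that participates in that accounting; the request value is illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					// The scheduler sums requests (not actual usage) per node;
					// once allocatable CPU is exhausted, further pods fail
					// with an "Insufficient cpu" event.
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("600m"), // illustrative value
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```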
Mar 16 14:09:48.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:09:48.120: INFO: namespace sched-pred-4903 deletion completed in 8.086859122s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:15.445 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:09:48.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-76m6 STEP: Creating a pod to test atomic-volume-subpath Mar 16 14:09:48.266: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-76m6" in namespace "subpath-508" to be "success or failure" Mar 16 14:09:48.354: INFO: Pod "pod-subpath-test-projected-76m6": Phase="Pending", Reason="", readiness=false. Elapsed: 88.110065ms Mar 16 14:09:50.359: INFO: Pod "pod-subpath-test-projected-76m6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092831796s Mar 16 14:09:52.363: INFO: Pod "pod-subpath-test-projected-76m6": Phase="Running", Reason="", readiness=true. Elapsed: 4.096950032s Mar 16 14:09:54.367: INFO: Pod "pod-subpath-test-projected-76m6": Phase="Running", Reason="", readiness=true. Elapsed: 6.10106259s Mar 16 14:09:56.370: INFO: Pod "pod-subpath-test-projected-76m6": Phase="Running", Reason="", readiness=true. Elapsed: 8.104270348s Mar 16 14:09:58.373: INFO: Pod "pod-subpath-test-projected-76m6": Phase="Running", Reason="", readiness=true. Elapsed: 10.107268884s Mar 16 14:10:00.377: INFO: Pod "pod-subpath-test-projected-76m6": Phase="Running", Reason="", readiness=true. Elapsed: 12.111063424s Mar 16 14:10:02.380: INFO: Pod "pod-subpath-test-projected-76m6": Phase="Running", Reason="", readiness=true. Elapsed: 14.114179984s Mar 16 14:10:04.384: INFO: Pod "pod-subpath-test-projected-76m6": Phase="Running", Reason="", readiness=true. Elapsed: 16.118514332s Mar 16 14:10:06.389: INFO: Pod "pod-subpath-test-projected-76m6": Phase="Running", Reason="", readiness=true. Elapsed: 18.123405653s Mar 16 14:10:08.392: INFO: Pod "pod-subpath-test-projected-76m6": Phase="Running", Reason="", readiness=true. Elapsed: 20.126598446s Mar 16 14:10:10.397: INFO: Pod "pod-subpath-test-projected-76m6": Phase="Running", Reason="", readiness=true. Elapsed: 22.131081894s Mar 16 14:10:12.401: INFO: Pod "pod-subpath-test-projected-76m6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.135480846s STEP: Saw pod success Mar 16 14:10:12.401: INFO: Pod "pod-subpath-test-projected-76m6" satisfied condition "success or failure" Mar 16 14:10:12.404: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-76m6 container test-container-subpath-projected-76m6: STEP: delete the pod Mar 16 14:10:12.445: INFO: Waiting for pod pod-subpath-test-projected-76m6 to disappear Mar 16 14:10:12.449: INFO: Pod pod-subpath-test-projected-76m6 no longer exists STEP: Deleting pod pod-subpath-test-projected-76m6 Mar 16 14:10:12.449: INFO: Deleting pod "pod-subpath-test-projected-76m6" in namespace "subpath-508" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:10:12.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-508" for this suite. Mar 16 14:10:18.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:10:18.627: INFO: namespace subpath-508 deletion completed in 6.173043663s • [SLOW TEST:30.506 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:10:18.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-ldxw STEP: Creating a pod to test atomic-volume-subpath Mar 16 14:10:18.715: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ldxw" in namespace "subpath-7665" to be "success or failure" Mar 16 14:10:18.719: INFO: Pod "pod-subpath-test-configmap-ldxw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.347779ms Mar 16 14:10:20.723: INFO: Pod "pod-subpath-test-configmap-ldxw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008449308s Mar 16 14:10:22.728: INFO: Pod "pod-subpath-test-configmap-ldxw": Phase="Running", Reason="", readiness=true. Elapsed: 4.01288719s Mar 16 14:10:24.731: INFO: Pod "pod-subpath-test-configmap-ldxw": Phase="Running", Reason="", readiness=true. Elapsed: 6.016710871s Mar 16 14:10:26.735: INFO: Pod "pod-subpath-test-configmap-ldxw": Phase="Running", Reason="", readiness=true. Elapsed: 8.020361202s Mar 16 14:10:28.739: INFO: Pod "pod-subpath-test-configmap-ldxw": Phase="Running", Reason="", readiness=true. Elapsed: 10.024388237s Mar 16 14:10:30.744: INFO: Pod "pod-subpath-test-configmap-ldxw": Phase="Running", Reason="", readiness=true. Elapsed: 12.028849386s Mar 16 14:10:32.747: INFO: Pod "pod-subpath-test-configmap-ldxw": Phase="Running", Reason="", readiness=true. Elapsed: 14.032143515s Mar 16 14:10:34.751: INFO: Pod "pod-subpath-test-configmap-ldxw": Phase="Running", Reason="", readiness=true. Elapsed: 16.036517547s Mar 16 14:10:36.755: INFO: Pod "pod-subpath-test-configmap-ldxw": Phase="Running", Reason="", readiness=true. Elapsed: 18.040573431s Mar 16 14:10:38.760: INFO: Pod "pod-subpath-test-configmap-ldxw": Phase="Running", Reason="", readiness=true. Elapsed: 20.044927016s Mar 16 14:10:40.764: INFO: Pod "pod-subpath-test-configmap-ldxw": Phase="Running", Reason="", readiness=true. Elapsed: 22.049481384s Mar 16 14:10:42.768: INFO: Pod "pod-subpath-test-configmap-ldxw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.053021345s STEP: Saw pod success Mar 16 14:10:42.768: INFO: Pod "pod-subpath-test-configmap-ldxw" satisfied condition "success or failure" Mar 16 14:10:42.771: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-ldxw container test-container-subpath-configmap-ldxw: STEP: delete the pod Mar 16 14:10:42.792: INFO: Waiting for pod pod-subpath-test-configmap-ldxw to disappear Mar 16 14:10:42.796: INFO: Pod pod-subpath-test-configmap-ldxw no longer exists STEP: Deleting pod pod-subpath-test-configmap-ldxw Mar 16 14:10:42.796: INFO: Deleting pod "pod-subpath-test-configmap-ldxw" in namespace "subpath-7665" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:10:42.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7665" for this suite.
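Both subpath specs mount a single entry of an atomically-updated volume via volumeMounts[].subPath; the ~24 seconds of Running polls above is the pod repeatedly reading through the subpath while the atomic writer updates the volume underneath. A sketch with hypothetical key, ConfigMap, and command (the e2e pods use their own test image and paths):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /probe-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "cm",
					MountPath: "/probe-volume",
					// SubPath mounts a single entry of the (atomically updated)
					// configmap volume rather than the whole directory.
					SubPath: "data-1", // hypothetical key name
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```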
Mar 16 14:10:48.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:10:48.900: INFO: namespace subpath-7665 deletion completed in 6.09957735s • [SLOW TEST:30.272 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:10:48.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Mar 16 14:10:49.110: INFO: Waiting up to 5m0s for pod "client-containers-f6d467bd-b28f-4620-9949-8f5fc5c3e595" in namespace "containers-7527" to be "success or failure" Mar 16 14:10:49.139: INFO: Pod "client-containers-f6d467bd-b28f-4620-9949-8f5fc5c3e595": Phase="Pending", Reason="", readiness=false. Elapsed: 28.014943ms Mar 16 14:10:51.421: INFO: Pod "client-containers-f6d467bd-b28f-4620-9949-8f5fc5c3e595": Phase="Pending", Reason="", readiness=false. Elapsed: 2.310958982s Mar 16 14:10:53.426: INFO: Pod "client-containers-f6d467bd-b28f-4620-9949-8f5fc5c3e595": Phase="Running", Reason="", readiness=true. Elapsed: 4.315344575s Mar 16 14:10:55.430: INFO: Pod "client-containers-f6d467bd-b28f-4620-9949-8f5fc5c3e595": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.319824284s STEP: Saw pod success Mar 16 14:10:55.430: INFO: Pod "client-containers-f6d467bd-b28f-4620-9949-8f5fc5c3e595" satisfied condition "success or failure" Mar 16 14:10:55.434: INFO: Trying to get logs from node iruya-worker pod client-containers-f6d467bd-b28f-4620-9949-8f5fc5c3e595 container test-container: STEP: delete the pod Mar 16 14:10:55.451: INFO: Waiting for pod client-containers-f6d467bd-b28f-4620-9949-8f5fc5c3e595 to disappear Mar 16 14:10:55.456: INFO: Pod client-containers-f6d467bd-b28f-4620-9949-8f5fc5c3e595 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:10:55.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7527" for this suite. 
Mar 16 14:11:01.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:11:01.582: INFO: namespace containers-7527 deletion completed in 6.123473751s • [SLOW TEST:12.682 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:11:01.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 16 14:11:01.668: INFO: Waiting up to 5m0s for pod "downward-api-e3fd6e04-6c7c-4eff-981d-6c4e1da96a21" in namespace "downward-api-1848" to be "success or failure" Mar 16 14:11:01.672: INFO: Pod "downward-api-e3fd6e04-6c7c-4eff-981d-6c4e1da96a21": Phase="Pending", Reason="", readiness=false. Elapsed: 3.849076ms Mar 16 14:11:03.702: INFO: Pod "downward-api-e3fd6e04-6c7c-4eff-981d-6c4e1da96a21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034017807s Mar 16 14:11:05.706: INFO: Pod "downward-api-e3fd6e04-6c7c-4eff-981d-6c4e1da96a21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038525403s STEP: Saw pod success Mar 16 14:11:05.706: INFO: Pod "downward-api-e3fd6e04-6c7c-4eff-981d-6c4e1da96a21" satisfied condition "success or failure" Mar 16 14:11:05.710: INFO: Trying to get logs from node iruya-worker2 pod downward-api-e3fd6e04-6c7c-4eff-981d-6c4e1da96a21 container dapi-container: STEP: delete the pod Mar 16 14:11:05.830: INFO: Waiting for pod downward-api-e3fd6e04-6c7c-4eff-981d-6c4e1da96a21 to disappear Mar 16 14:11:05.845: INFO: Pod downward-api-e3fd6e04-6c7c-4eff-981d-6c4e1da96a21 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:11:05.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1848" for this suite. 
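Same defaulting rule as the volume-based downward API tests, but delivered through environment variables: a resourceFieldRef with no declared limit resolves to node allocatable, which is what the test asserts in the container's output. A sketch, names invented:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-defaults"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep LIMIT"},
				// No limits set: both variables fall back to node allocatable.
				Env: []corev1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"}}},
					{Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"}}},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```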
Mar 16 14:11:11.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:11:11.953: INFO: namespace downward-api-1848 deletion completed in 6.103573113s • [SLOW TEST:10.370 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:11:11.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 16 14:11:12.130: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b5386760-da42-4053-8ebe-e5b800059505" in namespace "projected-9149" to be "success or failure" Mar 16 14:11:12.355: INFO: Pod "downwardapi-volume-b5386760-da42-4053-8ebe-e5b800059505": Phase="Pending", Reason="", readiness=false. Elapsed: 225.142866ms Mar 16 14:11:14.359: INFO: Pod "downwardapi-volume-b5386760-da42-4053-8ebe-e5b800059505": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2288998s Mar 16 14:11:16.363: INFO: Pod "downwardapi-volume-b5386760-da42-4053-8ebe-e5b800059505": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.233246792s STEP: Saw pod success Mar 16 14:11:16.363: INFO: Pod "downwardapi-volume-b5386760-da42-4053-8ebe-e5b800059505" satisfied condition "success or failure" Mar 16 14:11:16.366: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-b5386760-da42-4053-8ebe-e5b800059505 container client-container: STEP: delete the pod Mar 16 14:11:16.404: INFO: Waiting for pod downwardapi-volume-b5386760-da42-4053-8ebe-e5b800059505 to disappear Mar 16 14:11:16.432: INFO: Pod downwardapi-volume-b5386760-da42-4053-8ebe-e5b800059505 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:11:16.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9149" for this suite. 
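In the projected downwardAPI spec above, the memory limit is surfaced as a file through a projected volume rather than as an environment variable. A sketch of the volume definition — path and names are assumptions; note that a ResourceFieldRef used inside a volume must name the container, since a pod may have several:

package sketches

import corev1 "k8s.io/api/core/v1"

var podInfoVolume = corev1.Volume{
	Name: "podinfo",
	VolumeSource: corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				DownwardAPI: &corev1.DownwardAPIProjection{
					Items: []corev1.DownwardAPIVolumeFile{{
						Path: "memory_limit", // readable at <mountPath>/memory_limit
						ResourceFieldRef: &corev1.ResourceFieldSelector{
							ContainerName: "client-container",
							Resource:      "limits.memory",
						},
					}},
				},
			}},
		},
	},
}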
Mar 16 14:11:22.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:11:22.565: INFO: namespace projected-9149 deletion completed in 6.129924251s • [SLOW TEST:10.612 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:11:22.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Mar 16 14:11:26.663: INFO: Pod pod-hostip-3ca2a77c-3940-40b9-ab3b-58b141d83dc8 has hostIP: 172.17.0.5 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:11:26.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7181" for this suite. 
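The hostIP spec above simply reads the pod's status once it has been scheduled. A sketch with client-go — the context-taking Get signature is from current client-go; the v1.15-era client used in this run took only a name and options:

package sketches

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hostIP returns the IP of the node the pod landed on.
// Status.HostIP stays empty until the pod is scheduled.
func hostIP(ctx context.Context, c kubernetes.Interface, ns, name string) (string, error) {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	return pod.Status.HostIP, nil
}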
Mar 16 14:11:48.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:11:48.763: INFO: namespace pods-7181 deletion completed in 22.095852851s • [SLOW TEST:26.197 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:11:48.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Mar 16 14:11:48.830: INFO: Waiting up to 5m0s for pod "client-containers-0e3b26c2-02af-470c-887b-6ec8e438572c" in namespace "containers-6962" to be "success or failure" Mar 16 14:11:48.835: INFO: Pod "client-containers-0e3b26c2-02af-470c-887b-6ec8e438572c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204725ms Mar 16 14:11:50.838: INFO: Pod "client-containers-0e3b26c2-02af-470c-887b-6ec8e438572c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007592264s Mar 16 14:11:52.843: INFO: Pod "client-containers-0e3b26c2-02af-470c-887b-6ec8e438572c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012236441s STEP: Saw pod success Mar 16 14:11:52.843: INFO: Pod "client-containers-0e3b26c2-02af-470c-887b-6ec8e438572c" satisfied condition "success or failure" Mar 16 14:11:52.845: INFO: Trying to get logs from node iruya-worker2 pod client-containers-0e3b26c2-02af-470c-887b-6ec8e438572c container test-container: STEP: delete the pod Mar 16 14:11:52.958: INFO: Waiting for pod client-containers-0e3b26c2-02af-470c-887b-6ec8e438572c to disappear Mar 16 14:11:53.014: INFO: Pod client-containers-0e3b26c2-02af-470c-887b-6ec8e438572c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:11:53.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6962" for this suite. 
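This is the companion case to the earlier Args-only spec: setting both Command and Args replaces the image's ENTRYPOINT and CMD respectively, the "override all" the test names. A sketch of the container fragment, with illustrative values:

package sketches

import corev1 "k8s.io/api/core/v1"

var overrideAllContainer = corev1.Container{
	Name:    "test-container",
	Image:   "busybox",
	Command: []string{"/bin/echo"},        // replaces the image's ENTRYPOINT
	Args:    []string{"override", "all"},  // replaces the image's CMD
}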
Mar 16 14:11:59.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:11:59.173: INFO: namespace containers-6962 deletion completed in 6.13306192s • [SLOW TEST:10.409 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:11:59.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-30c1f3d3-43c8-4291-a9f3-e8b0082b675d STEP: Creating a pod to test consume configMaps Mar 16 14:11:59.291: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-68af3169-54b0-49d2-94fd-a7718be6b1d8" in namespace "projected-972" to be "success or failure" Mar 16 14:11:59.308: INFO: Pod "pod-projected-configmaps-68af3169-54b0-49d2-94fd-a7718be6b1d8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.843924ms Mar 16 14:12:01.312: INFO: Pod "pod-projected-configmaps-68af3169-54b0-49d2-94fd-a7718be6b1d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020502306s Mar 16 14:12:03.316: INFO: Pod "pod-projected-configmaps-68af3169-54b0-49d2-94fd-a7718be6b1d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024734845s STEP: Saw pod success Mar 16 14:12:03.316: INFO: Pod "pod-projected-configmaps-68af3169-54b0-49d2-94fd-a7718be6b1d8" satisfied condition "success or failure" Mar 16 14:12:03.319: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-68af3169-54b0-49d2-94fd-a7718be6b1d8 container projected-configmap-volume-test: STEP: delete the pod Mar 16 14:12:03.351: INFO: Waiting for pod pod-projected-configmaps-68af3169-54b0-49d2-94fd-a7718be6b1d8 to disappear Mar 16 14:12:03.355: INFO: Pod pod-projected-configmaps-68af3169-54b0-49d2-94fd-a7718be6b1d8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:12:03.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-972" for this suite. 
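The projected-ConfigMap spec above exercises two options at once: Items remaps a ConfigMap key to a different file path inside the volume, and Mode sets that file's permission bits (file modes are why the test carries [LinuxOnly]). A sketch with assumed key and path names:

package sketches

import corev1 "k8s.io/api/core/v1"

var itemMode int32 = 0400 // r-------- on the projected file

var configMapVolume = corev1.Volume{
	Name: "projected-configmap-volume",
	VolumeSource: corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				ConfigMap: &corev1.ConfigMapProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
					Items: []corev1.KeyToPath{{
						Key:  "data-1",         // key in the ConfigMap
						Path: "path/to/data-2", // file name inside the volume
						Mode: &itemMode,
					}},
				},
			}},
		},
	},
}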
Mar 16 14:12:09.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:12:09.471: INFO: namespace projected-972 deletion completed in 6.112605591s • [SLOW TEST:10.297 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:12:09.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Mar 16 14:12:16.500: INFO: 0 pods remaining Mar 16 14:12:16.500: INFO: 0 pods has nil DeletionTimestamp Mar 16 14:12:16.500: INFO: Mar 16 14:12:17.495: INFO: 0 pods remaining Mar 16 14:12:17.495: INFO: 0 pods has nil DeletionTimestamp Mar 16 14:12:17.495: INFO: STEP: Gathering metrics W0316 14:12:18.037708 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 16 14:12:18.037: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:12:18.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-983" for this suite. 
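The garbage-collector spec above deletes a replication controller with a propagation policy that keeps the owner visible until its dependents are gone: with foreground deletion the RC remains (carrying a deletionTimestamp and the foregroundDeletion finalizer) until all its pods are deleted, which is exactly what the test asserts. A sketch with current client-go signatures — the v1.15-era client took a *metav1.DeleteOptions and no context:

package sketches

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func deleteRCForeground(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	// DeletePropagationForeground: dependents are deleted first, then the owner.
	policy := metav1.DeletePropagationForeground
	return c.CoreV1().ReplicationControllers(ns).Delete(ctx, name,
		metav1.DeleteOptions{PropagationPolicy: &policy})
}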
Mar 16 14:12:24.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:12:24.573: INFO: namespace gc-983 deletion completed in 6.53324997s • [SLOW TEST:15.101 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:12:24.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Mar 16 14:12:25.066: INFO: Waiting up to 5m0s for pod "var-expansion-5d2b73dd-a99e-4448-8870-088021958fff" in namespace "var-expansion-1857" to be "success or failure" Mar 16 14:12:25.075: INFO: Pod "var-expansion-5d2b73dd-a99e-4448-8870-088021958fff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.519882ms Mar 16 14:12:27.079: INFO: Pod "var-expansion-5d2b73dd-a99e-4448-8870-088021958fff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012935376s Mar 16 14:12:29.084: INFO: Pod "var-expansion-5d2b73dd-a99e-4448-8870-088021958fff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017771467s STEP: Saw pod success Mar 16 14:12:29.084: INFO: Pod "var-expansion-5d2b73dd-a99e-4448-8870-088021958fff" satisfied condition "success or failure" Mar 16 14:12:29.087: INFO: Trying to get logs from node iruya-worker pod var-expansion-5d2b73dd-a99e-4448-8870-088021958fff container dapi-container: STEP: delete the pod Mar 16 14:12:29.112: INFO: Waiting for pod var-expansion-5d2b73dd-a99e-4448-8870-088021958fff to disappear Mar 16 14:12:29.117: INFO: Pod var-expansion-5d2b73dd-a99e-4448-8870-088021958fff no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:12:29.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1857" for this suite. 
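Variable expansion here is the kubelet's own $(VAR) substitution: references to environment variables defined on the container are resolved in command and args before the container starts, and unresolvable references are left verbatim. A sketch of such a pod, with illustrative values:

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var expansionPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-example"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:  "dapi-container",
			Image: "busybox",
			// The kubelet rewrites $(TEST_VAR) to "test-value" before the
			// shell ever runs, so the container prints the expanded value.
			Command: []string{"sh", "-c", "echo expanded: $(TEST_VAR)"},
			Env:     []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
		}},
	},
}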
Mar 16 14:12:35.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:12:35.214: INFO: namespace var-expansion-1857 deletion completed in 6.093684556s • [SLOW TEST:10.641 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:12:35.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 16 14:12:35.268: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a1f7a9c-6b0f-47e1-b28b-8d720a058929" in namespace "downward-api-8904" to be "success or failure" Mar 16 14:12:35.308: INFO: Pod "downwardapi-volume-0a1f7a9c-6b0f-47e1-b28b-8d720a058929": Phase="Pending", Reason="", readiness=false. Elapsed: 39.779095ms Mar 16 14:12:37.318: INFO: Pod "downwardapi-volume-0a1f7a9c-6b0f-47e1-b28b-8d720a058929": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050352139s Mar 16 14:12:39.323: INFO: Pod "downwardapi-volume-0a1f7a9c-6b0f-47e1-b28b-8d720a058929": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054711039s STEP: Saw pod success Mar 16 14:12:39.323: INFO: Pod "downwardapi-volume-0a1f7a9c-6b0f-47e1-b28b-8d720a058929" satisfied condition "success or failure" Mar 16 14:12:39.326: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-0a1f7a9c-6b0f-47e1-b28b-8d720a058929 container client-container: STEP: delete the pod Mar 16 14:12:39.339: INFO: Waiting for pod downwardapi-volume-0a1f7a9c-6b0f-47e1-b28b-8d720a058929 to disappear Mar 16 14:12:39.357: INFO: Pod downwardapi-volume-0a1f7a9c-6b0f-47e1-b28b-8d720a058929 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:12:39.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8904" for this suite. 
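DefaultMode applies one permission setting to every file in the volume that does not carry a per-item Mode, and the spec above asserts the projected files come out with those bits. A sketch of a downward API volume with a 0400 default — field names are from k8s.io/api; the item path is illustrative:

package sketches

import corev1 "k8s.io/api/core/v1"

var defaultMode int32 = 0400

var downwardVolume = corev1.Volume{
	Name: "podinfo",
	VolumeSource: corev1.VolumeSource{
		DownwardAPI: &corev1.DownwardAPIVolumeSource{
			DefaultMode: &defaultMode, // applied to every item without its own Mode
			Items: []corev1.DownwardAPIVolumeFile{{
				Path:     "podname",
				FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
			}},
		},
	},
}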
Mar 16 14:12:45.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:12:45.456: INFO: namespace downward-api-8904 deletion completed in 6.095281695s • [SLOW TEST:10.241 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:12:45.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-06d582e1-7848-43b7-971c-1b163cb5cb29 in namespace container-probe-3086 Mar 16 14:12:49.534: INFO: Started pod test-webserver-06d582e1-7848-43b7-971c-1b163cb5cb29 in namespace container-probe-3086 STEP: checking the pod's current state and verifying that restartCount is present Mar 16 14:12:49.538: INFO: Initial restart count of pod test-webserver-06d582e1-7848-43b7-971c-1b163cb5cb29 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:16:50.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3086" for this suite. 
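A liveness probe that keeps succeeding must never trigger a restart, which is why the spec above watches restartCount stay at 0 for four minutes before tearing down. A sketch of such a probe — port and path are illustrative, and note the embedded field is named Handler in the v1.15 API exercised by this run (it became ProbeHandler in later releases):

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

var healthzProbe = &corev1.Probe{
	Handler: corev1.Handler{
		HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
	},
	InitialDelaySeconds: 15, // give the server time to come up
	PeriodSeconds:       10,
	FailureThreshold:    3, // restart only after 3 consecutive failures
}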
Mar 16 14:16:56.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:16:56.980: INFO: namespace container-probe-3086 deletion completed in 6.093417193s • [SLOW TEST:251.524 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:16:56.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-4078 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4078 to expose endpoints map[] Mar 16 14:16:57.114: INFO: Get endpoints failed (13.281881ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 16 14:16:58.118: INFO: successfully validated that service multi-endpoint-test in namespace services-4078 exposes endpoints map[] (1.01704045s elapsed) STEP: Creating pod pod1 in namespace services-4078 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4078 to expose endpoints map[pod1:[100]] Mar 16 14:17:01.173: INFO: successfully validated that service multi-endpoint-test in namespace services-4078 exposes endpoints map[pod1:[100]] (3.048287298s elapsed) STEP: Creating pod pod2 in namespace services-4078 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4078 to expose endpoints map[pod1:[100] pod2:[101]] Mar 16 14:17:04.246: INFO: successfully validated that service multi-endpoint-test in namespace services-4078 exposes endpoints map[pod1:[100] pod2:[101]] (3.067922264s elapsed) STEP: Deleting pod pod1 in namespace services-4078 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4078 to expose endpoints map[pod2:[101]] Mar 16 14:17:05.272: INFO: successfully validated that service multi-endpoint-test in namespace services-4078 exposes endpoints map[pod2:[101]] (1.021289783s elapsed) STEP: Deleting pod pod2 in namespace services-4078 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4078 to expose endpoints map[] Mar 16 14:17:06.287: INFO: successfully validated that service multi-endpoint-test in namespace services-4078 exposes endpoints map[] (1.010186809s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:17:06.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4078" for this suite. 
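The endpoints maps in the log above (pod1:[100], pod2:[101]) are the targetPorts the service resolves to as matching pods come and go. A sketch of a two-port service of this shape — the selector label is an assumption, and the port numbers are chosen to mirror the log:

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

var multiportService = &corev1.Service{
	ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
	Spec: corev1.ServiceSpec{
		Selector: map[string]string{"app": "multiport"}, // pods must carry this label
		Ports: []corev1.ServicePort{
			// Each named service port maps to its own container port,
			// so one endpoints object tracks both.
			{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
			{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
		},
	},
}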
Mar 16 14:17:28.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:17:28.493: INFO: namespace services-4078 deletion completed in 22.097435085s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:31.513 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:17:28.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-hdl4m in namespace proxy-4447 I0316 14:17:28.606427 6 runners.go:180] Created replication controller with name: proxy-service-hdl4m, namespace: proxy-4447, replica count: 1 I0316 14:17:29.656874 6 runners.go:180] proxy-service-hdl4m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 14:17:30.657069 6 runners.go:180] proxy-service-hdl4m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 14:17:31.657347 6 runners.go:180] proxy-service-hdl4m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 14:17:32.657618 6 runners.go:180] proxy-service-hdl4m Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0316 14:17:33.657849 6 runners.go:180] proxy-service-hdl4m Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0316 14:17:34.658099 6 runners.go:180] proxy-service-hdl4m Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0316 14:17:35.658338 6 runners.go:180] proxy-service-hdl4m Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0316 14:17:36.658542 6 runners.go:180] proxy-service-hdl4m Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 16 14:17:36.662: INFO: setup took 8.106912687s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 16 14:17:36.667: INFO: (0) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 5.503242ms) Mar 16 14:17:36.676: INFO: (0) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname2/proxy/: bar (200; 13.584549ms) Mar 16 14:17:36.676: INFO: (0) 
/api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname1/proxy/: foo (200; 13.591299ms) Mar 16 14:17:36.676: INFO: (0) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 13.478192ms) Mar 16 14:17:36.676: INFO: (0) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:1080/proxy/: ... (200; 13.731829ms) Mar 16 14:17:36.676: INFO: (0) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 13.812095ms) Mar 16 14:17:36.676: INFO: (0) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname2/proxy/: bar (200; 13.718075ms) Mar 16 14:17:36.676: INFO: (0) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:1080/proxy/: test<... (200; 13.806859ms) Mar 16 14:17:36.676: INFO: (0) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname1/proxy/: foo (200; 13.880027ms) Mar 16 14:17:36.676: INFO: (0) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d/proxy/: test (200; 13.784935ms) Mar 16 14:17:36.676: INFO: (0) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 14.179817ms) Mar 16 14:17:36.677: INFO: (0) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:462/proxy/: tls qux (200; 15.013453ms) Mar 16 14:17:36.682: INFO: (0) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname2/proxy/: tls qux (200; 19.38831ms) Mar 16 14:17:36.682: INFO: (0) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:460/proxy/: tls baz (200; 19.45679ms) Mar 16 14:17:36.682: INFO: (0) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname1/proxy/: tls baz (200; 20.111373ms) Mar 16 14:17:36.682: INFO: (0) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:443/proxy/: test<... (200; 5.826156ms) Mar 16 14:17:36.688: INFO: (1) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d/proxy/: test (200; 5.717522ms) Mar 16 14:17:36.688: INFO: (1) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname2/proxy/: tls qux (200; 5.829282ms) Mar 16 14:17:36.688: INFO: (1) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:1080/proxy/: ... (200; 5.818898ms) Mar 16 14:17:36.688: INFO: (1) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname1/proxy/: foo (200; 5.795937ms) Mar 16 14:17:36.688: INFO: (1) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname2/proxy/: bar (200; 5.748521ms) Mar 16 14:17:36.688: INFO: (1) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname2/proxy/: bar (200; 5.880546ms) Mar 16 14:17:36.688: INFO: (1) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 5.840849ms) Mar 16 14:17:36.688: INFO: (1) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 5.779881ms) Mar 16 14:17:36.693: INFO: (2) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname1/proxy/: foo (200; 4.021548ms) Mar 16 14:17:36.693: INFO: (2) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d/proxy/: test (200; 3.519571ms) Mar 16 14:17:36.693: INFO: (2) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:443/proxy/: ... 
(200; 4.037421ms) Mar 16 14:17:36.694: INFO: (2) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 5.023732ms) Mar 16 14:17:36.694: INFO: (2) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 4.102599ms) Mar 16 14:17:36.694: INFO: (2) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname2/proxy/: bar (200; 4.622329ms) Mar 16 14:17:36.694: INFO: (2) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 4.82208ms) Mar 16 14:17:36.694: INFO: (2) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:1080/proxy/: test<... (200; 5.201103ms) Mar 16 14:17:36.695: INFO: (2) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 6.088292ms) Mar 16 14:17:36.695: INFO: (2) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname2/proxy/: bar (200; 6.701679ms) Mar 16 14:17:36.695: INFO: (2) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname1/proxy/: tls baz (200; 7.015296ms) Mar 16 14:17:36.696: INFO: (2) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname1/proxy/: foo (200; 5.819489ms) Mar 16 14:17:36.696: INFO: (2) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:460/proxy/: tls baz (200; 5.897015ms) Mar 16 14:17:36.696: INFO: (2) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname2/proxy/: tls qux (200; 6.812013ms) Mar 16 14:17:36.696: INFO: (2) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:462/proxy/: tls qux (200; 5.697911ms) Mar 16 14:17:36.699: INFO: (3) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 3.154852ms) Mar 16 14:17:36.699: INFO: (3) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:462/proxy/: tls qux (200; 3.328966ms) Mar 16 14:17:36.699: INFO: (3) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:1080/proxy/: ... (200; 3.359012ms) Mar 16 14:17:36.700: INFO: (3) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:460/proxy/: tls baz (200; 3.701504ms) Mar 16 14:17:36.700: INFO: (3) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:443/proxy/: test (200; 4.103162ms) Mar 16 14:17:36.700: INFO: (3) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:1080/proxy/: test<... 
(200; 4.064287ms) Mar 16 14:17:36.700: INFO: (3) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 4.084888ms) Mar 16 14:17:36.700: INFO: (3) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 4.08462ms) Mar 16 14:17:36.702: INFO: (3) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname2/proxy/: bar (200; 6.205488ms) Mar 16 14:17:36.702: INFO: (3) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname1/proxy/: foo (200; 6.343998ms) Mar 16 14:17:36.702: INFO: (3) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname1/proxy/: foo (200; 6.321743ms) Mar 16 14:17:36.702: INFO: (3) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname2/proxy/: bar (200; 6.33975ms) Mar 16 14:17:36.702: INFO: (3) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname1/proxy/: tls baz (200; 6.397752ms) Mar 16 14:17:36.702: INFO: (3) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname2/proxy/: tls qux (200; 6.42233ms) Mar 16 14:17:36.706: INFO: (4) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 3.237875ms) Mar 16 14:17:36.706: INFO: (4) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d/proxy/: test (200; 3.705255ms) Mar 16 14:17:36.706: INFO: (4) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:1080/proxy/: test<... (200; 3.76325ms) Mar 16 14:17:36.706: INFO: (4) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:1080/proxy/: ... (200; 3.741ms) Mar 16 14:17:36.706: INFO: (4) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:460/proxy/: tls baz (200; 3.888785ms) Mar 16 14:17:36.706: INFO: (4) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 3.844652ms) Mar 16 14:17:36.706: INFO: (4) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 4.078822ms) Mar 16 14:17:36.706: INFO: (4) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:462/proxy/: tls qux (200; 4.028622ms) Mar 16 14:17:36.707: INFO: (4) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 4.647252ms) Mar 16 14:17:36.707: INFO: (4) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname1/proxy/: foo (200; 4.963347ms) Mar 16 14:17:36.707: INFO: (4) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname2/proxy/: bar (200; 4.877001ms) Mar 16 14:17:36.707: INFO: (4) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname1/proxy/: foo (200; 4.898613ms) Mar 16 14:17:36.707: INFO: (4) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:443/proxy/: test<... (200; 3.684202ms) Mar 16 14:17:36.711: INFO: (5) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:1080/proxy/: ... 
(200; 3.714613ms) Mar 16 14:17:36.712: INFO: (5) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 3.937579ms) Mar 16 14:17:36.712: INFO: (5) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:460/proxy/: tls baz (200; 3.919101ms) Mar 16 14:17:36.712: INFO: (5) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 4.068382ms) Mar 16 14:17:36.712: INFO: (5) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:462/proxy/: tls qux (200; 4.209618ms) Mar 16 14:17:36.712: INFO: (5) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 4.159245ms) Mar 16 14:17:36.712: INFO: (5) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d/proxy/: test (200; 4.336072ms) Mar 16 14:17:36.712: INFO: (5) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname1/proxy/: tls baz (200; 4.386684ms) Mar 16 14:17:36.712: INFO: (5) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:443/proxy/: ... (200; 4.15493ms) Mar 16 14:17:36.717: INFO: (6) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:460/proxy/: tls baz (200; 4.149395ms) Mar 16 14:17:36.717: INFO: (6) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d/proxy/: test (200; 4.183368ms) Mar 16 14:17:36.717: INFO: (6) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:443/proxy/: test<... (200; 4.171141ms) Mar 16 14:17:36.718: INFO: (6) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname1/proxy/: foo (200; 4.925091ms) Mar 16 14:17:36.718: INFO: (6) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname2/proxy/: bar (200; 5.104275ms) Mar 16 14:17:36.718: INFO: (6) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname1/proxy/: foo (200; 5.160472ms) Mar 16 14:17:36.718: INFO: (6) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname2/proxy/: bar (200; 5.256367ms) Mar 16 14:17:36.718: INFO: (6) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname1/proxy/: tls baz (200; 5.208493ms) Mar 16 14:17:36.718: INFO: (6) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname2/proxy/: tls qux (200; 5.209385ms) Mar 16 14:17:36.722: INFO: (7) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:460/proxy/: tls baz (200; 3.626453ms) Mar 16 14:17:36.722: INFO: (7) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 3.619616ms) Mar 16 14:17:36.722: INFO: (7) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:462/proxy/: tls qux (200; 3.876437ms) Mar 16 14:17:36.725: INFO: (7) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:1080/proxy/: test<... (200; 6.269217ms) Mar 16 14:17:36.725: INFO: (7) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:443/proxy/: ... 
(200; 6.799567ms) Mar 16 14:17:36.725: INFO: (7) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname1/proxy/: foo (200; 6.889191ms) Mar 16 14:17:36.726: INFO: (7) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname2/proxy/: bar (200; 7.034253ms) Mar 16 14:17:36.726: INFO: (7) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname1/proxy/: foo (200; 7.188243ms) Mar 16 14:17:36.726: INFO: (7) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 7.324397ms) Mar 16 14:17:36.726: INFO: (7) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d/proxy/: test (200; 7.524112ms) Mar 16 14:17:36.729: INFO: (8) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 2.609064ms) Mar 16 14:17:36.729: INFO: (8) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:1080/proxy/: test<... (200; 3.126479ms) Mar 16 14:17:36.729: INFO: (8) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 3.174245ms) Mar 16 14:17:36.729: INFO: (8) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 3.255523ms) Mar 16 14:17:36.730: INFO: (8) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:1080/proxy/: ... (200; 3.318662ms) Mar 16 14:17:36.730: INFO: (8) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d/proxy/: test (200; 3.393526ms) Mar 16 14:17:36.730: INFO: (8) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 3.524095ms) Mar 16 14:17:36.730: INFO: (8) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:443/proxy/: test<... (200; 5.149454ms) Mar 16 14:17:36.736: INFO: (9) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 5.336502ms) Mar 16 14:17:36.736: INFO: (9) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d/proxy/: test (200; 5.504956ms) Mar 16 14:17:36.736: INFO: (9) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:1080/proxy/: ... (200; 5.513679ms) Mar 16 14:17:36.736: INFO: (9) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:462/proxy/: tls qux (200; 5.548227ms) Mar 16 14:17:36.736: INFO: (9) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname1/proxy/: foo (200; 5.72151ms) Mar 16 14:17:36.737: INFO: (9) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 5.767438ms) Mar 16 14:17:36.737: INFO: (9) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:443/proxy/: test (200; 2.43297ms) Mar 16 14:17:36.741: INFO: (10) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 2.765077ms) Mar 16 14:17:36.741: INFO: (10) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 2.897843ms) Mar 16 14:17:36.741: INFO: (10) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 3.009942ms) Mar 16 14:17:36.742: INFO: (10) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:1080/proxy/: test<... 
(200; 4.278681ms) Mar 16 14:17:36.743: INFO: (10) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:462/proxy/: tls qux (200; 4.739732ms) Mar 16 14:17:36.743: INFO: (10) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 4.809571ms) Mar 16 14:17:36.743: INFO: (10) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:460/proxy/: tls baz (200; 4.803269ms) Mar 16 14:17:36.743: INFO: (10) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:443/proxy/: ... (200; 5.116481ms) Mar 16 14:17:36.743: INFO: (10) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname2/proxy/: bar (200; 5.437095ms) Mar 16 14:17:36.743: INFO: (10) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname2/proxy/: bar (200; 5.488283ms) Mar 16 14:17:36.743: INFO: (10) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname1/proxy/: foo (200; 5.584027ms) Mar 16 14:17:36.743: INFO: (10) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname1/proxy/: tls baz (200; 5.602776ms) Mar 16 14:17:36.744: INFO: (10) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname1/proxy/: foo (200; 5.674502ms) Mar 16 14:17:36.744: INFO: (10) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname2/proxy/: tls qux (200; 5.80771ms) Mar 16 14:17:36.748: INFO: (11) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname2/proxy/: bar (200; 4.48587ms) Mar 16 14:17:36.749: INFO: (11) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 4.922722ms) Mar 16 14:17:36.749: INFO: (11) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname1/proxy/: foo (200; 4.71718ms) Mar 16 14:17:36.749: INFO: (11) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d/proxy/: test (200; 5.191677ms) Mar 16 14:17:36.749: INFO: (11) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 5.353535ms) Mar 16 14:17:36.749: INFO: (11) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname2/proxy/: bar (200; 5.334467ms) Mar 16 14:17:36.750: INFO: (11) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:1080/proxy/: test<... (200; 5.72268ms) Mar 16 14:17:36.750: INFO: (11) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:1080/proxy/: ... (200; 5.780267ms) Mar 16 14:17:36.750: INFO: (11) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname2/proxy/: tls qux (200; 5.801578ms) Mar 16 14:17:36.750: INFO: (11) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 5.763827ms) Mar 16 14:17:36.750: INFO: (11) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname1/proxy/: foo (200; 5.907685ms) Mar 16 14:17:36.750: INFO: (11) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 5.800264ms) Mar 16 14:17:36.750: INFO: (11) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:462/proxy/: tls qux (200; 5.834987ms) Mar 16 14:17:36.750: INFO: (11) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:443/proxy/: test (200; 5.412772ms) Mar 16 14:17:36.755: INFO: (12) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:1080/proxy/: test<... 
(200; 5.491608ms) Mar 16 14:17:36.755: INFO: (12) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname1/proxy/: tls baz (200; 5.477573ms) Mar 16 14:17:36.755: INFO: (12) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname1/proxy/: foo (200; 5.536992ms) Mar 16 14:17:36.756: INFO: (12) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname1/proxy/: foo (200; 5.601921ms) Mar 16 14:17:36.755: INFO: (12) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname2/proxy/: bar (200; 5.449924ms) Mar 16 14:17:36.756: INFO: (12) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname2/proxy/: tls qux (200; 5.597033ms) Mar 16 14:17:36.756: INFO: (12) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 5.365401ms) Mar 16 14:17:36.756: INFO: (12) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:1080/proxy/: ... (200; 5.573111ms) Mar 16 14:17:36.759: INFO: (13) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 3.809817ms) Mar 16 14:17:36.759: INFO: (13) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 3.912255ms) Mar 16 14:17:36.760: INFO: (13) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:1080/proxy/: ... (200; 3.864373ms) Mar 16 14:17:36.760: INFO: (13) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:443/proxy/: test<... (200; 3.847565ms) Mar 16 14:17:36.760: INFO: (13) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d/proxy/: test (200; 3.958717ms) Mar 16 14:17:36.760: INFO: (13) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 3.949392ms) Mar 16 14:17:36.760: INFO: (13) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:462/proxy/: tls qux (200; 3.918162ms) Mar 16 14:17:36.760: INFO: (13) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:460/proxy/: tls baz (200; 4.047296ms) Mar 16 14:17:36.760: INFO: (13) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 4.62124ms) Mar 16 14:17:36.761: INFO: (13) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname2/proxy/: bar (200; 5.364336ms) Mar 16 14:17:36.761: INFO: (13) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname2/proxy/: bar (200; 5.419774ms) Mar 16 14:17:36.761: INFO: (13) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname1/proxy/: foo (200; 5.316127ms) Mar 16 14:17:36.761: INFO: (13) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname1/proxy/: foo (200; 5.375608ms) Mar 16 14:17:36.761: INFO: (13) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname1/proxy/: tls baz (200; 5.425554ms) Mar 16 14:17:36.761: INFO: (13) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname2/proxy/: tls qux (200; 5.479751ms) Mar 16 14:17:36.764: INFO: (14) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 2.246904ms) Mar 16 14:17:36.764: INFO: (14) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:460/proxy/: tls baz (200; 2.372984ms) Mar 16 14:17:36.765: INFO: (14) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d/proxy/: test (200; 3.91972ms) Mar 16 14:17:36.765: INFO: (14) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:1080/proxy/: test<... 
(200; 3.929374ms) Mar 16 14:17:36.765: INFO: (14) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:443/proxy/: ... (200; 5.255806ms) Mar 16 14:17:36.766: INFO: (14) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 5.249498ms) Mar 16 14:17:36.767: INFO: (14) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname2/proxy/: tls qux (200; 5.319919ms) Mar 16 14:17:36.767: INFO: (14) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname1/proxy/: tls baz (200; 5.588347ms) Mar 16 14:17:36.769: INFO: (15) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 2.346363ms) Mar 16 14:17:36.769: INFO: (15) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d/proxy/: test (200; 2.380824ms) Mar 16 14:17:36.770: INFO: (15) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:1080/proxy/: test<... (200; 2.518682ms) Mar 16 14:17:36.771: INFO: (15) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 3.609349ms) Mar 16 14:17:36.771: INFO: (15) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname1/proxy/: tls baz (200; 3.720744ms) Mar 16 14:17:36.771: INFO: (15) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:1080/proxy/: ... (200; 3.676755ms) Mar 16 14:17:36.771: INFO: (15) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname2/proxy/: bar (200; 3.89772ms) Mar 16 14:17:36.771: INFO: (15) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:460/proxy/: tls baz (200; 3.922655ms) Mar 16 14:17:36.771: INFO: (15) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:443/proxy/: ... (200; 2.00502ms) Mar 16 14:17:36.775: INFO: (16) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d/proxy/: test (200; 3.288061ms) Mar 16 14:17:36.775: INFO: (16) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 3.672841ms) Mar 16 14:17:36.775: INFO: (16) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 3.695558ms) Mar 16 14:17:36.775: INFO: (16) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 3.969839ms) Mar 16 14:17:36.775: INFO: (16) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname2/proxy/: bar (200; 4.058702ms) Mar 16 14:17:36.775: INFO: (16) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:462/proxy/: tls qux (200; 3.895592ms) Mar 16 14:17:36.775: INFO: (16) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:443/proxy/: test<... 
(200; 4.754912ms) Mar 16 14:17:36.776: INFO: (16) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname1/proxy/: tls baz (200; 4.806512ms) Mar 16 14:17:36.776: INFO: (16) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 4.805548ms) Mar 16 14:17:36.776: INFO: (16) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname1/proxy/: foo (200; 4.806895ms) Mar 16 14:17:36.776: INFO: (16) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname1/proxy/: foo (200; 4.777144ms) Mar 16 14:17:36.776: INFO: (16) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname2/proxy/: tls qux (200; 4.800174ms) Mar 16 14:17:36.776: INFO: (16) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname2/proxy/: bar (200; 4.823845ms) Mar 16 14:17:36.779: INFO: (17) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 2.542037ms) Mar 16 14:17:36.779: INFO: (17) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 2.501126ms) Mar 16 14:17:36.779: INFO: (17) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:462/proxy/: tls qux (200; 2.863012ms) Mar 16 14:17:36.779: INFO: (17) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 2.893161ms) Mar 16 14:17:36.779: INFO: (17) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:460/proxy/: tls baz (200; 2.965386ms) Mar 16 14:17:36.779: INFO: (17) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 2.933542ms) Mar 16 14:17:36.779: INFO: (17) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:1080/proxy/: test<... (200; 2.922567ms) Mar 16 14:17:36.779: INFO: (17) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d/proxy/: test (200; 2.935756ms) Mar 16 14:17:36.779: INFO: (17) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:1080/proxy/: ... (200; 2.902433ms) Mar 16 14:17:36.779: INFO: (17) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:443/proxy/: test (200; 10.202414ms) Mar 16 14:17:36.792: INFO: (18) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:460/proxy/: tls baz (200; 10.509204ms) Mar 16 14:17:36.792: INFO: (18) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 10.50767ms) Mar 16 14:17:36.792: INFO: (18) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:1080/proxy/: ... (200; 10.459144ms) Mar 16 14:17:36.792: INFO: (18) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 10.560702ms) Mar 16 14:17:36.792: INFO: (18) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:443/proxy/: test<... 
(200; 10.676759ms) Mar 16 14:17:36.792: INFO: (18) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 10.709818ms) Mar 16 14:17:36.792: INFO: (18) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 10.75212ms) Mar 16 14:17:36.792: INFO: (18) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:462/proxy/: tls qux (200; 10.814158ms) Mar 16 14:17:36.793: INFO: (18) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname2/proxy/: bar (200; 11.505907ms) Mar 16 14:17:36.793: INFO: (18) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname2/proxy/: bar (200; 11.663083ms) Mar 16 14:17:36.793: INFO: (18) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname1/proxy/: foo (200; 11.760509ms) Mar 16 14:17:36.793: INFO: (18) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname1/proxy/: foo (200; 11.750082ms) Mar 16 14:17:36.793: INFO: (18) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname1/proxy/: tls baz (200; 11.913602ms) Mar 16 14:17:36.793: INFO: (18) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname2/proxy/: tls qux (200; 11.905627ms) Mar 16 14:17:36.795: INFO: (19) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:443/proxy/: ... (200; 2.161734ms) Mar 16 14:17:36.796: INFO: (19) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 2.494015ms) Mar 16 14:17:36.796: INFO: (19) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 2.635327ms) Mar 16 14:17:36.796: INFO: (19) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d/proxy/: test (200; 2.886369ms) Mar 16 14:17:36.796: INFO: (19) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:1080/proxy/: test<... 
(200; 2.980232ms) Mar 16 14:17:36.796: INFO: (19) /api/v1/namespaces/proxy-4447/pods/proxy-service-hdl4m-b5c9d:162/proxy/: bar (200; 3.09664ms) Mar 16 14:17:36.796: INFO: (19) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:462/proxy/: tls qux (200; 3.114336ms) Mar 16 14:17:36.797: INFO: (19) /api/v1/namespaces/proxy-4447/pods/http:proxy-service-hdl4m-b5c9d:160/proxy/: foo (200; 3.401632ms) Mar 16 14:17:36.797: INFO: (19) /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:460/proxy/: tls baz (200; 3.296984ms) Mar 16 14:17:36.797: INFO: (19) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname2/proxy/: tls qux (200; 3.99684ms) Mar 16 14:17:36.798: INFO: (19) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname1/proxy/: foo (200; 4.358599ms) Mar 16 14:17:36.798: INFO: (19) /api/v1/namespaces/proxy-4447/services/proxy-service-hdl4m:portname2/proxy/: bar (200; 4.330783ms) Mar 16 14:17:36.798: INFO: (19) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname1/proxy/: foo (200; 4.498445ms) Mar 16 14:17:36.798: INFO: (19) /api/v1/namespaces/proxy-4447/services/http:proxy-service-hdl4m:portname2/proxy/: bar (200; 4.396944ms) Mar 16 14:17:36.798: INFO: (19) /api/v1/namespaces/proxy-4447/services/https:proxy-service-hdl4m:tlsportname1/proxy/: tls baz (200; 4.48709ms) STEP: deleting ReplicationController proxy-service-hdl4m in namespace proxy-4447, will wait for the garbage collector to delete the pods Mar 16 14:17:36.856: INFO: Deleting ReplicationController proxy-service-hdl4m took: 6.553811ms Mar 16 14:17:37.156: INFO: Terminating ReplicationController proxy-service-hdl4m pods took: 300.308488ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:17:41.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4447" for this suite. 
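The burst of GETs logged above all go through the apiserver's proxy subresource, whose path encodes an optional scheme, the pod or service name, and a target port, joined by colons. A minimal sketch of how such a path is composed, reusing names from the log; the helper itself is illustrative and not part of the e2e framework:

    package main

    import "fmt"

    // proxyPodPath builds an apiserver proxy path of the form seen above,
    // e.g. /api/v1/namespaces/proxy-4447/pods/https:proxy-service-hdl4m-b5c9d:462/proxy/.
    // An empty scheme means "use the default"; otherwise it is joined to the
    // pod name with a colon, and the port follows after another colon.
    func proxyPodPath(ns, scheme, pod string, port int) string {
    	target := pod
    	if scheme != "" {
    		target = scheme + ":" + pod
    	}
    	return fmt.Sprintf("/api/v1/namespaces/%s/pods/%s:%d/proxy/", ns, target, port)
    }

    func main() {
    	fmt.Println(proxyPodPath("proxy-4447", "https", "proxy-service-hdl4m-b5c9d", 462))
    }

The same scheme:name:port convention applies to the services/... proxy paths in the log (e.g. https:proxy-service-hdl4m:tlsportname1).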
Mar 16 14:17:47.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:17:48.049: INFO: namespace proxy-4447 deletion completed in 6.088774903s • [SLOW TEST:19.555 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:17:48.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 16 14:17:52.711: INFO: Successfully updated pod "pod-update-17671b5e-e841-4708-9697-5ef665194915" STEP: verifying the updated pod is in kubernetes Mar 16 14:17:52.746: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:17:52.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4848" for this suite. 
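The "Pods should be updated" run above creates a pod, mutates it, and pushes the change back; the "Successfully updated pod" line is printed once the update is accepted. A sketch of the mutation step, assuming (as the test's label setup suggests) that the update is a label change; the pod name is taken from the log, the label values are illustrative:

    package main

    import (
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// The live object as submitted earlier in the test.
    	pod := &v1.Pod{
    		ObjectMeta: metav1.ObjectMeta{
    			Name:   "pod-update-17671b5e-e841-4708-9697-5ef665194915",
    			Labels: map[string]string{"name": "foo"},
    		},
    	}
    	// Mutate a label; the real test then sends the object back through
    	// the client's Update call and re-reads it to verify. Only the
    	// in-memory mutation is sketched here.
    	pod.Labels["time"] = "modified"
    	fmt.Println(pod.Labels)
    }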
Mar 16 14:18:14.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:18:14.838: INFO: namespace pods-4848 deletion completed in 22.088587649s • [SLOW TEST:26.788 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:18:14.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 16 14:18:18.992: INFO: Waiting up to 5m0s for pod "client-envvars-6f34e691-a81e-44f4-ad87-f3b16882e9db" in namespace "pods-5185" to be "success or failure" Mar 16 14:18:18.998: INFO: Pod "client-envvars-6f34e691-a81e-44f4-ad87-f3b16882e9db": Phase="Pending", Reason="", readiness=false. Elapsed: 5.449907ms Mar 16 14:18:21.002: INFO: Pod "client-envvars-6f34e691-a81e-44f4-ad87-f3b16882e9db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010112316s Mar 16 14:18:23.007: INFO: Pod "client-envvars-6f34e691-a81e-44f4-ad87-f3b16882e9db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014499718s STEP: Saw pod success Mar 16 14:18:23.007: INFO: Pod "client-envvars-6f34e691-a81e-44f4-ad87-f3b16882e9db" satisfied condition "success or failure" Mar 16 14:18:23.010: INFO: Trying to get logs from node iruya-worker pod client-envvars-6f34e691-a81e-44f4-ad87-f3b16882e9db container env3cont: STEP: delete the pod Mar 16 14:18:23.044: INFO: Waiting for pod client-envvars-6f34e691-a81e-44f4-ad87-f3b16882e9db to disappear Mar 16 14:18:23.058: INFO: Pod client-envvars-6f34e691-a81e-44f4-ad87-f3b16882e9db no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:18:23.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5185" for this suite. 
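The env-vars test above relies on the kubelet injecting docker-links-style variables for every service that already exists when a pod starts: the service name is upper-cased, dashes become underscores, and suffixes such as _SERVICE_HOST and _SERVICE_PORT are appended. A small sketch of that naming rule (the service name "fooservice" is an assumption for illustration):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // serviceEnvPrefix applies the documented naming rule: upper-case the
    // service name and replace dashes with underscores.
    func serviceEnvPrefix(svc string) string {
    	return strings.ToUpper(strings.ReplaceAll(svc, "-", "_"))
    }

    func main() {
    	p := serviceEnvPrefix("fooservice")
    	fmt.Println(p + "_SERVICE_HOST") // e.g. FOOSERVICE_SERVICE_HOST
    	fmt.Println(p + "_SERVICE_PORT") // e.g. FOOSERVICE_SERVICE_PORT
    }

This is why the client pod (env3cont above) must be created after the service: variables are resolved at container start, not updated live.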
Mar 16 14:19:13.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:19:13.150: INFO: namespace pods-5185 deletion completed in 50.088977065s • [SLOW TEST:58.313 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:19:13.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-6246/configmap-test-ce906c9c-e732-4a80-9bb0-9177e7baed5b STEP: Creating a pod to test consume configMaps Mar 16 14:19:13.232: INFO: Waiting up to 5m0s for pod "pod-configmaps-6a7e4474-dbdd-4f4f-ad63-6c4fa4b27948" in namespace "configmap-6246" to be "success or failure" Mar 16 14:19:13.236: INFO: Pod "pod-configmaps-6a7e4474-dbdd-4f4f-ad63-6c4fa4b27948": Phase="Pending", Reason="", readiness=false. Elapsed: 3.662067ms Mar 16 14:19:15.248: INFO: Pod "pod-configmaps-6a7e4474-dbdd-4f4f-ad63-6c4fa4b27948": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015734355s Mar 16 14:19:17.252: INFO: Pod "pod-configmaps-6a7e4474-dbdd-4f4f-ad63-6c4fa4b27948": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020085122s STEP: Saw pod success Mar 16 14:19:17.252: INFO: Pod "pod-configmaps-6a7e4474-dbdd-4f4f-ad63-6c4fa4b27948" satisfied condition "success or failure" Mar 16 14:19:17.256: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-6a7e4474-dbdd-4f4f-ad63-6c4fa4b27948 container env-test: STEP: delete the pod Mar 16 14:19:17.289: INFO: Waiting for pod pod-configmaps-6a7e4474-dbdd-4f4f-ad63-6c4fa4b27948 to disappear Mar 16 14:19:17.302: INFO: Pod pod-configmaps-6a7e4474-dbdd-4f4f-ad63-6c4fa4b27948 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:19:17.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6246" for this suite. 
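The ConfigMap-as-env test above wires a container environment variable to a single ConfigMap key via valueFrom. A minimal sketch of that wiring with the k8s.io/api types; the ConfigMap name comes from the log, while the variable name and key are assumptions for illustration:

    package main

    import (
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    )

    func main() {
    	// One env var sourced from one ConfigMap key.
    	env := v1.EnvVar{
    		Name: "CONFIG_DATA_1", // assumed variable name
    		ValueFrom: &v1.EnvVarSource{
    			ConfigMapKeyRef: &v1.ConfigMapKeySelector{
    				LocalObjectReference: v1.LocalObjectReference{
    					Name: "configmap-test-ce906c9c-e732-4a80-9bb0-9177e7baed5b",
    				},
    				Key: "data-1", // assumed key
    			},
    		},
    	}
    	fmt.Printf("%s <- configmap %q, key %q\n",
    		env.Name, env.ValueFrom.ConfigMapKeyRef.Name, env.ValueFrom.ConfigMapKeyRef.Key)
    }

The test container (env-test above) then just prints its environment, and the framework greps the captured logs for the expected value.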
Mar 16 14:19:23.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:19:23.414: INFO: namespace configmap-6246 deletion completed in 6.107747535s • [SLOW TEST:10.263 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:19:23.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 16 14:19:23.495: INFO: PodSpec: initContainers in spec.initContainers Mar 16 14:20:12.135: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ac296002-7cf2-4516-aec3-504816007d5b", GenerateName:"", Namespace:"init-container-4690", SelfLink:"/api/v1/namespaces/init-container-4690/pods/pod-init-ac296002-7cf2-4516-aec3-504816007d5b", UID:"9959658f-f151-475d-8dc5-7cf7bab54671", ResourceVersion:"172084", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719965163, loc:(*time.Location)(0x7ea78c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"495176930"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-4l6vf", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0000b5840), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), 
AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4l6vf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4l6vf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4l6vf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, 
StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0024f8ad8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0033644e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0024f8b60)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0024f8b80)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0024f8b88), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0024f8b8c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719965163, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719965163, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719965163, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719965163, loc:(*time.Location)(0x7ea78c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.6", PodIP:"10.244.2.248", StartTime:(*v1.Time)(0xc002b3f880), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0007e2770)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0007e27e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://31f183385ce8d17d2f1299f7bd36b15f8c07e67f544d429f3f3ad48525e66704"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002b3f8c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002b3f8a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:20:12.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4690" for this suite. Mar 16 14:20:34.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:20:34.246: INFO: namespace init-container-4690 deletion completed in 22.107052521s • [SLOW TEST:70.832 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:20:34.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-f054e9c8-ed4e-41e9-8137-7d5a0b288698 STEP: Creating a pod to test consume configMaps Mar 16 14:20:34.501: INFO: Waiting up to 5m0s for pod "pod-configmaps-b47e74b5-1a56-410d-9f70-8f1c665e6cef" in namespace "configmap-9829" to be "success or failure" Mar 16 14:20:34.510: INFO: Pod "pod-configmaps-b47e74b5-1a56-410d-9f70-8f1c665e6cef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.932275ms Mar 16 14:20:36.514: INFO: Pod "pod-configmaps-b47e74b5-1a56-410d-9f70-8f1c665e6cef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012790202s Mar 16 14:20:38.519: INFO: Pod "pod-configmaps-b47e74b5-1a56-410d-9f70-8f1c665e6cef": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017243565s STEP: Saw pod success Mar 16 14:20:38.519: INFO: Pod "pod-configmaps-b47e74b5-1a56-410d-9f70-8f1c665e6cef" satisfied condition "success or failure" Mar 16 14:20:38.522: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-b47e74b5-1a56-410d-9f70-8f1c665e6cef container configmap-volume-test: STEP: delete the pod Mar 16 14:20:38.541: INFO: Waiting for pod pod-configmaps-b47e74b5-1a56-410d-9f70-8f1c665e6cef to disappear Mar 16 14:20:38.546: INFO: Pod pod-configmaps-b47e74b5-1a56-410d-9f70-8f1c665e6cef no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:20:38.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9829" for this suite. Mar 16 14:20:44.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:20:44.685: INFO: namespace configmap-9829 deletion completed in 6.137242071s • [SLOW TEST:10.439 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:20:44.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-1277 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-1277 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1277 Mar 16 14:20:45.045: INFO: Found 0 stateful pods, waiting for 1 Mar 16 14:20:55.050: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 16 14:20:55.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1277 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 16 14:20:57.564: INFO: stderr: "I0316 14:20:57.457801 2760 log.go:172] (0xc000b5c420) (0xc000ba2780) Create stream\nI0316 14:20:57.457830 2760 log.go:172] (0xc000b5c420) (0xc000ba2780) Stream 
added, broadcasting: 1\nI0316 14:20:57.460058 2760 log.go:172] (0xc000b5c420) Reply frame received for 1\nI0316 14:20:57.460124 2760 log.go:172] (0xc000b5c420) (0xc000b44000) Create stream\nI0316 14:20:57.460151 2760 log.go:172] (0xc000b5c420) (0xc000b44000) Stream added, broadcasting: 3\nI0316 14:20:57.461100 2760 log.go:172] (0xc000b5c420) Reply frame received for 3\nI0316 14:20:57.461207 2760 log.go:172] (0xc000b5c420) (0xc000ba2820) Create stream\nI0316 14:20:57.461218 2760 log.go:172] (0xc000b5c420) (0xc000ba2820) Stream added, broadcasting: 5\nI0316 14:20:57.462105 2760 log.go:172] (0xc000b5c420) Reply frame received for 5\nI0316 14:20:57.518071 2760 log.go:172] (0xc000b5c420) Data frame received for 5\nI0316 14:20:57.518092 2760 log.go:172] (0xc000ba2820) (5) Data frame handling\nI0316 14:20:57.518103 2760 log.go:172] (0xc000ba2820) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0316 14:20:57.553691 2760 log.go:172] (0xc000b5c420) Data frame received for 5\nI0316 14:20:57.553744 2760 log.go:172] (0xc000ba2820) (5) Data frame handling\nI0316 14:20:57.553777 2760 log.go:172] (0xc000b5c420) Data frame received for 3\nI0316 14:20:57.553795 2760 log.go:172] (0xc000b44000) (3) Data frame handling\nI0316 14:20:57.553811 2760 log.go:172] (0xc000b44000) (3) Data frame sent\nI0316 14:20:57.553834 2760 log.go:172] (0xc000b5c420) Data frame received for 3\nI0316 14:20:57.553848 2760 log.go:172] (0xc000b44000) (3) Data frame handling\nI0316 14:20:57.560663 2760 log.go:172] (0xc000b5c420) Data frame received for 1\nI0316 14:20:57.560702 2760 log.go:172] (0xc000ba2780) (1) Data frame handling\nI0316 14:20:57.560723 2760 log.go:172] (0xc000ba2780) (1) Data frame sent\nI0316 14:20:57.560745 2760 log.go:172] (0xc000b5c420) (0xc000ba2780) Stream removed, broadcasting: 1\nI0316 14:20:57.560764 2760 log.go:172] (0xc000b5c420) Go away received\nI0316 14:20:57.561368 2760 log.go:172] (0xc000b5c420) (0xc000ba2780) Stream removed, broadcasting: 1\nI0316 14:20:57.561391 2760 log.go:172] (0xc000b5c420) (0xc000b44000) Stream removed, broadcasting: 3\nI0316 14:20:57.561403 2760 log.go:172] (0xc000b5c420) (0xc000ba2820) Stream removed, broadcasting: 5\n" Mar 16 14:20:57.564: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 16 14:20:57.564: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 16 14:20:57.567: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 16 14:21:07.578: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 16 14:21:07.578: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 14:21:07.594: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999669s Mar 16 14:21:08.599: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.992827227s Mar 16 14:21:09.604: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.98794892s Mar 16 14:21:10.608: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.983071897s Mar 16 14:21:11.613: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.978829459s Mar 16 14:21:12.618: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.973732715s Mar 16 14:21:13.622: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.968887258s Mar 16 14:21:14.627: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.964899326s Mar 16 14:21:15.632: 
INFO: Verifying statefulset ss doesn't scale past 1 for another 1.959988443s Mar 16 14:21:16.637: INFO: Verifying statefulset ss doesn't scale past 1 for another 955.346891ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1277 Mar 16 14:21:17.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1277 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 14:21:17.847: INFO: stderr: "I0316 14:21:17.759936 2795 log.go:172] (0xc000a3a420) (0xc000546820) Create stream\nI0316 14:21:17.759996 2795 log.go:172] (0xc000a3a420) (0xc000546820) Stream added, broadcasting: 1\nI0316 14:21:17.763556 2795 log.go:172] (0xc000a3a420) Reply frame received for 1\nI0316 14:21:17.763613 2795 log.go:172] (0xc000a3a420) (0xc000586280) Create stream\nI0316 14:21:17.763629 2795 log.go:172] (0xc000a3a420) (0xc000586280) Stream added, broadcasting: 3\nI0316 14:21:17.764582 2795 log.go:172] (0xc000a3a420) Reply frame received for 3\nI0316 14:21:17.764635 2795 log.go:172] (0xc000a3a420) (0xc000586320) Create stream\nI0316 14:21:17.764648 2795 log.go:172] (0xc000a3a420) (0xc000586320) Stream added, broadcasting: 5\nI0316 14:21:17.765753 2795 log.go:172] (0xc000a3a420) Reply frame received for 5\nI0316 14:21:17.841583 2795 log.go:172] (0xc000a3a420) Data frame received for 3\nI0316 14:21:17.841609 2795 log.go:172] (0xc000586280) (3) Data frame handling\nI0316 14:21:17.841622 2795 log.go:172] (0xc000586280) (3) Data frame sent\nI0316 14:21:17.841628 2795 log.go:172] (0xc000a3a420) Data frame received for 3\nI0316 14:21:17.841633 2795 log.go:172] (0xc000586280) (3) Data frame handling\nI0316 14:21:17.841640 2795 log.go:172] (0xc000a3a420) Data frame received for 5\nI0316 14:21:17.841645 2795 log.go:172] (0xc000586320) (5) Data frame handling\nI0316 14:21:17.841653 2795 log.go:172] (0xc000586320) (5) Data frame sent\nI0316 14:21:17.841660 2795 log.go:172] (0xc000a3a420) Data frame received for 5\nI0316 14:21:17.841665 2795 log.go:172] (0xc000586320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0316 14:21:17.843183 2795 log.go:172] (0xc000a3a420) Data frame received for 1\nI0316 14:21:17.843239 2795 log.go:172] (0xc000546820) (1) Data frame handling\nI0316 14:21:17.843261 2795 log.go:172] (0xc000546820) (1) Data frame sent\nI0316 14:21:17.843406 2795 log.go:172] (0xc000a3a420) (0xc000546820) Stream removed, broadcasting: 1\nI0316 14:21:17.843456 2795 log.go:172] (0xc000a3a420) Go away received\nI0316 14:21:17.843827 2795 log.go:172] (0xc000a3a420) (0xc000546820) Stream removed, broadcasting: 1\nI0316 14:21:17.843851 2795 log.go:172] (0xc000a3a420) (0xc000586280) Stream removed, broadcasting: 3\nI0316 14:21:17.843878 2795 log.go:172] (0xc000a3a420) (0xc000586320) Stream removed, broadcasting: 5\n" Mar 16 14:21:17.848: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 16 14:21:17.848: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 16 14:21:17.852: INFO: Found 1 stateful pods, waiting for 3 Mar 16 14:21:27.857: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 16 14:21:27.857: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 16 14:21:27.857: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that 
stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 16 14:21:27.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1277 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 16 14:21:28.059: INFO: stderr: "I0316 14:21:27.980856 2818 log.go:172] (0xc000a506e0) (0xc000632be0) Create stream\nI0316 14:21:27.980913 2818 log.go:172] (0xc000a506e0) (0xc000632be0) Stream added, broadcasting: 1\nI0316 14:21:27.985708 2818 log.go:172] (0xc000a506e0) Reply frame received for 1\nI0316 14:21:27.985762 2818 log.go:172] (0xc000a506e0) (0xc000632320) Create stream\nI0316 14:21:27.985776 2818 log.go:172] (0xc000a506e0) (0xc000632320) Stream added, broadcasting: 3\nI0316 14:21:27.986922 2818 log.go:172] (0xc000a506e0) Reply frame received for 3\nI0316 14:21:27.986972 2818 log.go:172] (0xc000a506e0) (0xc00022c000) Create stream\nI0316 14:21:27.986994 2818 log.go:172] (0xc000a506e0) (0xc00022c000) Stream added, broadcasting: 5\nI0316 14:21:27.988012 2818 log.go:172] (0xc000a506e0) Reply frame received for 5\nI0316 14:21:28.053455 2818 log.go:172] (0xc000a506e0) Data frame received for 3\nI0316 14:21:28.053495 2818 log.go:172] (0xc000632320) (3) Data frame handling\nI0316 14:21:28.053509 2818 log.go:172] (0xc000632320) (3) Data frame sent\nI0316 14:21:28.053522 2818 log.go:172] (0xc000a506e0) Data frame received for 3\nI0316 14:21:28.053545 2818 log.go:172] (0xc000632320) (3) Data frame handling\nI0316 14:21:28.053590 2818 log.go:172] (0xc000a506e0) Data frame received for 5\nI0316 14:21:28.053622 2818 log.go:172] (0xc00022c000) (5) Data frame handling\nI0316 14:21:28.053652 2818 log.go:172] (0xc00022c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0316 14:21:28.053665 2818 log.go:172] (0xc000a506e0) Data frame received for 5\nI0316 14:21:28.053718 2818 log.go:172] (0xc00022c000) (5) Data frame handling\nI0316 14:21:28.055352 2818 log.go:172] (0xc000a506e0) Data frame received for 1\nI0316 14:21:28.055382 2818 log.go:172] (0xc000632be0) (1) Data frame handling\nI0316 14:21:28.055404 2818 log.go:172] (0xc000632be0) (1) Data frame sent\nI0316 14:21:28.055418 2818 log.go:172] (0xc000a506e0) (0xc000632be0) Stream removed, broadcasting: 1\nI0316 14:21:28.055439 2818 log.go:172] (0xc000a506e0) Go away received\nI0316 14:21:28.055871 2818 log.go:172] (0xc000a506e0) (0xc000632be0) Stream removed, broadcasting: 1\nI0316 14:21:28.055894 2818 log.go:172] (0xc000a506e0) (0xc000632320) Stream removed, broadcasting: 3\nI0316 14:21:28.055910 2818 log.go:172] (0xc000a506e0) (0xc00022c000) Stream removed, broadcasting: 5\n" Mar 16 14:21:28.059: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 16 14:21:28.059: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 16 14:21:28.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1277 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 16 14:21:28.288: INFO: stderr: "I0316 14:21:28.182782 2840 log.go:172] (0xc000b144d0) (0xc0006ba6e0) Create stream\nI0316 14:21:28.182833 2840 log.go:172] (0xc000b144d0) (0xc0006ba6e0) Stream added, broadcasting: 1\nI0316 14:21:28.186272 2840 log.go:172] (0xc000b144d0) Reply frame received for 1\nI0316 14:21:28.186318 2840 log.go:172] (0xc000b144d0) (0xc0006ba000) Create stream\nI0316 14:21:28.186330 2840 
log.go:172] (0xc000b144d0) (0xc0006ba000) Stream added, broadcasting: 3\nI0316 14:21:28.187346 2840 log.go:172] (0xc000b144d0) Reply frame received for 3\nI0316 14:21:28.187373 2840 log.go:172] (0xc000b144d0) (0xc0006e2280) Create stream\nI0316 14:21:28.187385 2840 log.go:172] (0xc000b144d0) (0xc0006e2280) Stream added, broadcasting: 5\nI0316 14:21:28.188370 2840 log.go:172] (0xc000b144d0) Reply frame received for 5\nI0316 14:21:28.239927 2840 log.go:172] (0xc000b144d0) Data frame received for 5\nI0316 14:21:28.239958 2840 log.go:172] (0xc0006e2280) (5) Data frame handling\nI0316 14:21:28.239979 2840 log.go:172] (0xc0006e2280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0316 14:21:28.281571 2840 log.go:172] (0xc000b144d0) Data frame received for 3\nI0316 14:21:28.281608 2840 log.go:172] (0xc0006ba000) (3) Data frame handling\nI0316 14:21:28.281638 2840 log.go:172] (0xc0006ba000) (3) Data frame sent\nI0316 14:21:28.281740 2840 log.go:172] (0xc000b144d0) Data frame received for 5\nI0316 14:21:28.281842 2840 log.go:172] (0xc0006e2280) (5) Data frame handling\nI0316 14:21:28.282032 2840 log.go:172] (0xc000b144d0) Data frame received for 3\nI0316 14:21:28.282066 2840 log.go:172] (0xc0006ba000) (3) Data frame handling\nI0316 14:21:28.283942 2840 log.go:172] (0xc000b144d0) Data frame received for 1\nI0316 14:21:28.283963 2840 log.go:172] (0xc0006ba6e0) (1) Data frame handling\nI0316 14:21:28.283974 2840 log.go:172] (0xc0006ba6e0) (1) Data frame sent\nI0316 14:21:28.283996 2840 log.go:172] (0xc000b144d0) (0xc0006ba6e0) Stream removed, broadcasting: 1\nI0316 14:21:28.284121 2840 log.go:172] (0xc000b144d0) Go away received\nI0316 14:21:28.284578 2840 log.go:172] (0xc000b144d0) (0xc0006ba6e0) Stream removed, broadcasting: 1\nI0316 14:21:28.284600 2840 log.go:172] (0xc000b144d0) (0xc0006ba000) Stream removed, broadcasting: 3\nI0316 14:21:28.284610 2840 log.go:172] (0xc000b144d0) (0xc0006e2280) Stream removed, broadcasting: 5\n" Mar 16 14:21:28.289: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 16 14:21:28.289: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 16 14:21:28.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1277 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 16 14:21:28.507: INFO: stderr: "I0316 14:21:28.432506 2861 log.go:172] (0xc000972370) (0xc0008506e0) Create stream\nI0316 14:21:28.432566 2861 log.go:172] (0xc000972370) (0xc0008506e0) Stream added, broadcasting: 1\nI0316 14:21:28.434965 2861 log.go:172] (0xc000972370) Reply frame received for 1\nI0316 14:21:28.435016 2861 log.go:172] (0xc000972370) (0xc0007220a0) Create stream\nI0316 14:21:28.435042 2861 log.go:172] (0xc000972370) (0xc0007220a0) Stream added, broadcasting: 3\nI0316 14:21:28.435710 2861 log.go:172] (0xc000972370) Reply frame received for 3\nI0316 14:21:28.435728 2861 log.go:172] (0xc000972370) (0xc000850780) Create stream\nI0316 14:21:28.435734 2861 log.go:172] (0xc000972370) (0xc000850780) Stream added, broadcasting: 5\nI0316 14:21:28.436477 2861 log.go:172] (0xc000972370) Reply frame received for 5\nI0316 14:21:28.477784 2861 log.go:172] (0xc000972370) Data frame received for 5\nI0316 14:21:28.477812 2861 log.go:172] (0xc000850780) (5) Data frame handling\nI0316 14:21:28.477831 2861 log.go:172] (0xc000850780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0316 
14:21:28.500389 2861 log.go:172] (0xc000972370) Data frame received for 3\nI0316 14:21:28.500425 2861 log.go:172] (0xc0007220a0) (3) Data frame handling\nI0316 14:21:28.500474 2861 log.go:172] (0xc0007220a0) (3) Data frame sent\nI0316 14:21:28.500515 2861 log.go:172] (0xc000972370) Data frame received for 3\nI0316 14:21:28.500534 2861 log.go:172] (0xc0007220a0) (3) Data frame handling\nI0316 14:21:28.500600 2861 log.go:172] (0xc000972370) Data frame received for 5\nI0316 14:21:28.500630 2861 log.go:172] (0xc000850780) (5) Data frame handling\nI0316 14:21:28.502838 2861 log.go:172] (0xc000972370) Data frame received for 1\nI0316 14:21:28.502864 2861 log.go:172] (0xc0008506e0) (1) Data frame handling\nI0316 14:21:28.502880 2861 log.go:172] (0xc0008506e0) (1) Data frame sent\nI0316 14:21:28.502899 2861 log.go:172] (0xc000972370) (0xc0008506e0) Stream removed, broadcasting: 1\nI0316 14:21:28.502990 2861 log.go:172] (0xc000972370) Go away received\nI0316 14:21:28.503296 2861 log.go:172] (0xc000972370) (0xc0008506e0) Stream removed, broadcasting: 1\nI0316 14:21:28.503313 2861 log.go:172] (0xc000972370) (0xc0007220a0) Stream removed, broadcasting: 3\nI0316 14:21:28.503322 2861 log.go:172] (0xc000972370) (0xc000850780) Stream removed, broadcasting: 5\n" Mar 16 14:21:28.507: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 16 14:21:28.507: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 16 14:21:28.507: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 14:21:28.510: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 16 14:21:38.517: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 16 14:21:38.517: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 16 14:21:38.517: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 16 14:21:38.526: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999516s Mar 16 14:21:39.532: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996545122s Mar 16 14:21:40.537: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991188649s Mar 16 14:21:41.542: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985946759s Mar 16 14:21:42.546: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.981115439s Mar 16 14:21:43.552: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.976254194s Mar 16 14:21:44.558: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.970692017s Mar 16 14:21:45.562: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.965187998s Mar 16 14:21:46.568: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.960596526s Mar 16 14:21:47.573: INFO: Verifying statefulset ss doesn't scale past 3 for another 955.182886ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-1277 Mar 16 14:21:48.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1277 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 14:21:48.767: INFO: stderr: "I0316 14:21:48.700467 2881 log.go:172] (0xc0009aa420) (0xc000220820) Create stream\nI0316 14:21:48.700517 2881 log.go:172] (0xc0009aa420) (0xc000220820) Stream added, broadcasting: 1\nI0316 
14:21:48.706874 2881 log.go:172] (0xc0009aa420) Reply frame received for 1\nI0316 14:21:48.706914 2881 log.go:172] (0xc0009aa420) (0xc000220000) Create stream\nI0316 14:21:48.706925 2881 log.go:172] (0xc0009aa420) (0xc000220000) Stream added, broadcasting: 3\nI0316 14:21:48.707789 2881 log.go:172] (0xc0009aa420) Reply frame received for 3\nI0316 14:21:48.707834 2881 log.go:172] (0xc0009aa420) (0xc00027e1e0) Create stream\nI0316 14:21:48.707855 2881 log.go:172] (0xc0009aa420) (0xc00027e1e0) Stream added, broadcasting: 5\nI0316 14:21:48.708719 2881 log.go:172] (0xc0009aa420) Reply frame received for 5\nI0316 14:21:48.759572 2881 log.go:172] (0xc0009aa420) Data frame received for 5\nI0316 14:21:48.759614 2881 log.go:172] (0xc00027e1e0) (5) Data frame handling\nI0316 14:21:48.759643 2881 log.go:172] (0xc00027e1e0) (5) Data frame sent\nI0316 14:21:48.759662 2881 log.go:172] (0xc0009aa420) Data frame received for 5\nI0316 14:21:48.759676 2881 log.go:172] (0xc00027e1e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0316 14:21:48.759696 2881 log.go:172] (0xc0009aa420) Data frame received for 3\nI0316 14:21:48.759712 2881 log.go:172] (0xc000220000) (3) Data frame handling\nI0316 14:21:48.759728 2881 log.go:172] (0xc000220000) (3) Data frame sent\nI0316 14:21:48.759756 2881 log.go:172] (0xc0009aa420) Data frame received for 3\nI0316 14:21:48.759771 2881 log.go:172] (0xc000220000) (3) Data frame handling\nI0316 14:21:48.761573 2881 log.go:172] (0xc0009aa420) Data frame received for 1\nI0316 14:21:48.761621 2881 log.go:172] (0xc000220820) (1) Data frame handling\nI0316 14:21:48.761656 2881 log.go:172] (0xc000220820) (1) Data frame sent\nI0316 14:21:48.761685 2881 log.go:172] (0xc0009aa420) (0xc000220820) Stream removed, broadcasting: 1\nI0316 14:21:48.761718 2881 log.go:172] (0xc0009aa420) Go away received\nI0316 14:21:48.762095 2881 log.go:172] (0xc0009aa420) (0xc000220820) Stream removed, broadcasting: 1\nI0316 14:21:48.762119 2881 log.go:172] (0xc0009aa420) (0xc000220000) Stream removed, broadcasting: 3\nI0316 14:21:48.762131 2881 log.go:172] (0xc0009aa420) (0xc00027e1e0) Stream removed, broadcasting: 5\n" Mar 16 14:21:48.768: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 16 14:21:48.768: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 16 14:21:48.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1277 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 14:21:48.985: INFO: stderr: "I0316 14:21:48.923165 2901 log.go:172] (0xc000aa2630) (0xc0006b8d20) Create stream\nI0316 14:21:48.923245 2901 log.go:172] (0xc000aa2630) (0xc0006b8d20) Stream added, broadcasting: 1\nI0316 14:21:48.927675 2901 log.go:172] (0xc000aa2630) Reply frame received for 1\nI0316 14:21:48.927731 2901 log.go:172] (0xc000aa2630) (0xc0006b8460) Create stream\nI0316 14:21:48.927752 2901 log.go:172] (0xc000aa2630) (0xc0006b8460) Stream added, broadcasting: 3\nI0316 14:21:48.929003 2901 log.go:172] (0xc000aa2630) Reply frame received for 3\nI0316 14:21:48.929036 2901 log.go:172] (0xc000aa2630) (0xc00018a000) Create stream\nI0316 14:21:48.929045 2901 log.go:172] (0xc000aa2630) (0xc00018a000) Stream added, broadcasting: 5\nI0316 14:21:48.930054 2901 log.go:172] (0xc000aa2630) Reply frame received for 5\nI0316 14:21:48.980549 2901 log.go:172] (0xc000aa2630) Data frame received for 5\nI0316 14:21:48.980593 
2901 log.go:172] (0xc00018a000) (5) Data frame handling\nI0316 14:21:48.980606 2901 log.go:172] (0xc00018a000) (5) Data frame sent\nI0316 14:21:48.980615 2901 log.go:172] (0xc000aa2630) Data frame received for 5\nI0316 14:21:48.980623 2901 log.go:172] (0xc00018a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0316 14:21:48.980667 2901 log.go:172] (0xc000aa2630) Data frame received for 3\nI0316 14:21:48.980705 2901 log.go:172] (0xc0006b8460) (3) Data frame handling\nI0316 14:21:48.980729 2901 log.go:172] (0xc0006b8460) (3) Data frame sent\nI0316 14:21:48.980739 2901 log.go:172] (0xc000aa2630) Data frame received for 3\nI0316 14:21:48.980744 2901 log.go:172] (0xc0006b8460) (3) Data frame handling\nI0316 14:21:48.982075 2901 log.go:172] (0xc000aa2630) Data frame received for 1\nI0316 14:21:48.982090 2901 log.go:172] (0xc0006b8d20) (1) Data frame handling\nI0316 14:21:48.982100 2901 log.go:172] (0xc0006b8d20) (1) Data frame sent\nI0316 14:21:48.982209 2901 log.go:172] (0xc000aa2630) (0xc0006b8d20) Stream removed, broadcasting: 1\nI0316 14:21:48.982244 2901 log.go:172] (0xc000aa2630) Go away received\nI0316 14:21:48.982518 2901 log.go:172] (0xc000aa2630) (0xc0006b8d20) Stream removed, broadcasting: 1\nI0316 14:21:48.982536 2901 log.go:172] (0xc000aa2630) (0xc0006b8460) Stream removed, broadcasting: 3\nI0316 14:21:48.982550 2901 log.go:172] (0xc000aa2630) (0xc00018a000) Stream removed, broadcasting: 5\n" Mar 16 14:21:48.986: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 16 14:21:48.986: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 16 14:21:48.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1277 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 14:21:49.188: INFO: stderr: "I0316 14:21:49.118914 2922 log.go:172] (0xc0009da4d0) (0xc0006426e0) Create stream\nI0316 14:21:49.118981 2922 log.go:172] (0xc0009da4d0) (0xc0006426e0) Stream added, broadcasting: 1\nI0316 14:21:49.122582 2922 log.go:172] (0xc0009da4d0) Reply frame received for 1\nI0316 14:21:49.122653 2922 log.go:172] (0xc0009da4d0) (0xc000642000) Create stream\nI0316 14:21:49.122681 2922 log.go:172] (0xc0009da4d0) (0xc000642000) Stream added, broadcasting: 3\nI0316 14:21:49.123709 2922 log.go:172] (0xc0009da4d0) Reply frame received for 3\nI0316 14:21:49.123751 2922 log.go:172] (0xc0009da4d0) (0xc000556140) Create stream\nI0316 14:21:49.123772 2922 log.go:172] (0xc0009da4d0) (0xc000556140) Stream added, broadcasting: 5\nI0316 14:21:49.124773 2922 log.go:172] (0xc0009da4d0) Reply frame received for 5\nI0316 14:21:49.180018 2922 log.go:172] (0xc0009da4d0) Data frame received for 3\nI0316 14:21:49.180060 2922 log.go:172] (0xc000642000) (3) Data frame handling\nI0316 14:21:49.180081 2922 log.go:172] (0xc000642000) (3) Data frame sent\nI0316 14:21:49.180097 2922 log.go:172] (0xc0009da4d0) Data frame received for 3\nI0316 14:21:49.180112 2922 log.go:172] (0xc000642000) (3) Data frame handling\nI0316 14:21:49.181514 2922 log.go:172] (0xc0009da4d0) Data frame received for 5\nI0316 14:21:49.181544 2922 log.go:172] (0xc000556140) (5) Data frame handling\nI0316 14:21:49.181564 2922 log.go:172] (0xc000556140) (5) Data frame sent\nI0316 14:21:49.181581 2922 log.go:172] (0xc0009da4d0) Data frame received for 5\nI0316 14:21:49.181597 2922 log.go:172] (0xc000556140) (5) Data frame handling\n+ mv -v /tmp/index.html 
/usr/share/nginx/html/\nI0316 14:21:49.184565 2922 log.go:172] (0xc0009da4d0) Data frame received for 1\nI0316 14:21:49.184582 2922 log.go:172] (0xc0006426e0) (1) Data frame handling\nI0316 14:21:49.184592 2922 log.go:172] (0xc0006426e0) (1) Data frame sent\nI0316 14:21:49.184605 2922 log.go:172] (0xc0009da4d0) (0xc0006426e0) Stream removed, broadcasting: 1\nI0316 14:21:49.184624 2922 log.go:172] (0xc0009da4d0) Go away received\nI0316 14:21:49.185203 2922 log.go:172] (0xc0009da4d0) (0xc0006426e0) Stream removed, broadcasting: 1\nI0316 14:21:49.185225 2922 log.go:172] (0xc0009da4d0) (0xc000642000) Stream removed, broadcasting: 3\nI0316 14:21:49.185232 2922 log.go:172] (0xc0009da4d0) (0xc000556140) Stream removed, broadcasting: 5\n" Mar 16 14:21:49.188: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 16 14:21:49.188: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 16 14:21:49.188: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 16 14:22:19.204: INFO: Deleting all statefulset in ns statefulset-1277 Mar 16 14:22:19.207: INFO: Scaling statefulset ss to 0 Mar 16 14:22:19.216: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 14:22:19.218: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:22:19.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1277" for this suite. 
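Why the repeated "mv index.html" exec calls above work as a health toggle: the stateful pods serve their readiness check from that file, so moving it away flips each pod to Ready=false, and with the default OrderedReady pod management the controller halts scaling until readiness returns; moving the file back heals the check, after which scale-up proceeds in order and scale-down proceeds in reverse order. A minimal sketch of the relevant spec fields, using the v1.15-era k8s.io/api field names; the probe path and port are assumptions:

    package main

    import (
    	"fmt"

    	appsv1 "k8s.io/api/apps/v1"
    	v1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
    	ss := appsv1.StatefulSet{
    		Spec: appsv1.StatefulSetSpec{
    			// OrderedReady (the default) is what makes scaling halt on an
    			// unhealthy pod and otherwise proceed one pod at a time.
    			PodManagementPolicy: appsv1.OrderedReadyPodManagement,
    			Template: v1.PodTemplateSpec{
    				Spec: v1.PodSpec{
    					Containers: []v1.Container{{
    						Name:  "webserver",
    						Image: "nginx",
    						// Readiness is served from the file the test moves
    						// in and out of the web root.
    						ReadinessProbe: &v1.Probe{
    							Handler: v1.Handler{
    								HTTPGet: &v1.HTTPGetAction{
    									Path: "/index.html",
    									Port: intstr.FromInt(80),
    								},
    							},
    						},
    					}},
    				},
    			},
    		},
    	}
    	fmt.Println(ss.Spec.PodManagementPolicy)
    }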
Mar 16 14:22:25.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:22:25.321: INFO: namespace statefulset-1277 deletion completed in 6.090476971s • [SLOW TEST:100.634 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:22:25.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-mvvp STEP: Creating a pod to test atomic-volume-subpath Mar 16 14:22:25.444: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-mvvp" in namespace "subpath-1743" to be "success or failure" Mar 16 14:22:25.458: INFO: Pod "pod-subpath-test-secret-mvvp": Phase="Pending", Reason="", readiness=false. Elapsed: 14.027231ms Mar 16 14:22:27.462: INFO: Pod "pod-subpath-test-secret-mvvp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017753947s Mar 16 14:22:29.466: INFO: Pod "pod-subpath-test-secret-mvvp": Phase="Running", Reason="", readiness=true. Elapsed: 4.021700639s Mar 16 14:22:31.470: INFO: Pod "pod-subpath-test-secret-mvvp": Phase="Running", Reason="", readiness=true. Elapsed: 6.025799364s Mar 16 14:22:33.474: INFO: Pod "pod-subpath-test-secret-mvvp": Phase="Running", Reason="", readiness=true. Elapsed: 8.030013748s Mar 16 14:22:35.477: INFO: Pod "pod-subpath-test-secret-mvvp": Phase="Running", Reason="", readiness=true. Elapsed: 10.033319194s Mar 16 14:22:37.481: INFO: Pod "pod-subpath-test-secret-mvvp": Phase="Running", Reason="", readiness=true. Elapsed: 12.037290517s Mar 16 14:22:39.485: INFO: Pod "pod-subpath-test-secret-mvvp": Phase="Running", Reason="", readiness=true. Elapsed: 14.041478732s Mar 16 14:22:41.489: INFO: Pod "pod-subpath-test-secret-mvvp": Phase="Running", Reason="", readiness=true. Elapsed: 16.04482867s Mar 16 14:22:43.493: INFO: Pod "pod-subpath-test-secret-mvvp": Phase="Running", Reason="", readiness=true. Elapsed: 18.049054295s Mar 16 14:22:45.497: INFO: Pod "pod-subpath-test-secret-mvvp": Phase="Running", Reason="", readiness=true. Elapsed: 20.053278033s Mar 16 14:22:47.502: INFO: Pod "pod-subpath-test-secret-mvvp": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.057701027s Mar 16 14:22:49.506: INFO: Pod "pod-subpath-test-secret-mvvp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.061547466s STEP: Saw pod success Mar 16 14:22:49.506: INFO: Pod "pod-subpath-test-secret-mvvp" satisfied condition "success or failure" Mar 16 14:22:49.508: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-mvvp container test-container-subpath-secret-mvvp: STEP: delete the pod Mar 16 14:22:49.550: INFO: Waiting for pod pod-subpath-test-secret-mvvp to disappear Mar 16 14:22:49.560: INFO: Pod pod-subpath-test-secret-mvvp no longer exists STEP: Deleting pod pod-subpath-test-secret-mvvp Mar 16 14:22:49.560: INFO: Deleting pod "pod-subpath-test-secret-mvvp" in namespace "subpath-1743" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:22:49.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1743" for this suite. Mar 16 14:22:55.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:22:55.676: INFO: namespace subpath-1743 deletion completed in 6.095323587s • [SLOW TEST:30.354 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:22:55.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-bc666168-43bf-4128-99d2-6225ff34b600 STEP: Creating a pod to test consume secrets Mar 16 14:22:55.760: INFO: Waiting up to 5m0s for pod "pod-secrets-56b47791-de8a-4c94-8edc-da1657cf4899" in namespace "secrets-6470" to be "success or failure" Mar 16 14:22:55.764: INFO: Pod "pod-secrets-56b47791-de8a-4c94-8edc-da1657cf4899": Phase="Pending", Reason="", readiness=false. Elapsed: 3.810999ms Mar 16 14:22:57.767: INFO: Pod "pod-secrets-56b47791-de8a-4c94-8edc-da1657cf4899": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00700634s Mar 16 14:22:59.772: INFO: Pod "pod-secrets-56b47791-de8a-4c94-8edc-da1657cf4899": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011277181s STEP: Saw pod success Mar 16 14:22:59.772: INFO: Pod "pod-secrets-56b47791-de8a-4c94-8edc-da1657cf4899" satisfied condition "success or failure" Mar 16 14:22:59.775: INFO: Trying to get logs from node iruya-worker pod pod-secrets-56b47791-de8a-4c94-8edc-da1657cf4899 container secret-env-test: STEP: delete the pod Mar 16 14:22:59.812: INFO: Waiting for pod pod-secrets-56b47791-de8a-4c94-8edc-da1657cf4899 to disappear Mar 16 14:22:59.837: INFO: Pod pod-secrets-56b47791-de8a-4c94-8edc-da1657cf4899 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:22:59.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6470" for this suite. Mar 16 14:23:05.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:23:05.933: INFO: namespace secrets-6470 deletion completed in 6.092072774s • [SLOW TEST:10.256 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:23:05.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-ac4599e4-df30-40fb-82c9-79ff1ddb3f4e STEP: Creating a pod to test consume configMaps Mar 16 14:23:06.020: INFO: Waiting up to 5m0s for pod "pod-configmaps-90895fc5-29f1-4381-a189-4bbd937590ce" in namespace "configmap-2041" to be "success or failure" Mar 16 14:23:06.023: INFO: Pod "pod-configmaps-90895fc5-29f1-4381-a189-4bbd937590ce": Phase="Pending", Reason="", readiness=false. Elapsed: 3.295513ms Mar 16 14:23:08.027: INFO: Pod "pod-configmaps-90895fc5-29f1-4381-a189-4bbd937590ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007621941s Mar 16 14:23:10.032: INFO: Pod "pod-configmaps-90895fc5-29f1-4381-a189-4bbd937590ce": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012051468s STEP: Saw pod success Mar 16 14:23:10.032: INFO: Pod "pod-configmaps-90895fc5-29f1-4381-a189-4bbd937590ce" satisfied condition "success or failure" Mar 16 14:23:10.035: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-90895fc5-29f1-4381-a189-4bbd937590ce container configmap-volume-test: STEP: delete the pod Mar 16 14:23:10.090: INFO: Waiting for pod pod-configmaps-90895fc5-29f1-4381-a189-4bbd937590ce to disappear Mar 16 14:23:10.095: INFO: Pod pod-configmaps-90895fc5-29f1-4381-a189-4bbd937590ce no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:23:10.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2041" for this suite. Mar 16 14:23:16.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:23:16.194: INFO: namespace configmap-2041 deletion completed in 6.096783541s • [SLOW TEST:10.261 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:23:16.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:23:16.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5637" for this suite. 
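Note on the kubelet test above: it creates a pod whose container command always fails and then checks the pod can still be deleted. A rough hand-run approximation (the real e2e pod keeps restarting /bin/false; this sketch fails once, and the pod name is hypothetical):

    # create a pod that exits non-zero immediately, then delete it
    kubectl run bin-false --image=busybox --restart=Never -n kubelet-test-5637 -- /bin/false
    kubectl delete pod bin-false -n kubelet-test-5637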
Mar 16 14:23:22.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:23:22.431: INFO: namespace kubelet-test-5637 deletion completed in 6.106982389s • [SLOW TEST:6.236 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:23:22.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 16 14:23:22.503: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a14d431-7b31-4bce-8a36-fe134217f11e" in namespace "downward-api-2410" to be "success or failure" Mar 16 14:23:22.530: INFO: Pod "downwardapi-volume-1a14d431-7b31-4bce-8a36-fe134217f11e": Phase="Pending", Reason="", readiness=false. Elapsed: 26.995941ms Mar 16 14:23:24.534: INFO: Pod "downwardapi-volume-1a14d431-7b31-4bce-8a36-fe134217f11e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030956624s Mar 16 14:23:26.539: INFO: Pod "downwardapi-volume-1a14d431-7b31-4bce-8a36-fe134217f11e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035283118s STEP: Saw pod success Mar 16 14:23:26.539: INFO: Pod "downwardapi-volume-1a14d431-7b31-4bce-8a36-fe134217f11e" satisfied condition "success or failure" Mar 16 14:23:26.542: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-1a14d431-7b31-4bce-8a36-fe134217f11e container client-container: STEP: delete the pod Mar 16 14:23:26.600: INFO: Waiting for pod downwardapi-volume-1a14d431-7b31-4bce-8a36-fe134217f11e to disappear Mar 16 14:23:26.627: INFO: Pod downwardapi-volume-1a14d431-7b31-4bce-8a36-fe134217f11e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:23:26.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2410" for this suite. 
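Note on the mode-on-item-file check above: it verifies that a downwardAPI volume item honors an explicit file mode. A minimal sketch of such a pod, applied via kubectl; the pod name, mount path, and 0400 mode here are illustrative assumptions, not the e2e test's exact spec:

    kubectl apply -n downward-api-2410 -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        # print the effective mode of the projected file (expect 400)
        command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: podname
            mode: 0400
            fieldRef:
              fieldPath: metadata.name
    EOF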
Mar 16 14:23:32.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:23:32.748: INFO: namespace downward-api-2410 deletion completed in 6.118355864s • [SLOW TEST:10.317 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:23:32.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 16 14:23:32.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5479' Mar 16 14:23:32.899: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 16 14:23:32.899: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Mar 16 14:23:32.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-5479' Mar 16 14:23:33.020: INFO: stderr: "" Mar 16 14:23:33.020: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:23:33.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5479" for this suite. 
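Note on the deprecation warning in the kubectl run output above: the log itself points to kubectl create as the replacement for --generator=job/v1. On clients where kubectl create job is available, a broadly equivalent invocation for the same image and namespace would be:

    kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine -n kubectl-5479
    kubectl get jobs -n kubectl-5479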
Mar 16 14:23:55.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:23:55.116: INFO: namespace kubectl-5479 deletion completed in 22.092990614s • [SLOW TEST:22.368 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:23:55.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 16 14:23:59.719: INFO: Successfully updated pod "annotationupdatea2a38e4a-3299-4b0a-a407-6e40f70e29e0" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:24:01.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6889" for this suite. 
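Note on the annotation-update test above: it relies on the kubelet refreshing downwardAPI volume contents after pod metadata changes, which is why "Successfully updated pod" appears a few seconds after the pod was created. Assuming the pod mounts metadata.annotations at a hypothetical /etc/podinfo/annotations, the same update can be observed by hand (the file catches up after the kubelet's next sync):

    kubectl annotate pod annotationupdatea2a38e4a-3299-4b0a-a407-6e40f70e29e0 -n downward-api-6889 demo=updated --overwrite
    kubectl exec annotationupdatea2a38e4a-3299-4b0a-a407-6e40f70e29e0 -n downward-api-6889 -- cat /etc/podinfo/annotations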
Mar 16 14:24:23.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:24:23.830: INFO: namespace downward-api-6889 deletion completed in 22.089793623s • [SLOW TEST:28.713 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:24:23.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Mar 16 14:24:23.895: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 16 14:24:23.901: INFO: Waiting for terminating namespaces to be deleted... Mar 16 14:24:23.903: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Mar 16 14:24:23.909: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 16 14:24:23.909: INFO: Container kindnet-cni ready: true, restart count 0 Mar 16 14:24:23.909: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 16 14:24:23.909: INFO: Container kube-proxy ready: true, restart count 0 Mar 16 14:24:23.909: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Mar 16 14:24:23.917: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Mar 16 14:24:23.917: INFO: Container kube-proxy ready: true, restart count 0 Mar 16 14:24:23.917: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Mar 16 14:24:23.917: INFO: Container kindnet-cni ready: true, restart count 0 Mar 16 14:24:23.917: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Mar 16 14:24:23.917: INFO: Container coredns ready: true, restart count 0 Mar 16 14:24:23.917: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Mar 16 14:24:23.917: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-dc044384-79b5-4a50-90e4-20e35f9e4dc2 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-dc044384-79b5-4a50-90e4-20e35f9e4dc2 off the node iruya-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-dc044384-79b5-4a50-90e4-20e35f9e4dc2 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:24:32.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7698" for this suite. Mar 16 14:24:44.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:24:44.210: INFO: namespace sched-pred-7698 deletion completed in 12.086498132s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:20.380 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:24:44.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 16 14:24:44.288: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 16 14:24:49.293: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:24:50.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8726" for this suite. 
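Note on "Then the pod is released" above: releasing works purely by labels. Once a pod's labels stop matching the ReplicationController's selector, the controller orphans it and creates a replacement to restore the replica count. A hand-run sketch, with a hypothetical pod name from the pod-release controller:

    # relabel one pod so the RC selector (name=pod-release) no longer matches it
    kubectl label pod pod-release-abcde -n replication-controller-8726 name=released --overwrite
    # the relabeled pod remains, now unowned, and the RC spawns a new matching pod
    kubectl get pods -n replication-controller-8726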
Mar 16 14:24:58.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:24:58.396: INFO: namespace replication-controller-8726 deletion completed in 8.081266685s • [SLOW TEST:14.185 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:24:58.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3060.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3060.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3060.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3060.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3060.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3060.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3060.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3060.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3060.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3060.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3060.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 188.146.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.146.188_udp@PTR;check="$$(dig +tcp +noall +answer +search 188.146.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.146.188_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3060.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3060.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3060.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3060.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3060.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3060.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3060.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3060.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3060.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3060.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3060.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 188.146.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.146.188_udp@PTR;check="$$(dig +tcp +noall +answer +search 188.146.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.146.188_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 16 14:25:06.575: INFO: Unable to read wheezy_udp@dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:06.578: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:06.581: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:06.583: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:06.604: INFO: Unable to read jessie_udp@dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:06.607: INFO: Unable to read jessie_tcp@dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:06.610: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:06.613: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:06.631: INFO: Lookups using dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9 failed for: [wheezy_udp@dns-test-service.dns-3060.svc.cluster.local wheezy_tcp@dns-test-service.dns-3060.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local jessie_udp@dns-test-service.dns-3060.svc.cluster.local jessie_tcp@dns-test-service.dns-3060.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local] Mar 16 14:25:11.636: INFO: Unable to read wheezy_udp@dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:11.639: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods 
dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:11.642: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:11.645: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:11.666: INFO: Unable to read jessie_udp@dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:11.668: INFO: Unable to read jessie_tcp@dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:11.671: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:11.673: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:11.692: INFO: Lookups using dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9 failed for: [wheezy_udp@dns-test-service.dns-3060.svc.cluster.local wheezy_tcp@dns-test-service.dns-3060.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local jessie_udp@dns-test-service.dns-3060.svc.cluster.local jessie_tcp@dns-test-service.dns-3060.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local] Mar 16 14:25:16.636: INFO: Unable to read wheezy_udp@dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:16.639: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:16.643: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:16.647: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:16.707: INFO: Unable to read jessie_udp@dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the 
server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:16.710: INFO: Unable to read jessie_tcp@dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:16.711: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:16.713: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:16.728: INFO: Lookups using dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9 failed for: [wheezy_udp@dns-test-service.dns-3060.svc.cluster.local wheezy_tcp@dns-test-service.dns-3060.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local jessie_udp@dns-test-service.dns-3060.svc.cluster.local jessie_tcp@dns-test-service.dns-3060.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local] Mar 16 14:25:21.636: INFO: Unable to read wheezy_udp@dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:21.639: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:21.642: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:21.645: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:21.663: INFO: Unable to read jessie_udp@dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:21.665: INFO: Unable to read jessie_tcp@dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:21.668: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:21.670: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local from pod 
dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:21.687: INFO: Lookups using dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9 failed for: [wheezy_udp@dns-test-service.dns-3060.svc.cluster.local wheezy_tcp@dns-test-service.dns-3060.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local jessie_udp@dns-test-service.dns-3060.svc.cluster.local jessie_tcp@dns-test-service.dns-3060.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local] Mar 16 14:25:26.635: INFO: Unable to read wheezy_udp@dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:26.638: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:26.641: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:26.644: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:26.660: INFO: Unable to read jessie_udp@dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:26.662: INFO: Unable to read jessie_tcp@dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:26.664: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:26.667: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:26.684: INFO: Lookups using dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9 failed for: [wheezy_udp@dns-test-service.dns-3060.svc.cluster.local wheezy_tcp@dns-test-service.dns-3060.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local jessie_udp@dns-test-service.dns-3060.svc.cluster.local jessie_tcp@dns-test-service.dns-3060.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local] Mar 16 
14:25:31.636: INFO: Unable to read wheezy_udp@dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:31.640: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:31.644: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:31.654: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:31.676: INFO: Unable to read jessie_udp@dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:31.679: INFO: Unable to read jessie_tcp@dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:31.682: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:31.685: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local from pod dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9: the server could not find the requested resource (get pods dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9) Mar 16 14:25:31.704: INFO: Lookups using dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9 failed for: [wheezy_udp@dns-test-service.dns-3060.svc.cluster.local wheezy_tcp@dns-test-service.dns-3060.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local jessie_udp@dns-test-service.dns-3060.svc.cluster.local jessie_tcp@dns-test-service.dns-3060.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3060.svc.cluster.local] Mar 16 14:25:36.705: INFO: DNS probes using dns-3060/dns-test-e3020fc2-62ca-4a39-86ec-8fb34f354ca9 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:25:37.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3060" for this suite. 
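Note on the DNS probes above: they cover A and SRV lookups over both UDP (+notcp) and TCP (+tcp), plus pod A records and PTR lookups, for both a regular and a headless service. The repeated "could not find the requested resource" failures are expected while the probe pod and DNS records converge; the test only fails if lookups never succeed within the retry window, and here they succeed at 14:25:36. The simplest spot check of the same service name, assuming a throwaway busybox pod in the test namespace:

    kubectl run dns-check --image=busybox:1.28 --restart=Never -n dns-3060 -- nslookup dns-test-service.dns-3060.svc.cluster.local
    kubectl logs dns-check -n dns-3060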
Mar 16 14:25:43.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:25:43.339: INFO: namespace dns-3060 deletion completed in 6.132955989s • [SLOW TEST:44.942 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:25:43.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 16 14:25:43.426: INFO: Waiting up to 5m0s for pod "downward-api-a2c75328-c58b-4ad2-914f-0810b9f1edc5" in namespace "downward-api-8049" to be "success or failure" Mar 16 14:25:43.428: INFO: Pod "downward-api-a2c75328-c58b-4ad2-914f-0810b9f1edc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28824ms Mar 16 14:25:45.433: INFO: Pod "downward-api-a2c75328-c58b-4ad2-914f-0810b9f1edc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006805693s Mar 16 14:25:47.437: INFO: Pod "downward-api-a2c75328-c58b-4ad2-914f-0810b9f1edc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011150835s STEP: Saw pod success Mar 16 14:25:47.437: INFO: Pod "downward-api-a2c75328-c58b-4ad2-914f-0810b9f1edc5" satisfied condition "success or failure" Mar 16 14:25:47.440: INFO: Trying to get logs from node iruya-worker pod downward-api-a2c75328-c58b-4ad2-914f-0810b9f1edc5 container dapi-container: STEP: delete the pod Mar 16 14:25:47.458: INFO: Waiting for pod downward-api-a2c75328-c58b-4ad2-914f-0810b9f1edc5 to disappear Mar 16 14:25:47.462: INFO: Pod downward-api-a2c75328-c58b-4ad2-914f-0810b9f1edc5 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:25:47.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8049" for this suite. 
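Note on the host-IP test above: the node's IP reaches the container through a downward API env fieldRef on status.hostIP. A minimal sketch with hypothetical names:

    kubectl apply -n downward-api-8049 -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: hostip-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        # echo the injected host IP once and exit
        command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
        env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
    EOF
    kubectl logs hostip-demo -n downward-api-8049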
Mar 16 14:25:53.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:25:53.608: INFO: namespace downward-api-8049 deletion completed in 6.143359875s • [SLOW TEST:10.269 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:25:53.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 16 14:25:53.685: INFO: Creating deployment "nginx-deployment" Mar 16 14:25:53.690: INFO: Waiting for observed generation 1 Mar 16 14:25:55.702: INFO: Waiting for all required pods to come up Mar 16 14:25:55.707: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 16 14:26:03.716: INFO: Waiting for deployment "nginx-deployment" to complete Mar 16 14:26:03.722: INFO: Updating deployment "nginx-deployment" with a non-existent image Mar 16 14:26:03.728: INFO: Updating deployment nginx-deployment Mar 16 14:26:03.728: INFO: Waiting for observed generation 2 Mar 16 14:26:05.749: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 16 14:26:05.752: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 16 14:26:05.755: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Mar 16 14:26:05.762: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 16 14:26:05.762: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 16 14:26:05.764: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Mar 16 14:26:05.769: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Mar 16 14:26:05.769: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Mar 16 14:26:05.774: INFO: Updating deployment nginx-deployment Mar 16 14:26:05.774: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Mar 16 14:26:05.834: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 16 14:26:05.908: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 16 14:26:06.056: INFO: Deployment "nginx-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-642,SelfLink:/apis/apps/v1/namespaces/deployment-642/deployments/nginx-deployment,UID:ceba32fd-8377-4d31-8675-4d83ec7405b9,ResourceVersion:173537,Generation:3,CreationTimestamp:2020-03-16 14:25:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-03-16 14:26:04 +0000 UTC 2020-03-16 14:25:53 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-03-16 14:26:05 +0000 UTC 2020-03-16 14:26:05 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Mar 16 14:26:06.200: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-642,SelfLink:/apis/apps/v1/namespaces/deployment-642/replicasets/nginx-deployment-55fb7cb77f,UID:8f493d4c-3ccb-4bdf-9ecb-ab5bbe4769f3,ResourceVersion:173581,Generation:3,CreationTimestamp:2020-03-16 14:26:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment ceba32fd-8377-4d31-8675-4d83ec7405b9 0xc002ca1c87 0xc002ca1c88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 16 14:26:06.200: INFO: All old ReplicaSets of Deployment "nginx-deployment": Mar 16 14:26:06.200: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-642,SelfLink:/apis/apps/v1/namespaces/deployment-642/replicasets/nginx-deployment-7b8c6f4498,UID:77f8c72b-6035-41a1-81c7-306d80617ee4,ResourceVersion:173580,Generation:3,CreationTimestamp:2020-03-16 14:25:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment ceba32fd-8377-4d31-8675-4d83ec7405b9 0xc002ca1d57 0xc002ca1d58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Mar 16 14:26:06.258: INFO: Pod "nginx-deployment-55fb7cb77f-5bb2z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5bb2z,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-55fb7cb77f-5bb2z,UID:74b9cdf0-4fc4-4af2-bffb-9fe8ff1feefe,ResourceVersion:173513,Generation:0,CreationTimestamp:2020-03-16 14:26:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f493d4c-3ccb-4bdf-9ecb-ab5bbe4769f3 0xc0019786c7 0xc0019786c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001978740} {node.kubernetes.io/unreachable Exists NoExecute 0xc001978760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:03 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-16 14:26:03 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.258: INFO: Pod "nginx-deployment-55fb7cb77f-6vlf5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6vlf5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-55fb7cb77f-6vlf5,UID:f33a6ec4-f74f-49a2-ae65-499fa0a0d88f,ResourceVersion:173506,Generation:0,CreationTimestamp:2020-03-16 14:26:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f493d4c-3ccb-4bdf-9ecb-ab5bbe4769f3 0xc001978830 0xc001978831}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019788c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019788e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:03 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-16 14:26:03 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.258: INFO: Pod "nginx-deployment-55fb7cb77f-95s5k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-95s5k,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-55fb7cb77f-95s5k,UID:d44a4385-41dc-4de2-bb5b-173d949b14ad,ResourceVersion:173496,Generation:0,CreationTimestamp:2020-03-16 14:26:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f493d4c-3ccb-4bdf-9ecb-ab5bbe4769f3 0xc0019789b0 0xc0019789b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001978a30} {node.kubernetes.io/unreachable Exists NoExecute 0xc001978a50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:03 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-16 14:26:03 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.259: INFO: Pod "nginx-deployment-55fb7cb77f-blqdf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-blqdf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-55fb7cb77f-blqdf,UID:e06cd437-474b-4e76-851c-36e179222077,ResourceVersion:173567,Generation:0,CreationTimestamp:2020-03-16 14:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f493d4c-3ccb-4bdf-9ecb-ab5bbe4769f3 0xc001978b20 0xc001978b21}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001978ba0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001978bc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:06 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.259: INFO: Pod "nginx-deployment-55fb7cb77f-cnm6l" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cnm6l,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-55fb7cb77f-cnm6l,UID:b3f48538-926d-4ffd-b588-280da258c0fc,ResourceVersion:173582,Generation:0,CreationTimestamp:2020-03-16 14:26:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f493d4c-3ccb-4bdf-9ecb-ab5bbe4769f3 0xc001978c40 0xc001978c41}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001978cc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001978ce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:06 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.259: INFO: Pod "nginx-deployment-55fb7cb77f-gtpmq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gtpmq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-55fb7cb77f-gtpmq,UID:002d36b7-6c1f-40a6-89ed-cd19db9f45c2,ResourceVersion:173572,Generation:0,CreationTimestamp:2020-03-16 14:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f493d4c-3ccb-4bdf-9ecb-ab5bbe4769f3 0xc001978d60 0xc001978d61}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001978de0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001978e00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:06 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.259: INFO: Pod "nginx-deployment-55fb7cb77f-hc7wv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hc7wv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-55fb7cb77f-hc7wv,UID:c2da2073-20c2-4a96-8557-7afd86bd1718,ResourceVersion:173578,Generation:0,CreationTimestamp:2020-03-16 14:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f493d4c-3ccb-4bdf-9ecb-ab5bbe4769f3 0xc001978e80 
0xc001978e81}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001978f00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001978f20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:06 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.259: INFO: Pod "nginx-deployment-55fb7cb77f-jbf6l" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jbf6l,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-55fb7cb77f-jbf6l,UID:3255125c-8d3c-4b57-bf83-f7f2be3fa918,ResourceVersion:173586,Generation:0,CreationTimestamp:2020-03-16 14:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f493d4c-3ccb-4bdf-9ecb-ab5bbe4769f3 0xc001978fa0 0xc001978fa1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001979020} {node.kubernetes.io/unreachable Exists NoExecute 0xc001979040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:05 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-16 14:26:05 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.259: INFO: Pod "nginx-deployment-55fb7cb77f-q424t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-q424t,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-55fb7cb77f-q424t,UID:3a425223-c98b-4ade-aea3-307bddf28f69,ResourceVersion:173556,Generation:0,CreationTimestamp:2020-03-16 14:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f493d4c-3ccb-4bdf-9ecb-ab5bbe4769f3 0xc001979110 0xc001979111}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001979190} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019791b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:05 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.259: INFO: Pod "nginx-deployment-55fb7cb77f-qhljb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qhljb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-55fb7cb77f-qhljb,UID:4fbbf410-193d-4188-ae26-02a29a5723e6,ResourceVersion:173561,Generation:0,CreationTimestamp:2020-03-16 14:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f493d4c-3ccb-4bdf-9ecb-ab5bbe4769f3 0xc001979230 0xc001979231}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019792b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019792d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:05 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.259: INFO: Pod "nginx-deployment-55fb7cb77f-wzlk4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wzlk4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-55fb7cb77f-wzlk4,UID:7e4b9990-2669-43ed-83d6-ed3474d21910,ResourceVersion:173490,Generation:0,CreationTimestamp:2020-03-16 14:26:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f493d4c-3ccb-4bdf-9ecb-ab5bbe4769f3 0xc001979350 0xc001979351}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019793d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019793f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:03 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-16 14:26:03 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.260: INFO: Pod "nginx-deployment-55fb7cb77f-xpzpm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xpzpm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-55fb7cb77f-xpzpm,UID:eada70c0-76c5-4ccd-acee-da96c603a939,ResourceVersion:173516,Generation:0,CreationTimestamp:2020-03-16 14:26:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f493d4c-3ccb-4bdf-9ecb-ab5bbe4769f3 0xc0019794c0 0xc0019794c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001979540} {node.kubernetes.io/unreachable Exists NoExecute 0xc001979560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:03 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-16 14:26:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.260: INFO: Pod "nginx-deployment-55fb7cb77f-z5lj5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-z5lj5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-55fb7cb77f-z5lj5,UID:e88044f1-699d-40a2-8442-d82efdf5b1f1,ResourceVersion:173570,Generation:0,CreationTimestamp:2020-03-16 14:26:05 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f493d4c-3ccb-4bdf-9ecb-ab5bbe4769f3 0xc001979630 0xc001979631}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019796b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019796d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:06 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.260: INFO: Pod "nginx-deployment-7b8c6f4498-2lmgs" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2lmgs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-7b8c6f4498-2lmgs,UID:6ed9c9eb-d6fe-4843-88b1-8ff5bfd71194,ResourceVersion:173436,Generation:0,CreationTimestamp:2020-03-16 14:25:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 77f8c72b-6035-41a1-81c7-306d80617ee4 0xc001979750 0xc001979751}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019797c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019797e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:25:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:25:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.7,StartTime:2020-03-16 14:25:53 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-16 14:26:01 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://44bb2fc2877c9b70387e39fc9ea3bc7e5f83322c55f90fece8b27667207ade6c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.260: INFO: Pod "nginx-deployment-7b8c6f4498-2rpwg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2rpwg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-7b8c6f4498-2rpwg,UID:5c280ba1-f44e-4da1-a480-967a75f0b0b0,ResourceVersion:173450,Generation:0,CreationTimestamp:2020-03-16 14:25:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 77f8c72b-6035-41a1-81c7-306d80617ee4 0xc0019798b0 0xc0019798b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001979920} {node.kubernetes.io/unreachable Exists NoExecute 0xc001979940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:25:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:25:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.23,StartTime:2020-03-16 14:25:53 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-16 14:26:00 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6edf8f1cc5ea626de0bb2b4d8747698620efb01c113ef9e5d5549a025b73c3e3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.260: INFO: Pod "nginx-deployment-7b8c6f4498-4m6qr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4m6qr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-7b8c6f4498-4m6qr,UID:a4f24724-96a9-4ef5-8413-b981838cd563,ResourceVersion:173566,Generation:0,CreationTimestamp:2020-03-16 14:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 77f8c72b-6035-41a1-81c7-306d80617ee4 0xc001979a10 0xc001979a11}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001979a80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001979aa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:06 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.260: INFO: Pod "nginx-deployment-7b8c6f4498-b77sd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-b77sd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-7b8c6f4498-b77sd,UID:bf16626a-c2e0-415c-a412-a435a09fa281,ResourceVersion:173575,Generation:0,CreationTimestamp:2020-03-16 14:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 77f8c72b-6035-41a1-81c7-306d80617ee4 0xc001979b20 0xc001979b21}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001979b90} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001979bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:05 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-16 14:26:05 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.260: INFO: Pod "nginx-deployment-7b8c6f4498-bcgbj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bcgbj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-7b8c6f4498-bcgbj,UID:b7c858b6-f163-4d4f-bd2b-9df024666a43,ResourceVersion:173427,Generation:0,CreationTimestamp:2020-03-16 14:25:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 77f8c72b-6035-41a1-81c7-306d80617ee4 0xc001979c70 0xc001979c71}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001979ce0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001979d10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:25:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:00 +0000 UTC } {ContainersReady 
True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:25:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.24,StartTime:2020-03-16 14:25:53 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-16 14:26:00 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7a21d1089547736ab21e1a4e411a0ba5ca9aea20f63c5915b034525337bd9da4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.260: INFO: Pod "nginx-deployment-7b8c6f4498-bptsz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bptsz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-7b8c6f4498-bptsz,UID:07656713-df1b-4ff6-ac03-5818cdd27a21,ResourceVersion:173569,Generation:0,CreationTimestamp:2020-03-16 14:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 77f8c72b-6035-41a1-81c7-306d80617ee4 0xc001979de0 0xc001979de1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001979e50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001979e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:06 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.261: INFO: Pod "nginx-deployment-7b8c6f4498-c9sc8" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-c9sc8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-7b8c6f4498-c9sc8,UID:47c0e699-da22-4557-bbcc-3117ae77648e,ResourceVersion:173443,Generation:0,CreationTimestamp:2020-03-16 14:25:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 77f8c72b-6035-41a1-81c7-306d80617ee4 0xc001979ef0 0xc001979ef1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001979f60} {node.kubernetes.io/unreachable Exists NoExecute 0xc001979f80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:25:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:25:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.8,StartTime:2020-03-16 14:25:53 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-16 14:26:01 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3a2a05129d277dde31ef7d501bbd648c6e0038449b3f3e78daa50cc919c1dfa9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.261: INFO: Pod "nginx-deployment-7b8c6f4498-clrhb" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-clrhb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-7b8c6f4498-clrhb,UID:eb15dc57-8ec7-484f-8879-56c7795a9d5e,ResourceVersion:173545,Generation:0,CreationTimestamp:2020-03-16 14:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 77f8c72b-6035-41a1-81c7-306d80617ee4 0xc002936050 0xc002936051}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029360c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029360e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:05 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.261: INFO: Pod "nginx-deployment-7b8c6f4498-dzq2p" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dzq2p,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-7b8c6f4498-dzq2p,UID:cc834c10-85c6-4745-902d-d0bb09c74d7a,ResourceVersion:173551,Generation:0,CreationTimestamp:2020-03-16 14:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 77f8c72b-6035-41a1-81c7-306d80617ee4 0xc002936170 0xc002936171}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029361e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002936200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:05 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.261: INFO: Pod "nginx-deployment-7b8c6f4498-dzzh7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dzzh7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-7b8c6f4498-dzzh7,UID:03496aee-dd4e-4183-a770-8a7b48f2ff50,ResourceVersion:173577,Generation:0,CreationTimestamp:2020-03-16 14:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 77f8c72b-6035-41a1-81c7-306d80617ee4 0xc002936280 0xc002936281}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029362f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002936310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:06 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.261: INFO: Pod "nginx-deployment-7b8c6f4498-hlzzg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hlzzg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-7b8c6f4498-hlzzg,UID:a26434db-bff4-4c9a-8394-26f6353dd5d2,ResourceVersion:173446,Generation:0,CreationTimestamp:2020-03-16 14:25:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 77f8c72b-6035-41a1-81c7-306d80617ee4 0xc002936390 0xc002936391}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002936400} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002936420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:25:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:25:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.25,StartTime:2020-03-16 14:25:53 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-16 14:26:01 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6356250d2c7cd1f35141223ad66dbfdffcde65c87c5bc9ddcf78684de5deef26}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.261: INFO: Pod "nginx-deployment-7b8c6f4498-jmqxr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jmqxr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-7b8c6f4498-jmqxr,UID:c60e4fec-d3de-4fa8-bad4-d0556876f0b4,ResourceVersion:173568,Generation:0,CreationTimestamp:2020-03-16 14:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 77f8c72b-6035-41a1-81c7-306d80617ee4 0xc0029364f0 0xc0029364f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002936560} {node.kubernetes.io/unreachable Exists NoExecute 0xc002936580}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:06 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.261: INFO: Pod "nginx-deployment-7b8c6f4498-ksm67" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ksm67,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-7b8c6f4498-ksm67,UID:e60c8ae0-dacb-4ef1-ad26-a05f2a18f476,ResourceVersion:173573,Generation:0,CreationTimestamp:2020-03-16 14:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 77f8c72b-6035-41a1-81c7-306d80617ee4 0xc002936600 0xc002936601}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002936670} {node.kubernetes.io/unreachable Exists NoExecute 0xc002936690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:06 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.261: INFO: Pod "nginx-deployment-7b8c6f4498-n5vtb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-n5vtb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-7b8c6f4498-n5vtb,UID:7eee9c9b-8fd3-4287-8d8e-6ce7634268d6,ResourceVersion:173423,Generation:0,CreationTimestamp:2020-03-16 14:25:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 77f8c72b-6035-41a1-81c7-306d80617ee4 0xc002936710 
0xc002936711}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002936780} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029367a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:25:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:00 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:25:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.5,StartTime:2020-03-16 14:25:53 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-16 14:25:59 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5fe6bacc389082cbff7627cf55e4292b1b1df6e3a00e99d73a69d83666d850b5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.261: INFO: Pod "nginx-deployment-7b8c6f4498-pdr9k" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pdr9k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-7b8c6f4498-pdr9k,UID:9827e892-1b5e-4db7-97f7-77d185a0f17b,ResourceVersion:173452,Generation:0,CreationTimestamp:2020-03-16 14:25:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 77f8c72b-6035-41a1-81c7-306d80617ee4 0xc002936870 0xc002936871}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029368e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002936900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:25:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:25:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.26,StartTime:2020-03-16 14:25:53 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-16 14:26:01 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6a0e093e1a7d67b26d616ba219e14e272c989ecd2bc3a76308ed1c0ef370a88d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.262: INFO: Pod "nginx-deployment-7b8c6f4498-vllh4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vllh4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-7b8c6f4498-vllh4,UID:23a25827-b14e-41b0-a9ae-a5afdb5ed650,ResourceVersion:173548,Generation:0,CreationTimestamp:2020-03-16 14:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 77f8c72b-6035-41a1-81c7-306d80617ee4 0xc0029369f0 0xc0029369f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002936a80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002936aa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:05 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.262: INFO: Pod "nginx-deployment-7b8c6f4498-vr5cx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vr5cx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-7b8c6f4498-vr5cx,UID:e664966e-781a-42e1-a6d4-8548e2bd88d1,ResourceVersion:173557,Generation:0,CreationTimestamp:2020-03-16 14:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 77f8c72b-6035-41a1-81c7-306d80617ee4 0xc002936b20 0xc002936b21}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002936b90} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002936bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:05 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.262: INFO: Pod "nginx-deployment-7b8c6f4498-w8mnj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w8mnj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-7b8c6f4498-w8mnj,UID:fd39321a-59b9-4018-845f-57035e2018ca,ResourceVersion:173591,Generation:0,CreationTimestamp:2020-03-16 14:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 77f8c72b-6035-41a1-81c7-306d80617ee4 0xc002936c30 0xc002936c31}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002936ca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002936cc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:05 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-16 14:26:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.262: INFO: Pod "nginx-deployment-7b8c6f4498-zbxxg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zbxxg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-7b8c6f4498-zbxxg,UID:8efdf509-45cf-4694-a966-af83674edd5d,ResourceVersion:173565,Generation:0,CreationTimestamp:2020-03-16 14:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 77f8c72b-6035-41a1-81c7-306d80617ee4 0xc002936d80 0xc002936d81}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002936df0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002936e10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:26:05 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-16 14:26:05 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 16 14:26:06.262: INFO: Pod "nginx-deployment-7b8c6f4498-zr54g" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zr54g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-642,SelfLink:/api/v1/namespaces/deployment-642/pods/nginx-deployment-7b8c6f4498-zr54g,UID:696c1f76-d14a-4745-a417-b69c864719e6,ResourceVersion:173408,Generation:0,CreationTimestamp:2020-03-16 14:25:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 77f8c72b-6035-41a1-81c7-306d80617ee4 0xc002936ed0 0xc002936ed1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-slft7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slft7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-slft7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002936f40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002936f60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:25:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:25:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:25:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 14:25:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.4,StartTime:2020-03-16 14:25:53 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-16 14:25:58 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c44c97e662f0f3b8ea21f407cf6d276399b21d21ac4af1f2233671a504cd1ff6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:26:06.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-642" for this suite. 
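For context on the proportional-scaling behaviour the pod dumps above demonstrate (several pods of the new ReplicaSet Available while others are still Pending), the following is a minimal Go sketch of a comparable Deployment spec built with the k8s.io/api types. The replica count and the maxSurge/maxUnavailable values are illustrative assumptions; only the image and the name/pod-template-hash labels come from the log.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Surge/unavailability budgets decide how replicas are split between the
	// old and new ReplicaSets when the Deployment is scaled mid-rollout.
	maxSurge := intstr.FromInt(3)
	maxUnavailable := intstr.FromInt(2)

	deploy := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(10), // illustrative; the test scales this while a rollout is in flight
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"name": "nginx"},
			},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: map[string]string{"name": "nginx"},
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine", // image from the log
					}},
				},
			},
		},
	}
	fmt.Printf("%s: replicas=%d maxSurge=%s maxUnavailable=%s\n",
		deploy.Name, *deploy.Spec.Replicas, maxSurge.String(), maxUnavailable.String())
}

With a spec like this, scaling the Deployment while an update is in progress makes the controller add the new replicas to the old and new ReplicaSets in proportion to their current sizes, which is consistent with the mixed available/not-available picture logged above.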
Mar 16 14:26:22.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:26:22.721: INFO: namespace deployment-642 deletion completed in 16.351653717s • [SLOW TEST:29.113 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:26:22.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-6092 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 16 14:26:22.896: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 16 14:26:51.499: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.23:8080/dial?request=hostName&protocol=http&host=10.244.1.40&port=8080&tries=1'] Namespace:pod-network-test-6092 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 14:26:51.499: INFO: >>> kubeConfig: /root/.kube/config I0316 14:26:51.534377 6 log.go:172] (0xc000cca420) (0xc0010fcd20) Create stream I0316 14:26:51.534397 6 log.go:172] (0xc000cca420) (0xc0010fcd20) Stream added, broadcasting: 1 I0316 14:26:51.537009 6 log.go:172] (0xc000cca420) Reply frame received for 1 I0316 14:26:51.537046 6 log.go:172] (0xc000cca420) (0xc0010fcfa0) Create stream I0316 14:26:51.537061 6 log.go:172] (0xc000cca420) (0xc0010fcfa0) Stream added, broadcasting: 3 I0316 14:26:51.538350 6 log.go:172] (0xc000cca420) Reply frame received for 3 I0316 14:26:51.538394 6 log.go:172] (0xc000cca420) (0xc001d73400) Create stream I0316 14:26:51.538415 6 log.go:172] (0xc000cca420) (0xc001d73400) Stream added, broadcasting: 5 I0316 14:26:51.539431 6 log.go:172] (0xc000cca420) Reply frame received for 5 I0316 14:26:51.622711 6 log.go:172] (0xc000cca420) Data frame received for 3 I0316 14:26:51.622742 6 log.go:172] (0xc0010fcfa0) (3) Data frame handling I0316 14:26:51.622765 6 log.go:172] (0xc0010fcfa0) (3) Data frame sent I0316 14:26:51.623542 6 log.go:172] (0xc000cca420) Data frame received for 3 I0316 14:26:51.623569 6 log.go:172] (0xc000cca420) Data frame received for 5 I0316 14:26:51.623602 6 log.go:172] (0xc001d73400) (5) Data frame handling I0316 14:26:51.623625 6 log.go:172] (0xc0010fcfa0) (3) Data frame handling I0316 14:26:51.625846 6 log.go:172] (0xc000cca420) Data frame received for 1 I0316 14:26:51.625894 6 log.go:172] (0xc0010fcd20) (1) Data frame handling I0316 14:26:51.625947 6 log.go:172] (0xc0010fcd20) (1) Data frame sent 
I0316 14:26:51.625977 6 log.go:172] (0xc000cca420) (0xc0010fcd20) Stream removed, broadcasting: 1 I0316 14:26:51.626050 6 log.go:172] (0xc000cca420) Go away received I0316 14:26:51.626114 6 log.go:172] (0xc000cca420) (0xc0010fcd20) Stream removed, broadcasting: 1 I0316 14:26:51.626150 6 log.go:172] (0xc000cca420) (0xc0010fcfa0) Stream removed, broadcasting: 3 I0316 14:26:51.626170 6 log.go:172] (0xc000cca420) (0xc001d73400) Stream removed, broadcasting: 5 Mar 16 14:26:51.626: INFO: Waiting for endpoints: map[] Mar 16 14:26:51.630: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.23:8080/dial?request=hostName&protocol=http&host=10.244.2.22&port=8080&tries=1'] Namespace:pod-network-test-6092 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 14:26:51.630: INFO: >>> kubeConfig: /root/.kube/config I0316 14:26:51.666002 6 log.go:172] (0xc000ccb550) (0xc0010fd9a0) Create stream I0316 14:26:51.666026 6 log.go:172] (0xc000ccb550) (0xc0010fd9a0) Stream added, broadcasting: 1 I0316 14:26:51.668701 6 log.go:172] (0xc000ccb550) Reply frame received for 1 I0316 14:26:51.668747 6 log.go:172] (0xc000ccb550) (0xc0010fdd60) Create stream I0316 14:26:51.668760 6 log.go:172] (0xc000ccb550) (0xc0010fdd60) Stream added, broadcasting: 3 I0316 14:26:51.669898 6 log.go:172] (0xc000ccb550) Reply frame received for 3 I0316 14:26:51.669943 6 log.go:172] (0xc000ccb550) (0xc002b24000) Create stream I0316 14:26:51.669958 6 log.go:172] (0xc000ccb550) (0xc002b24000) Stream added, broadcasting: 5 I0316 14:26:51.671029 6 log.go:172] (0xc000ccb550) Reply frame received for 5 I0316 14:26:51.745711 6 log.go:172] (0xc000ccb550) Data frame received for 3 I0316 14:26:51.745753 6 log.go:172] (0xc0010fdd60) (3) Data frame handling I0316 14:26:51.745783 6 log.go:172] (0xc0010fdd60) (3) Data frame sent I0316 14:26:51.746396 6 log.go:172] (0xc000ccb550) Data frame received for 3 I0316 14:26:51.746425 6 log.go:172] (0xc0010fdd60) (3) Data frame handling I0316 14:26:51.746455 6 log.go:172] (0xc000ccb550) Data frame received for 5 I0316 14:26:51.746471 6 log.go:172] (0xc002b24000) (5) Data frame handling I0316 14:26:51.748479 6 log.go:172] (0xc000ccb550) Data frame received for 1 I0316 14:26:51.748517 6 log.go:172] (0xc0010fd9a0) (1) Data frame handling I0316 14:26:51.748555 6 log.go:172] (0xc0010fd9a0) (1) Data frame sent I0316 14:26:51.748578 6 log.go:172] (0xc000ccb550) (0xc0010fd9a0) Stream removed, broadcasting: 1 I0316 14:26:51.748602 6 log.go:172] (0xc000ccb550) Go away received I0316 14:26:51.748746 6 log.go:172] (0xc000ccb550) (0xc0010fd9a0) Stream removed, broadcasting: 1 I0316 14:26:51.748770 6 log.go:172] (0xc000ccb550) (0xc0010fdd60) Stream removed, broadcasting: 3 I0316 14:26:51.748783 6 log.go:172] (0xc000ccb550) (0xc002b24000) Stream removed, broadcasting: 5 Mar 16 14:26:51.748: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:26:51.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6092" for this suite. 
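The curl commands above target the /dial endpoint exposed by the e2e test containers: the host-test pod at 10.244.2.23 is asked to issue an HTTP request to each target pod IP and report the hostname it received. A small Go sketch of the same probe pattern follows; the pod IPs are the ones from this run, the endpoint semantics are as observed in the log, and the request is only reachable from inside the cluster network.

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/url"
)

// dialURL builds the probe URL pattern seen in the log: the pod at proxyIP is
// asked, via its /dial endpoint, to make an HTTP request to targetIP:port and
// return what it saw.
func dialURL(proxyIP, targetIP string, port int) string {
	q := url.Values{}
	q.Set("request", "hostName")
	q.Set("protocol", "http")
	q.Set("host", targetIP)
	q.Set("port", fmt.Sprint(port))
	q.Set("tries", "1")
	return fmt.Sprintf("http://%s:8080/dial?%s", proxyIP, q.Encode())
}

func main() {
	// Pod IPs from this run; adjust for your own cluster.
	u := dialURL("10.244.2.23", "10.244.1.40", 8080)
	resp, err := http.Get(u)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %s\n", u, body)
}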
Mar 16 14:27:13.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:27:13.854: INFO: namespace pod-network-test-6092 deletion completed in 22.100349475s • [SLOW TEST:51.133 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:27:13.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 16 14:27:14.504: INFO: Pod name wrapped-volume-race-8cadb6c9-d36c-4814-9be6-0773ef2637d0: Found 0 pods out of 5 Mar 16 14:27:19.510: INFO: Pod name wrapped-volume-race-8cadb6c9-d36c-4814-9be6-0773ef2637d0: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8cadb6c9-d36c-4814-9be6-0773ef2637d0 in namespace emptydir-wrapper-4270, will wait for the garbage collector to delete the pods Mar 16 14:27:35.612: INFO: Deleting ReplicationController wrapped-volume-race-8cadb6c9-d36c-4814-9be6-0773ef2637d0 took: 22.708737ms Mar 16 14:27:35.912: INFO: Terminating ReplicationController wrapped-volume-race-8cadb6c9-d36c-4814-9be6-0773ef2637d0 pods took: 300.251772ms STEP: Creating RC which spawns configmap-volume pods Mar 16 14:28:12.773: INFO: Pod name wrapped-volume-race-0bee160b-6afa-4d95-b3ba-d2bfc8b99845: Found 0 pods out of 5 Mar 16 14:28:17.780: INFO: Pod name wrapped-volume-race-0bee160b-6afa-4d95-b3ba-d2bfc8b99845: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-0bee160b-6afa-4d95-b3ba-d2bfc8b99845 in namespace emptydir-wrapper-4270, will wait for the garbage collector to delete the pods Mar 16 14:28:31.872: INFO: Deleting ReplicationController wrapped-volume-race-0bee160b-6afa-4d95-b3ba-d2bfc8b99845 took: 7.774267ms Mar 16 14:28:32.172: INFO: Terminating ReplicationController wrapped-volume-race-0bee160b-6afa-4d95-b3ba-d2bfc8b99845 pods took: 300.252653ms STEP: Creating RC which spawns configmap-volume pods Mar 16 14:29:12.611: INFO: Pod name wrapped-volume-race-df70e305-2224-4aa7-9847-37abe2ebc13f: Found 0 pods out of 5 Mar 16 14:29:17.619: INFO: Pod name wrapped-volume-race-df70e305-2224-4aa7-9847-37abe2ebc13f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController 
wrapped-volume-race-df70e305-2224-4aa7-9847-37abe2ebc13f in namespace emptydir-wrapper-4270, will wait for the garbage collector to delete the pods Mar 16 14:29:31.702: INFO: Deleting ReplicationController wrapped-volume-race-df70e305-2224-4aa7-9847-37abe2ebc13f took: 7.768493ms Mar 16 14:29:32.003: INFO: Terminating ReplicationController wrapped-volume-race-df70e305-2224-4aa7-9847-37abe2ebc13f pods took: 300.317649ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:30:13.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4270" for this suite. Mar 16 14:30:21.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:30:21.271: INFO: namespace emptydir-wrapper-4270 deletion completed in 8.080583126s • [SLOW TEST:187.416 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:30:21.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Mar 16 14:30:21.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 16 14:30:21.448: INFO: stderr: "" Mar 16 14:30:21.448: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:30:21.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3057" for this suite. 
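The stdout recorded for kubectl cluster-info above is colorized, which is why the raw log shows \x1b[0;32m-style escape sequences around "Kubernetes master" and the URLs. A minimal sketch of stripping those ANSI color codes when post-processing such captured output; the sample string is the first line from the log above.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// First line of the captured stdout, ANSI color escapes included.
	raw := "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n"

	// \x1b[...m is the SGR (color) escape; removing it leaves plain text.
	ansi := regexp.MustCompile(`\x1b\[[0-9;]*m`)
	fmt.Print(ansi.ReplaceAllString(raw, ""))
	// Output: Kubernetes master is running at https://172.30.12.66:32769
}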
Mar 16 14:30:27.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:30:27.547: INFO: namespace kubectl-3057 deletion completed in 6.093528595s • [SLOW TEST:6.275 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:30:27.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-621eb523-9bc5-4796-a13a-447e6d788927 STEP: Creating a pod to test consume secrets Mar 16 14:30:27.637: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0b499282-62fa-4ab7-81c7-d4a7e921d541" in namespace "projected-7457" to be "success or failure" Mar 16 14:30:27.640: INFO: Pod "pod-projected-secrets-0b499282-62fa-4ab7-81c7-d4a7e921d541": Phase="Pending", Reason="", readiness=false. Elapsed: 2.935351ms Mar 16 14:30:29.644: INFO: Pod "pod-projected-secrets-0b499282-62fa-4ab7-81c7-d4a7e921d541": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007289174s Mar 16 14:30:31.649: INFO: Pod "pod-projected-secrets-0b499282-62fa-4ab7-81c7-d4a7e921d541": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011831943s STEP: Saw pod success Mar 16 14:30:31.649: INFO: Pod "pod-projected-secrets-0b499282-62fa-4ab7-81c7-d4a7e921d541" satisfied condition "success or failure" Mar 16 14:30:31.652: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-0b499282-62fa-4ab7-81c7-d4a7e921d541 container projected-secret-volume-test: STEP: delete the pod Mar 16 14:30:31.672: INFO: Waiting for pod pod-projected-secrets-0b499282-62fa-4ab7-81c7-d4a7e921d541 to disappear Mar 16 14:30:31.692: INFO: Pod pod-projected-secrets-0b499282-62fa-4ab7-81c7-d4a7e921d541 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:30:31.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7457" for this suite. 
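The projected-secret test above mounts a secret through a projected volume with a non-default file mode and an fsGroup, then verifies a non-root user can read the file. A minimal Go sketch of the relevant pod-spec fragment using the corev1 types; the secret name, mode, and user/group IDs are illustrative assumptions (the conformance test generates its own values).

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0440)    // defaultMode: readable by owner and group only
	user := int64(1000)    // non-root UID (illustrative)
	fsGroup := int64(1001) // GID applied to the volume's files (illustrative)

	spec := corev1.PodSpec{
		SecurityContext: &corev1.PodSecurityContext{
			RunAsUser: &user,
			FSGroup:   &fsGroup,
		},
		Volumes: []corev1.Volume{{
			Name: "projected-secret-volume",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					DefaultMode: &mode,
					Sources: []corev1.VolumeProjection{{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{
								Name: "projected-secret-test", // illustrative secret name
							},
						},
					}},
				},
			},
		}},
	}
	fmt.Printf("volume %q: mode %04o, fsGroup %d\n", spec.Volumes[0].Name, mode, fsGroup)
}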
Mar 16 14:30:37.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:30:37.788: INFO: namespace projected-7457 deletion completed in 6.092460965s • [SLOW TEST:10.241 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:30:37.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 16 14:30:59.869: INFO: Container started at 2020-03-16 14:30:40 +0000 UTC, pod became ready at 2020-03-16 14:30:57 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 16 14:30:59.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8954" for this suite. 
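The timestamps above (container started 14:30:40, pod Ready 14:30:57) show the gap a readiness probe's initial delay enforces. A minimal sketch of a container carrying such a probe, written against the corev1 types as they were around the v1.15 libraries in use here (the embedded Handler field of Probe was renamed ProbeHandler in later releases); the image, path, port, and delay are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	c := corev1.Container{
		Name:  "test-webserver",
		Image: "docker.io/library/nginx:1.14-alpine", // illustrative image
		ReadinessProbe: &corev1.Probe{
			// The kubelet does not run the probe before this delay, so the pod
			// cannot turn Ready earlier, which is what the test asserts.
			InitialDelaySeconds: 15,
			PeriodSeconds:       5,
			Handler: corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/",
					Port: intstr.FromInt(80),
				},
			},
		},
	}
	fmt.Printf("%s: Ready no earlier than %ds after start\n",
		c.Name, c.ReadinessProbe.InitialDelaySeconds)
}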
Mar 16 14:31:21.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 14:31:21.966: INFO: namespace container-probe-8954 deletion completed in 22.092827676s • [SLOW TEST:44.177 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 16 14:31:21.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-856 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-856 STEP: Creating statefulset with conflicting port in namespace statefulset-856 STEP: Waiting until pod test-pod will start running in namespace statefulset-856 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-856 Mar 16 14:31:26.066: INFO: Observed stateful pod in namespace: statefulset-856, name: ss-0, uid: 52ccc42a-0ebf-4cc6-85fd-72ad8b414543, status phase: Pending. Waiting for statefulset controller to delete. Mar 16 14:31:32.149: INFO: Observed stateful pod in namespace: statefulset-856, name: ss-0, uid: 52ccc42a-0ebf-4cc6-85fd-72ad8b414543, status phase: Failed. Waiting for statefulset controller to delete. Mar 16 14:31:32.218: INFO: Observed stateful pod in namespace: statefulset-856, name: ss-0, uid: 52ccc42a-0ebf-4cc6-85fd-72ad8b414543, status phase: Failed. Waiting for statefulset controller to delete. 
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 14:31:21.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-856
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule the stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-856
STEP: Creating statefulset with conflicting port in namespace statefulset-856
STEP: Waiting until pod test-pod is running in namespace statefulset-856
STEP: Waiting until stateful pod ss-0 has been recreated and deleted at least once in namespace statefulset-856
Mar 16 14:31:26.066: INFO: Observed stateful pod in namespace: statefulset-856, name: ss-0, uid: 52ccc42a-0ebf-4cc6-85fd-72ad8b414543, status phase: Pending. Waiting for statefulset controller to delete.
Mar 16 14:31:32.149: INFO: Observed stateful pod in namespace: statefulset-856, name: ss-0, uid: 52ccc42a-0ebf-4cc6-85fd-72ad8b414543, status phase: Failed. Waiting for statefulset controller to delete.
Mar 16 14:31:32.218: INFO: Observed stateful pod in namespace: statefulset-856, name: ss-0, uid: 52ccc42a-0ebf-4cc6-85fd-72ad8b414543, status phase: Failed. Waiting for statefulset controller to delete.
Mar 16 14:31:32.230: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-856
STEP: Removing pod with conflicting port in namespace statefulset-856
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-856 and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Mar 16 14:31:36.312: INFO: Deleting all statefulsets in ns statefulset-856
Mar 16 14:31:36.316: INFO: Scaling statefulset ss to 0
Mar 16 14:31:46.335: INFO: Waiting for statefulset status.replicas to be updated to 0
Mar 16 14:31:46.338: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 14:31:46.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-856" for this suite.
Mar 16 14:31:52.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 14:31:52.450: INFO: namespace statefulset-856 deletion completed in 6.091956959s
• [SLOW TEST:30.484 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
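The sequence above is the eviction scenario: a plain pod is placed on a node holding a given host port, the StatefulSet's pod template requests the same port on the same node, so ss-0 lands in Failed and the controller deletes it; once the blocking pod is removed, ss-0 is recreated and runs. A sketch of the conflicting StatefulSet, with a hypothetical port number and image rather than the test's generated spec:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test          # headless service, created separately by the test
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 21017    # hypothetical; collides with the pre-placed test-pod
EOF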
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 14:31:52.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Mar 16 14:31:52.511: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Mar 16 14:31:53.276: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Mar 16 14:31:55.499: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719965913, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719965913, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719965913, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719965913, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 16 14:31:58.133: INFO: Waited 621.746294ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 14:31:58.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4297" for this suite.
Mar 16 14:32:04.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 14:32:04.793: INFO: namespace aggregator-4297 deletion completed in 6.224505678s
• [SLOW TEST:12.342 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
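"Registering the sample API server" means deploying an extension apiserver (a Deployment fronted by a Service, visible in the status dump above) and then creating an APIService object so the aggregation layer proxies one API group/version to it. A sketch of the registration object; the group, version, and service names here are illustrative assumptions, not the objects the test creates:

kubectl apply -f - <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com
spec:
  group: wardle.example.com        # group served by the extension apiserver
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  insecureSkipTLSVerify: true      # demo only; real registrations should set caBundle
  service:
    name: sample-api               # Service in front of the sample-apiserver Deployment
    namespace: default
EOF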
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 14:32:04.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-zrjz
STEP: Creating a pod to test atomic-volume-subpath
Mar 16 14:32:04.878: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-zrjz" in namespace "subpath-701" to be "success or failure"
Mar 16 14:32:04.899: INFO: Pod "pod-subpath-test-downwardapi-zrjz": Phase="Pending", Reason="", readiness=false. Elapsed: 20.927746ms
Mar 16 14:32:06.917: INFO: Pod "pod-subpath-test-downwardapi-zrjz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038208778s
Mar 16 14:32:08.921: INFO: Pod "pod-subpath-test-downwardapi-zrjz": Phase="Running", Reason="", readiness=true. Elapsed: 4.042757329s
Mar 16 14:32:10.925: INFO: Pod "pod-subpath-test-downwardapi-zrjz": Phase="Running", Reason="", readiness=true. Elapsed: 6.046976604s
Mar 16 14:32:12.929: INFO: Pod "pod-subpath-test-downwardapi-zrjz": Phase="Running", Reason="", readiness=true. Elapsed: 8.051094488s
Mar 16 14:32:14.934: INFO: Pod "pod-subpath-test-downwardapi-zrjz": Phase="Running", Reason="", readiness=true. Elapsed: 10.05549684s
Mar 16 14:32:16.939: INFO: Pod "pod-subpath-test-downwardapi-zrjz": Phase="Running", Reason="", readiness=true. Elapsed: 12.060212237s
Mar 16 14:32:18.943: INFO: Pod "pod-subpath-test-downwardapi-zrjz": Phase="Running", Reason="", readiness=true. Elapsed: 14.064436803s
Mar 16 14:32:20.947: INFO: Pod "pod-subpath-test-downwardapi-zrjz": Phase="Running", Reason="", readiness=true. Elapsed: 16.068785964s
Mar 16 14:32:22.951: INFO: Pod "pod-subpath-test-downwardapi-zrjz": Phase="Running", Reason="", readiness=true. Elapsed: 18.072817622s
Mar 16 14:32:24.956: INFO: Pod "pod-subpath-test-downwardapi-zrjz": Phase="Running", Reason="", readiness=true. Elapsed: 20.077564648s
Mar 16 14:32:26.961: INFO: Pod "pod-subpath-test-downwardapi-zrjz": Phase="Running", Reason="", readiness=true. Elapsed: 22.082236502s
Mar 16 14:32:28.965: INFO: Pod "pod-subpath-test-downwardapi-zrjz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.0863929s
STEP: Saw pod success
Mar 16 14:32:28.965: INFO: Pod "pod-subpath-test-downwardapi-zrjz" satisfied condition "success or failure"
Mar 16 14:32:28.968: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-zrjz container test-container-subpath-downwardapi-zrjz:
STEP: delete the pod
Mar 16 14:32:28.991: INFO: Waiting for pod pod-subpath-test-downwardapi-zrjz to disappear
Mar 16 14:32:28.995: INFO: Pod pod-subpath-test-downwardapi-zrjz no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-zrjz
Mar 16 14:32:28.995: INFO: Deleting pod "pod-subpath-test-downwardapi-zrjz" in namespace "subpath-701"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 14:32:28.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-701" for this suite.
Mar 16 14:32:35.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 14:32:35.122: INFO: namespace subpath-701 deletion completed in 6.120682265s
• [SLOW TEST:30.329 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
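The subpath spec mounts a downward API volume (an "atomic writer" volume, updated via an atomic symlink swap) and uses subPath to expose a single file from it while the container repeatedly reads it, which is why the pod stays Running for roughly twenty seconds before succeeding. A minimal sketch with illustrative names and loop count:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "for i in $(seq 1 10); do cat /probe/podname; sleep 2; done"]
    volumeMounts:
    - name: downward
      mountPath: /probe/podname
      subPath: podname            # mount one file out of the volume
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF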
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 14:32:35.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 16 14:32:35.177: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3932fd2f-18c0-40fe-99bd-b084aff6a7c0" in namespace "projected-7196" to be "success or failure"
Mar 16 14:32:35.194: INFO: Pod "downwardapi-volume-3932fd2f-18c0-40fe-99bd-b084aff6a7c0": Phase="Pending", Reason="", readiness=false. Elapsed: 17.09993ms
Mar 16 14:32:37.247: INFO: Pod "downwardapi-volume-3932fd2f-18c0-40fe-99bd-b084aff6a7c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069826779s
Mar 16 14:32:39.250: INFO: Pod "downwardapi-volume-3932fd2f-18c0-40fe-99bd-b084aff6a7c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072984635s
STEP: Saw pod success
Mar 16 14:32:39.250: INFO: Pod "downwardapi-volume-3932fd2f-18c0-40fe-99bd-b084aff6a7c0" satisfied condition "success or failure"
Mar 16 14:32:39.252: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-3932fd2f-18c0-40fe-99bd-b084aff6a7c0 container client-container:
STEP: delete the pod
Mar 16 14:32:39.279: INFO: Waiting for pod downwardapi-volume-3932fd2f-18c0-40fe-99bd-b084aff6a7c0 to disappear
Mar 16 14:32:39.290: INFO: Pod downwardapi-volume-3932fd2f-18c0-40fe-99bd-b084aff6a7c0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 14:32:39.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7196" for this suite.
Mar 16 14:32:45.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 14:32:45.376: INFO: namespace projected-7196 deletion completed in 6.082595985s
• [SLOW TEST:10.254 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
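Here the downward API is consumed through a projected volume, and the file content is the container's own CPU limit resolved via resourceFieldRef. A minimal sketch; the names and the limit value are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container   # required in volume items
              resource: limits.cpu
EOF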
SSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 14:32:45.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Mar 16 14:32:45.455: INFO: Waiting up to 5m0s for pod "downward-api-a7d39a6a-811a-4371-9842-b72a3ad9cd8e" in namespace "downward-api-3770" to be "success or failure"
Mar 16 14:32:45.498: INFO: Pod "downward-api-a7d39a6a-811a-4371-9842-b72a3ad9cd8e": Phase="Pending", Reason="", readiness=false. Elapsed: 43.011609ms
Mar 16 14:32:47.504: INFO: Pod "downward-api-a7d39a6a-811a-4371-9842-b72a3ad9cd8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048279767s
Mar 16 14:32:49.508: INFO: Pod "downward-api-a7d39a6a-811a-4371-9842-b72a3ad9cd8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052274211s
STEP: Saw pod success
Mar 16 14:32:49.508: INFO: Pod "downward-api-a7d39a6a-811a-4371-9842-b72a3ad9cd8e" satisfied condition "success or failure"
Mar 16 14:32:49.511: INFO: Trying to get logs from node iruya-worker2 pod downward-api-a7d39a6a-811a-4371-9842-b72a3ad9cd8e container dapi-container:
STEP: delete the pod
Mar 16 14:32:49.543: INFO: Waiting for pod downward-api-a7d39a6a-811a-4371-9842-b72a3ad9cd8e to disappear
Mar 16 14:32:49.555: INFO: Pod downward-api-a7d39a6a-811a-4371-9842-b72a3ad9cd8e no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 14:32:49.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3770" for this suite.
Mar 16 14:32:55.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 14:32:55.659: INFO: namespace downward-api-3770 deletion completed in 6.089268085s
• [SLOW TEST:10.283 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
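The [sig-node] variant injects pod metadata as environment variables instead of files; the UID comes from a metadata.uid field reference. A minimal sketch with an illustrative pod name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-uid-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # the pod's UID, as asserted by the spec
EOF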
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 14:32:55.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Mar 16 14:33:00.255: INFO: Successfully updated pod "labelsupdate4ca88dcf-2a2c-4ade-b065-74e9744ade97"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 14:33:02.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6514" for this suite.
Mar 16 14:33:24.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 14:33:24.435: INFO: namespace downward-api-6514 deletion completed in 22.135263217s
• [SLOW TEST:28.775 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
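This spec relies on downward API volumes being live: when the pod's labels change, the kubelet rewrites the mounted file within its sync period, which is what "Successfully updated pod" confirms. A sketch of such a pod plus the label change that triggers the rewrite; all names are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    stage: before
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
kubectl label pod labelsupdate-demo stage=after --overwrite   # /etc/podinfo/labels is rewritten in place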
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 14:33:24.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Mar 16 14:33:24.497: INFO: Waiting up to 5m0s for pod "downward-api-55023c99-1bdf-4afc-bdd9-05c8a86cca77" in namespace "downward-api-198" to be "success or failure"
Mar 16 14:33:24.546: INFO: Pod "downward-api-55023c99-1bdf-4afc-bdd9-05c8a86cca77": Phase="Pending", Reason="", readiness=false. Elapsed: 49.693849ms
Mar 16 14:33:26.550: INFO: Pod "downward-api-55023c99-1bdf-4afc-bdd9-05c8a86cca77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053095021s
Mar 16 14:33:28.554: INFO: Pod "downward-api-55023c99-1bdf-4afc-bdd9-05c8a86cca77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057364505s
STEP: Saw pod success
Mar 16 14:33:28.554: INFO: Pod "downward-api-55023c99-1bdf-4afc-bdd9-05c8a86cca77" satisfied condition "success or failure"
Mar 16 14:33:28.556: INFO: Trying to get logs from node iruya-worker2 pod downward-api-55023c99-1bdf-4afc-bdd9-05c8a86cca77 container dapi-container:
STEP: delete the pod
Mar 16 14:33:28.616: INFO: Waiting for pod downward-api-55023c99-1bdf-4afc-bdd9-05c8a86cca77 to disappear
Mar 16 14:33:28.627: INFO: Pod downward-api-55023c99-1bdf-4afc-bdd9-05c8a86cca77 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 14:33:28.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-198" for this suite.
Mar 16 14:33:34.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 14:33:34.708: INFO: namespace downward-api-198 deletion completed in 6.078794037s
• [SLOW TEST:10.273 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
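The companion env-var spec uses resourceFieldRef rather than fieldRef, so a container can observe its own requests and limits; in env vars the containerName may be omitted and defaults to the declaring container. A minimal sketch with illustrative resource values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: resources-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
EOF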
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 14:33:34.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Mar 16 14:33:34.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4064'
Mar 16 14:33:37.327: INFO: stderr: ""
Mar 16 14:33:37.327: INFO: stdout: "pod/pause created\n"
Mar 16 14:33:37.327: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Mar 16 14:33:37.327: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4064" to be "running and ready"
Mar 16 14:33:37.379: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 51.9834ms
Mar 16 14:33:39.383: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055747707s
Mar 16 14:33:41.388: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.060357282s
Mar 16 14:33:41.388: INFO: Pod "pause" satisfied condition "running and ready"
Mar 16 14:33:41.388: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Mar 16 14:33:41.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4064'
Mar 16 14:33:41.495: INFO: stderr: ""
Mar 16 14:33:41.495: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Mar 16 14:33:41.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4064'
Mar 16 14:33:41.605: INFO: stderr: ""
Mar 16 14:33:41.605: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n"
STEP: removing the label testing-label of a pod
Mar 16 14:33:41.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4064'
Mar 16 14:33:41.719: INFO: stderr: ""
Mar 16 14:33:41.719: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Mar 16 14:33:41.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4064'
Mar 16 14:33:41.813: INFO: stderr: ""
Mar 16 14:33:41.813: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n"
[AfterEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Mar 16 14:33:41.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4064'
Mar 16 14:33:41.912: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 16 14:33:41.912: INFO: stdout: "pod \"pause\" force deleted\n"
Mar 16 14:33:41.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4064'
Mar 16 14:33:42.001: INFO: stderr: "No resources found.\n"
Mar 16 14:33:42.001: INFO: stdout: ""
Mar 16 14:33:42.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4064 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 16 14:33:42.091: INFO: stderr: ""
Mar 16 14:33:42.091: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 14:33:42.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4064" for this suite.
Mar 16 14:33:48.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 14:33:48.186: INFO: namespace kubectl-4064 deletion completed in 6.09171015s
• [SLOW TEST:13.478 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
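The kubectl invocations above reduce to a short add/inspect/remove cycle. Stripped of the test's --kubeconfig and --namespace flags, and keeping the pod name used in the log:

kubectl label pods pause testing-label=testing-label-value   # add the label
kubectl get pod pause -L testing-label                       # TESTING-LABEL column shows the value
kubectl label pods pause testing-label-                      # a trailing '-' removes the label
kubectl get pod pause -L testing-label                       # column is now empty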
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 14:33:48.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-0d194569-6222-4c7f-9266-7591076f6d3d
STEP: Creating a pod to test consume configMaps
Mar 16 14:33:48.251: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-826e7b74-967b-4f69-a282-a5b7cf9ecfa9" in namespace "projected-2603" to be "success or failure"
Mar 16 14:33:48.313: INFO: Pod "pod-projected-configmaps-826e7b74-967b-4f69-a282-a5b7cf9ecfa9": Phase="Pending", Reason="", readiness=false. Elapsed: 62.467797ms
Mar 16 14:33:50.318: INFO: Pod "pod-projected-configmaps-826e7b74-967b-4f69-a282-a5b7cf9ecfa9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06679278s
Mar 16 14:33:52.322: INFO: Pod "pod-projected-configmaps-826e7b74-967b-4f69-a282-a5b7cf9ecfa9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071462505s
STEP: Saw pod success
Mar 16 14:33:52.323: INFO: Pod "pod-projected-configmaps-826e7b74-967b-4f69-a282-a5b7cf9ecfa9" satisfied condition "success or failure"
Mar 16 14:33:52.326: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-826e7b74-967b-4f69-a282-a5b7cf9ecfa9 container projected-configmap-volume-test:
STEP: delete the pod
Mar 16 14:33:52.371: INFO: Waiting for pod pod-projected-configmaps-826e7b74-967b-4f69-a282-a5b7cf9ecfa9 to disappear
Mar 16 14:33:52.380: INFO: Pod pod-projected-configmaps-826e7b74-967b-4f69-a282-a5b7cf9ecfa9 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 14:33:52.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2603" for this suite.
Mar 16 14:33:58.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 14:33:58.478: INFO: namespace projected-2603 deletion completed in 6.094940758s
• [SLOW TEST:10.292 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 16 14:33:58.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-8aebb77c-8e40-4161-b83a-627d580d74ca
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-8aebb77c-8e40-4161-b83a-627d580d74ca
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 16 14:34:04.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5153" for this suite.
Mar 16 14:34:26.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 14:34:26.692: INFO: namespace configmap-5153 deletion completed in 22.090287184s
• [SLOW TEST:28.214 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
Mar 16 14:34:26.693: INFO: Running AfterSuite actions on all nodes
Mar 16 14:34:26.693: INFO: Running AfterSuite actions on node 1
Mar 16 14:34:26.693: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 5926.969 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS
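For reference, the ConfigMap update behavior exercised in that final spec can be reproduced by hand: mount a ConfigMap as a volume, change its data, and watch the mounted file flip once the kubelet syncs (the same atomic symlink swap used by other atomic-writer volumes). The names and values here are illustrative, not the test's generated ones:

kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-watch-demo
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-cm
EOF
kubectl create configmap demo-cm --from-literal=data-1=value-2 --dry-run -o yaml | kubectl apply -f -
# within the kubelet sync period the container starts printing value-2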