I0421 12:55:56.902456 6 e2e.go:243] Starting e2e run "58d493cb-a5ae-4aa7-a91a-ec37020c1d44" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1587473756 - Will randomize all specs
Will run 215 of 4412 specs

Apr 21 12:55:57.098: INFO: >>> kubeConfig: /root/.kube/config
Apr 21 12:55:57.103: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 21 12:55:57.128: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 21 12:55:57.157: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 21 12:55:57.157: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 21 12:55:57.157: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 21 12:55:57.165: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 21 12:55:57.165: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 21 12:55:57.165: INFO: e2e test version: v1.15.11
Apr 21 12:55:57.166: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 12:55:57.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Apr 21 12:55:57.232: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Apr 21 12:55:57.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8584'
Apr 21 12:55:59.924: INFO: stderr: ""
Apr 21 12:55:59.924: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 21 12:55:59.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8584'
Apr 21 12:56:00.061: INFO: stderr: ""
Apr 21 12:56:00.061: INFO: stdout: "update-demo-nautilus-gf4js update-demo-nautilus-ttwxp "
Apr 21 12:56:00.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gf4js -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8584'
Apr 21 12:56:00.164: INFO: stderr: ""
Apr 21 12:56:00.164: INFO: stdout: ""
Apr 21 12:56:00.164: INFO: update-demo-nautilus-gf4js is created but not running
Apr 21 12:56:05.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8584'
Apr 21 12:56:05.270: INFO: stderr: ""
Apr 21 12:56:05.270: INFO: stdout: "update-demo-nautilus-gf4js update-demo-nautilus-ttwxp "
Apr 21 12:56:05.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gf4js -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8584'
Apr 21 12:56:05.360: INFO: stderr: ""
Apr 21 12:56:05.360: INFO: stdout: "true"
Apr 21 12:56:05.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gf4js -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8584'
Apr 21 12:56:05.445: INFO: stderr: ""
Apr 21 12:56:05.445: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 21 12:56:05.446: INFO: validating pod update-demo-nautilus-gf4js
Apr 21 12:56:05.450: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 21 12:56:05.450: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 21 12:56:05.450: INFO: update-demo-nautilus-gf4js is verified up and running
Apr 21 12:56:05.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ttwxp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8584'
Apr 21 12:56:05.545: INFO: stderr: ""
Apr 21 12:56:05.545: INFO: stdout: "true"
Apr 21 12:56:05.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ttwxp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8584'
Apr 21 12:56:05.651: INFO: stderr: ""
Apr 21 12:56:05.651: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 21 12:56:05.651: INFO: validating pod update-demo-nautilus-ttwxp
Apr 21 12:56:05.655: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 21 12:56:05.655: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 21 12:56:05.655: INFO: update-demo-nautilus-ttwxp is verified up and running
STEP: rolling-update to new replication controller
Apr 21 12:56:05.658: INFO: scanned /root for discovery docs:
Apr 21 12:56:05.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8584'
Apr 21 12:56:28.190: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Apr 21 12:56:28.190: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 21 12:56:28.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8584'
Apr 21 12:56:28.287: INFO: stderr: ""
Apr 21 12:56:28.287: INFO: stdout: "update-demo-kitten-9w4cc update-demo-kitten-kckmv "
Apr 21 12:56:28.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9w4cc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8584'
Apr 21 12:56:28.398: INFO: stderr: ""
Apr 21 12:56:28.399: INFO: stdout: "true"
Apr 21 12:56:28.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9w4cc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8584'
Apr 21 12:56:28.491: INFO: stderr: ""
Apr 21 12:56:28.491: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Apr 21 12:56:28.491: INFO: validating pod update-demo-kitten-9w4cc
Apr 21 12:56:28.496: INFO: got data: {
  "image": "kitten.jpg"
}
Apr 21 12:56:28.496: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Apr 21 12:56:28.496: INFO: update-demo-kitten-9w4cc is verified up and running
Apr 21 12:56:28.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kckmv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8584'
Apr 21 12:56:28.582: INFO: stderr: ""
Apr 21 12:56:28.582: INFO: stdout: "true"
Apr 21 12:56:28.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kckmv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8584'
Apr 21 12:56:28.694: INFO: stderr: ""
Apr 21 12:56:28.694: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Apr 21 12:56:28.694: INFO: validating pod update-demo-kitten-kckmv
Apr 21 12:56:28.700: INFO: got data: {
  "image": "kitten.jpg"
}
Apr 21 12:56:28.700: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Apr 21 12:56:28.700: INFO: update-demo-kitten-kckmv is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 12:56:28.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8584" for this suite.
Apr 21 12:56:52.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 12:56:52.787: INFO: namespace kubectl-8584 deletion completed in 24.08354283s

• [SLOW TEST:55.620 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 12:56:52.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-ba7554ca-1da4-4b77-b5e7-b419d3ac1877
STEP: Creating a pod to test consume configMaps
Apr 21 12:56:52.871: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-080798e7-afc8-47bc-affc-f1ebdfff91c4" in namespace "projected-3631" to be "success or failure"
Apr 21 12:56:52.884: INFO: Pod "pod-projected-configmaps-080798e7-afc8-47bc-affc-f1ebdfff91c4": Phase="Pending", Reason="", readiness=false. Elapsed: 13.585782ms
Apr 21 12:56:54.889: INFO: Pod "pod-projected-configmaps-080798e7-afc8-47bc-affc-f1ebdfff91c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017774654s
Apr 21 12:56:56.894: INFO: Pod "pod-projected-configmaps-080798e7-afc8-47bc-affc-f1ebdfff91c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022885161s
STEP: Saw pod success
Apr 21 12:56:56.894: INFO: Pod "pod-projected-configmaps-080798e7-afc8-47bc-affc-f1ebdfff91c4" satisfied condition "success or failure"
Apr 21 12:56:56.897: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-080798e7-afc8-47bc-affc-f1ebdfff91c4 container projected-configmap-volume-test:
STEP: delete the pod
Apr 21 12:56:56.934: INFO: Waiting for pod pod-projected-configmaps-080798e7-afc8-47bc-affc-f1ebdfff91c4 to disappear
Apr 21 12:56:56.937: INFO: Pod pod-projected-configmaps-080798e7-afc8-47bc-affc-f1ebdfff91c4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 12:56:56.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3631" for this suite.
Apr 21 12:57:02.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 12:57:03.029: INFO: namespace projected-3631 deletion completed in 6.088500692s

• [SLOW TEST:10.241 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
  should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 12:57:03.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Apr 21 12:57:03.064: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix715616335/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 12:57:03.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1142" for this suite.
Apr 21 12:57:09.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 12:57:09.231: INFO: namespace kubectl-1142 deletion completed in 6.092627181s

• [SLOW TEST:6.202 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 12:57:09.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 21 12:57:09.309: INFO: Waiting up to 5m0s for pod "downwardapi-volume-591e1889-abc4-4892-8a89-7c20f154f9ef" in namespace "projected-9411" to be "success or failure"
Apr 21 12:57:09.313: INFO: Pod "downwardapi-volume-591e1889-abc4-4892-8a89-7c20f154f9ef": Phase="Pending", Reason="", readiness=false. Elapsed: 3.517775ms
Apr 21 12:57:11.317: INFO: Pod "downwardapi-volume-591e1889-abc4-4892-8a89-7c20f154f9ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008249393s
Apr 21 12:57:13.322: INFO: Pod "downwardapi-volume-591e1889-abc4-4892-8a89-7c20f154f9ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012587557s
STEP: Saw pod success
Apr 21 12:57:13.322: INFO: Pod "downwardapi-volume-591e1889-abc4-4892-8a89-7c20f154f9ef" satisfied condition "success or failure"
Apr 21 12:57:13.325: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-591e1889-abc4-4892-8a89-7c20f154f9ef container client-container:
STEP: delete the pod
Apr 21 12:57:13.360: INFO: Waiting for pod downwardapi-volume-591e1889-abc4-4892-8a89-7c20f154f9ef to disappear
Apr 21 12:57:13.394: INFO: Pod downwardapi-volume-591e1889-abc4-4892-8a89-7c20f154f9ef no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 12:57:13.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9411" for this suite.
Apr 21 12:57:19.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 12:57:19.481: INFO: namespace projected-9411 deletion completed in 6.083284633s

• [SLOW TEST:10.250 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 12:57:19.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-44753948-d435-4740-87b8-e99b6faad940
STEP: Creating configMap with name cm-test-opt-upd-a9549d01-4485-4f88-8c1e-75adac89b7fe
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-44753948-d435-4740-87b8-e99b6faad940
STEP: Updating configmap cm-test-opt-upd-a9549d01-4485-4f88-8c1e-75adac89b7fe
STEP: Creating configMap with name cm-test-opt-create-f09ba81a-2595-4e9b-bf9c-4f496d524b4d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 12:58:40.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1542" for this suite.
Apr 21 12:59:02.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 12:59:02.215: INFO: namespace configmap-1542 deletion completed in 22.152087291s

• [SLOW TEST:102.733 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 12:59:02.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-d5t5
STEP: Creating a pod to test atomic-volume-subpath
Apr 21 12:59:02.286: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-d5t5" in namespace "subpath-9149" to be "success or failure"
Apr 21 12:59:02.301: INFO: Pod "pod-subpath-test-downwardapi-d5t5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.406851ms
Apr 21 12:59:04.323: INFO: Pod "pod-subpath-test-downwardapi-d5t5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037249588s
Apr 21 12:59:06.327: INFO: Pod "pod-subpath-test-downwardapi-d5t5": Phase="Running", Reason="", readiness=true. Elapsed: 4.041458622s
Apr 21 12:59:08.332: INFO: Pod "pod-subpath-test-downwardapi-d5t5": Phase="Running", Reason="", readiness=true. Elapsed: 6.045658131s
Apr 21 12:59:10.336: INFO: Pod "pod-subpath-test-downwardapi-d5t5": Phase="Running", Reason="", readiness=true. Elapsed: 8.05049026s
Apr 21 12:59:12.348: INFO: Pod "pod-subpath-test-downwardapi-d5t5": Phase="Running", Reason="", readiness=true. Elapsed: 10.06194974s
Apr 21 12:59:14.352: INFO: Pod "pod-subpath-test-downwardapi-d5t5": Phase="Running", Reason="", readiness=true. Elapsed: 12.066123202s
Apr 21 12:59:16.359: INFO: Pod "pod-subpath-test-downwardapi-d5t5": Phase="Running", Reason="", readiness=true. Elapsed: 14.072688202s
Apr 21 12:59:18.363: INFO: Pod "pod-subpath-test-downwardapi-d5t5": Phase="Running", Reason="", readiness=true. Elapsed: 16.07688047s
Apr 21 12:59:20.367: INFO: Pod "pod-subpath-test-downwardapi-d5t5": Phase="Running", Reason="", readiness=true. Elapsed: 18.080530296s
Apr 21 12:59:22.373: INFO: Pod "pod-subpath-test-downwardapi-d5t5": Phase="Running", Reason="", readiness=true. Elapsed: 20.086720479s
Apr 21 12:59:24.377: INFO: Pod "pod-subpath-test-downwardapi-d5t5": Phase="Running", Reason="", readiness=true. Elapsed: 22.091117647s
Apr 21 12:59:26.402: INFO: Pod "pod-subpath-test-downwardapi-d5t5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.116330128s
STEP: Saw pod success
Apr 21 12:59:26.402: INFO: Pod "pod-subpath-test-downwardapi-d5t5" satisfied condition "success or failure"
Apr 21 12:59:26.405: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-d5t5 container test-container-subpath-downwardapi-d5t5:
STEP: delete the pod
Apr 21 12:59:26.473: INFO: Waiting for pod pod-subpath-test-downwardapi-d5t5 to disappear
Apr 21 12:59:26.487: INFO: Pod pod-subpath-test-downwardapi-d5t5 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-d5t5
Apr 21 12:59:26.487: INFO: Deleting pod "pod-subpath-test-downwardapi-d5t5" in namespace "subpath-9149"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 12:59:26.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9149" for this suite.
Apr 21 12:59:32.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 12:59:32.575: INFO: namespace subpath-9149 deletion completed in 6.082254973s

• [SLOW TEST:30.360 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 12:59:32.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-16879361-eeed-4f1c-9c58-5923fa099fa5
STEP: Creating configMap with name cm-test-opt-upd-607c17d3-9f3d-40d7-953a-12a08c769a1b
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-16879361-eeed-4f1c-9c58-5923fa099fa5
STEP: Updating configmap cm-test-opt-upd-607c17d3-9f3d-40d7-953a-12a08c769a1b
STEP: Creating configMap with name cm-test-opt-create-762093a9-9bd3-492c-8ab9-214b67928764
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:00:51.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-324" for this suite.
Apr 21 13:01:13.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:01:13.216: INFO: namespace projected-324 deletion completed in 22.098215569s • [SLOW TEST:100.641 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:01:13.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 21 13:01:13.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 
--image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4111' Apr 21 13:01:13.382: INFO: stderr: "" Apr 21 13:01:13.382: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Apr 21 13:01:13.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4111' Apr 21 13:01:22.197: INFO: stderr: "" Apr 21 13:01:22.197: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:01:22.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4111" for this suite. Apr 21 13:01:28.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:01:28.283: INFO: namespace kubectl-4111 deletion completed in 6.081764448s • [SLOW TEST:15.065 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes 
client Apr 21 13:01:28.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:01:32.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8867" for this suite. Apr 21 13:02:14.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:02:14.481: INFO: namespace kubelet-test-8867 deletion completed in 42.0957219s • [SLOW TEST:46.199 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:02:14.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: 
Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-7383196d-838d-4b8a-9506-de97056f4854 STEP: Creating a pod to test consume secrets Apr 21 13:02:14.582: INFO: Waiting up to 5m0s for pod "pod-secrets-abcf02e4-a50a-4443-84f1-b24271cd02ce" in namespace "secrets-9094" to be "success or failure" Apr 21 13:02:14.588: INFO: Pod "pod-secrets-abcf02e4-a50a-4443-84f1-b24271cd02ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.642292ms Apr 21 13:02:16.592: INFO: Pod "pod-secrets-abcf02e4-a50a-4443-84f1-b24271cd02ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01031813s Apr 21 13:02:18.596: INFO: Pod "pod-secrets-abcf02e4-a50a-4443-84f1-b24271cd02ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014646171s STEP: Saw pod success Apr 21 13:02:18.596: INFO: Pod "pod-secrets-abcf02e4-a50a-4443-84f1-b24271cd02ce" satisfied condition "success or failure" Apr 21 13:02:18.599: INFO: Trying to get logs from node iruya-worker pod pod-secrets-abcf02e4-a50a-4443-84f1-b24271cd02ce container secret-volume-test: STEP: delete the pod Apr 21 13:02:18.623: INFO: Waiting for pod pod-secrets-abcf02e4-a50a-4443-84f1-b24271cd02ce to disappear Apr 21 13:02:18.628: INFO: Pod pod-secrets-abcf02e4-a50a-4443-84f1-b24271cd02ce no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:02:18.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9094" for this suite. 
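The secrets test above mounts a Secret as a volume with `defaultMode` set. A minimal sketch of the manifests such a test exercises (names and values here are hypothetical, not the generated ones from this run):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-secret        # hypothetical; the test generates a random name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo    # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: example-secret
      defaultMode: 0400       # the [LinuxOnly] part: file mode on the mounted keys
```

The pod runs to completion ("success or failure" in the log) and the test asserts on the file mode printed in its logs.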
Apr 21 13:02:24.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:02:24.721: INFO: namespace secrets-9094 deletion completed in 6.090198268s • [SLOW TEST:10.239 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:02:24.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-1ba84afb-5533-4bff-829c-980162ab2dcb STEP: Creating a pod to test consume secrets Apr 21 13:02:24.880: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-36c7370e-1ea3-4000-9e61-c87e5cbc8c05" in namespace "projected-9406" to be "success or failure" Apr 21 13:02:24.886: INFO: Pod "pod-projected-secrets-36c7370e-1ea3-4000-9e61-c87e5cbc8c05": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.326469ms Apr 21 13:02:26.890: INFO: Pod "pod-projected-secrets-36c7370e-1ea3-4000-9e61-c87e5cbc8c05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010351956s Apr 21 13:02:28.894: INFO: Pod "pod-projected-secrets-36c7370e-1ea3-4000-9e61-c87e5cbc8c05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014543072s STEP: Saw pod success Apr 21 13:02:28.894: INFO: Pod "pod-projected-secrets-36c7370e-1ea3-4000-9e61-c87e5cbc8c05" satisfied condition "success or failure" Apr 21 13:02:28.897: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-36c7370e-1ea3-4000-9e61-c87e5cbc8c05 container projected-secret-volume-test: STEP: delete the pod Apr 21 13:02:28.917: INFO: Waiting for pod pod-projected-secrets-36c7370e-1ea3-4000-9e61-c87e5cbc8c05 to disappear Apr 21 13:02:28.934: INFO: Pod pod-projected-secrets-36c7370e-1ea3-4000-9e61-c87e5cbc8c05 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:02:28.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9406" for this suite. 
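The projected-secret test above adds two twists over a plain secret volume: keys are remapped via `items`, and a per-item `mode` is set. A hedged fragment of the volume spec being exercised (key and path names are illustrative):

```yaml
volumes:
- name: projected-secret-volume
  projected:
    sources:
    - secret:
        name: example-secret     # hypothetical name
        items:
        - key: data-1
          path: new-path-data-1  # mapping: key is exposed under a different path
          mode: 0400             # per-item mode overrides any defaultMode
```

The container then reads `/etc/projected-secret-volume/new-path-data-1` and the test checks both the content and the mode.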
Apr 21 13:02:34.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:02:35.048: INFO: namespace projected-9406 deletion completed in 6.110781367s • [SLOW TEST:10.327 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:02:35.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 21 13:02:35.127: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7cdf3365-3872-430e-b56f-e4e33013519b" in namespace "downward-api-2004" to be "success or failure" Apr 21 13:02:35.160: INFO: Pod "downwardapi-volume-7cdf3365-3872-430e-b56f-e4e33013519b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 32.480865ms Apr 21 13:02:37.177: INFO: Pod "downwardapi-volume-7cdf3365-3872-430e-b56f-e4e33013519b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050190931s Apr 21 13:02:39.180: INFO: Pod "downwardapi-volume-7cdf3365-3872-430e-b56f-e4e33013519b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053214087s STEP: Saw pod success Apr 21 13:02:39.180: INFO: Pod "downwardapi-volume-7cdf3365-3872-430e-b56f-e4e33013519b" satisfied condition "success or failure" Apr 21 13:02:39.182: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-7cdf3365-3872-430e-b56f-e4e33013519b container client-container: STEP: delete the pod Apr 21 13:02:39.198: INFO: Waiting for pod downwardapi-volume-7cdf3365-3872-430e-b56f-e4e33013519b to disappear Apr 21 13:02:39.255: INFO: Pod downwardapi-volume-7cdf3365-3872-430e-b56f-e4e33013519b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:02:39.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2004" for this suite. 
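The Downward API test above surfaces a container's own memory request as a file in a volume. The mechanism is a `downwardAPI` volume item with a `resourceFieldRef`; a minimal sketch (container name and divisor are illustrative):

```yaml
volumes:
- name: podinfo
  downwardAPI:
    items:
    - path: memory_request
      resourceFieldRef:
        containerName: client-container   # must name a container in the same pod
        resource: requests.memory
        divisor: 1Mi                      # value is written in units of the divisor
```

The test pod cats the resulting file and the framework compares the printed number against the request declared in the pod spec.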
Apr 21 13:02:45.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:02:45.352: INFO: namespace downward-api-2004 deletion completed in 6.093510899s • [SLOW TEST:10.303 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:02:45.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Apr 21 13:02:45.406: INFO: namespace kubectl-4098 Apr 21 13:02:45.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4098' Apr 21 13:02:45.658: INFO: stderr: "" Apr 21 13:02:45.658: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Apr 21 13:02:46.662: INFO: Selector matched 1 pods for map[app:redis] Apr 21 13:02:46.662: INFO: Found 0 / 1 Apr 21 13:02:47.662: INFO: Selector matched 1 pods for map[app:redis] Apr 21 13:02:47.662: INFO: Found 0 / 1 Apr 21 13:02:48.668: INFO: Selector matched 1 pods for map[app:redis] Apr 21 13:02:48.668: INFO: Found 1 / 1 Apr 21 13:02:48.668: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 21 13:02:48.672: INFO: Selector matched 1 pods for map[app:redis] Apr 21 13:02:48.672: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 21 13:02:48.672: INFO: wait on redis-master startup in kubectl-4098 Apr 21 13:02:48.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8vkl9 redis-master --namespace=kubectl-4098' Apr 21 13:02:48.772: INFO: stderr: "" Apr 21 13:02:48.772: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 21 Apr 13:02:48.080 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 21 Apr 13:02:48.080 # Server started, Redis version 3.2.12\n1:M 21 Apr 13:02:48.080 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 21 Apr 13:02:48.080 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Apr 21 13:02:48.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4098' Apr 21 13:02:48.918: INFO: stderr: "" Apr 21 13:02:48.918: INFO: stdout: "service/rm2 exposed\n" Apr 21 13:02:48.928: INFO: Service rm2 in namespace kubectl-4098 found. STEP: exposing service Apr 21 13:02:50.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4098' Apr 21 13:02:51.072: INFO: stderr: "" Apr 21 13:02:51.072: INFO: stdout: "service/rm3 exposed\n" Apr 21 13:02:51.078: INFO: Service rm3 in namespace kubectl-4098 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:02:53.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4098" for this suite. 
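The two `kubectl expose` invocations above generate Service objects against the RC's selector. Roughly, `expose rc redis-master --name=rm2 --port=1234 --target-port=6379` is equivalent to applying a manifest like this (the selector is assumed to match the RC's pod labels; the log shows `app: redis`):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-4098
spec:
  selector:
    app: redis          # copied from the RC's selector by kubectl expose
  ports:
  - port: 1234          # service port
    targetPort: 6379    # container port on the Redis pods
```

The second step (`expose service rm2 --name=rm3 --port=2345`) does the same again, deriving `rm3` from `rm2`'s selector, which is why both services end up routing to the same Redis pod.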
Apr 21 13:03:17.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:03:17.200: INFO: namespace kubectl-4098 deletion completed in 24.112046953s • [SLOW TEST:31.848 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:03:17.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-1ce0fce6-a9e3-4bcb-a461-bea8351a6b80 STEP: Creating a pod to test consume secrets Apr 21 13:03:17.265: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dc6ff69e-a207-478f-93f8-c644708eb80e" in namespace "projected-1786" to be "success or failure" Apr 21 13:03:17.282: INFO: Pod 
"pod-projected-secrets-dc6ff69e-a207-478f-93f8-c644708eb80e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.143596ms Apr 21 13:03:19.286: INFO: Pod "pod-projected-secrets-dc6ff69e-a207-478f-93f8-c644708eb80e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021296074s Apr 21 13:03:21.290: INFO: Pod "pod-projected-secrets-dc6ff69e-a207-478f-93f8-c644708eb80e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024918482s Apr 21 13:03:23.294: INFO: Pod "pod-projected-secrets-dc6ff69e-a207-478f-93f8-c644708eb80e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028807464s STEP: Saw pod success Apr 21 13:03:23.294: INFO: Pod "pod-projected-secrets-dc6ff69e-a207-478f-93f8-c644708eb80e" satisfied condition "success or failure" Apr 21 13:03:23.296: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-dc6ff69e-a207-478f-93f8-c644708eb80e container projected-secret-volume-test: STEP: delete the pod Apr 21 13:03:23.324: INFO: Waiting for pod pod-projected-secrets-dc6ff69e-a207-478f-93f8-c644708eb80e to disappear Apr 21 13:03:23.336: INFO: Pod pod-projected-secrets-dc6ff69e-a207-478f-93f8-c644708eb80e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:03:23.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1786" for this suite. 
Apr 21 13:03:29.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:03:29.447: INFO: namespace projected-1786 deletion completed in 6.107775163s • [SLOW TEST:12.246 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:03:29.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0421 13:04:10.144538 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 21 13:04:10.144: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:04:10.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4163" for this suite. 
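The garbage-collector test above deletes an RC with delete options that say "orphan", then waits 30 seconds to confirm the GC does not reap the RC's pods. At the API level, orphaning is requested by sending a `DeleteOptions` body with the delete call:

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Orphan"
}
```

With this policy the apiserver strips the pods' `ownerReferences` instead of cascading the delete, so the pods survive their controller. (From the CLI of this vintage, `kubectl delete rc <name> --cascade=false` requests the same behavior.)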
Apr 21 13:04:20.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:04:20.259: INFO: namespace gc-4163 deletion completed in 10.111880295s • [SLOW TEST:50.812 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:04:20.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 21 13:04:23.372: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 
21 13:04:23.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3415" for this suite. Apr 21 13:04:29.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:04:29.537: INFO: namespace container-runtime-3415 deletion completed in 6.095645346s • [SLOW TEST:9.278 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:04:29.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod 
pod-subpath-test-projected-5h82 STEP: Creating a pod to test atomic-volume-subpath Apr 21 13:04:29.614: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-5h82" in namespace "subpath-7257" to be "success or failure" Apr 21 13:04:29.631: INFO: Pod "pod-subpath-test-projected-5h82": Phase="Pending", Reason="", readiness=false. Elapsed: 16.561118ms Apr 21 13:04:31.730: INFO: Pod "pod-subpath-test-projected-5h82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115912673s Apr 21 13:04:33.733: INFO: Pod "pod-subpath-test-projected-5h82": Phase="Running", Reason="", readiness=true. Elapsed: 4.119233202s Apr 21 13:04:35.738: INFO: Pod "pod-subpath-test-projected-5h82": Phase="Running", Reason="", readiness=true. Elapsed: 6.12382687s Apr 21 13:04:37.742: INFO: Pod "pod-subpath-test-projected-5h82": Phase="Running", Reason="", readiness=true. Elapsed: 8.127819688s Apr 21 13:04:39.749: INFO: Pod "pod-subpath-test-projected-5h82": Phase="Running", Reason="", readiness=true. Elapsed: 10.134794144s Apr 21 13:04:41.753: INFO: Pod "pod-subpath-test-projected-5h82": Phase="Running", Reason="", readiness=true. Elapsed: 12.138841066s Apr 21 13:04:43.758: INFO: Pod "pod-subpath-test-projected-5h82": Phase="Running", Reason="", readiness=true. Elapsed: 14.143818967s Apr 21 13:04:45.762: INFO: Pod "pod-subpath-test-projected-5h82": Phase="Running", Reason="", readiness=true. Elapsed: 16.148249834s Apr 21 13:04:47.766: INFO: Pod "pod-subpath-test-projected-5h82": Phase="Running", Reason="", readiness=true. Elapsed: 18.152355887s Apr 21 13:04:49.770: INFO: Pod "pod-subpath-test-projected-5h82": Phase="Running", Reason="", readiness=true. Elapsed: 20.155972899s Apr 21 13:04:51.774: INFO: Pod "pod-subpath-test-projected-5h82": Phase="Running", Reason="", readiness=true. Elapsed: 22.16039979s Apr 21 13:04:53.779: INFO: Pod "pod-subpath-test-projected-5h82": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.164954409s STEP: Saw pod success Apr 21 13:04:53.779: INFO: Pod "pod-subpath-test-projected-5h82" satisfied condition "success or failure" Apr 21 13:04:53.782: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-5h82 container test-container-subpath-projected-5h82: STEP: delete the pod Apr 21 13:04:53.831: INFO: Waiting for pod pod-subpath-test-projected-5h82 to disappear Apr 21 13:04:53.861: INFO: Pod pod-subpath-test-projected-5h82 no longer exists STEP: Deleting pod pod-subpath-test-projected-5h82 Apr 21 13:04:53.861: INFO: Deleting pod "pod-subpath-test-projected-5h82" in namespace "subpath-7257" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:04:53.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7257" for this suite. Apr 21 13:04:59.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:04:59.955: INFO: namespace subpath-7257 deletion completed in 6.089458144s • [SLOW TEST:30.418 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:04:59.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-4654 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 21 13:04:59.986: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 21 13:05:22.093: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.136 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4654 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 21 13:05:22.093: INFO: >>> kubeConfig: /root/.kube/config I0421 13:05:22.125733 6 log.go:172] (0xc000a14f20) (0xc001685ae0) Create stream I0421 13:05:22.125770 6 log.go:172] (0xc000a14f20) (0xc001685ae0) Stream added, broadcasting: 1 I0421 13:05:22.127600 6 log.go:172] (0xc000a14f20) Reply frame received for 1 I0421 13:05:22.127636 6 log.go:172] (0xc000a14f20) (0xc0027a03c0) Create stream I0421 13:05:22.127647 6 log.go:172] (0xc000a14f20) (0xc0027a03c0) Stream added, broadcasting: 3 I0421 13:05:22.128510 6 log.go:172] (0xc000a14f20) Reply frame received for 3 I0421 13:05:22.128530 6 log.go:172] (0xc000a14f20) (0xc0027a0460) Create stream I0421 13:05:22.128537 6 log.go:172] (0xc000a14f20) (0xc0027a0460) Stream added, broadcasting: 5 I0421 13:05:22.129412 6 log.go:172] (0xc000a14f20) Reply frame received for 5 I0421 13:05:23.226718 6 log.go:172] (0xc000a14f20) Data frame received for 3 I0421 
13:05:23.226744 6 log.go:172] (0xc0027a03c0) (3) Data frame handling I0421 13:05:23.226752 6 log.go:172] (0xc0027a03c0) (3) Data frame sent I0421 13:05:23.226757 6 log.go:172] (0xc000a14f20) Data frame received for 3 I0421 13:05:23.226778 6 log.go:172] (0xc000a14f20) Data frame received for 5 I0421 13:05:23.226814 6 log.go:172] (0xc0027a0460) (5) Data frame handling I0421 13:05:23.226850 6 log.go:172] (0xc0027a03c0) (3) Data frame handling I0421 13:05:23.228730 6 log.go:172] (0xc000a14f20) Data frame received for 1 I0421 13:05:23.228743 6 log.go:172] (0xc001685ae0) (1) Data frame handling I0421 13:05:23.228749 6 log.go:172] (0xc001685ae0) (1) Data frame sent I0421 13:05:23.228836 6 log.go:172] (0xc000a14f20) (0xc001685ae0) Stream removed, broadcasting: 1 I0421 13:05:23.228853 6 log.go:172] (0xc000a14f20) Go away received I0421 13:05:23.229644 6 log.go:172] (0xc000a14f20) (0xc001685ae0) Stream removed, broadcasting: 1 I0421 13:05:23.229681 6 log.go:172] (0xc000a14f20) (0xc0027a03c0) Stream removed, broadcasting: 3 I0421 13:05:23.229696 6 log.go:172] (0xc000a14f20) (0xc0027a0460) Stream removed, broadcasting: 5 Apr 21 13:05:23.229: INFO: Found all expected endpoints: [netserver-0] Apr 21 13:05:23.232: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.12 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4654 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 21 13:05:23.232: INFO: >>> kubeConfig: /root/.kube/config I0421 13:05:23.263070 6 log.go:172] (0xc000a15ad0) (0xc001685cc0) Create stream I0421 13:05:23.263098 6 log.go:172] (0xc000a15ad0) (0xc001685cc0) Stream added, broadcasting: 1 I0421 13:05:23.264857 6 log.go:172] (0xc000a15ad0) Reply frame received for 1 I0421 13:05:23.264890 6 log.go:172] (0xc000a15ad0) (0xc0016968c0) Create stream I0421 13:05:23.264900 6 log.go:172] (0xc000a15ad0) (0xc0016968c0) Stream added, broadcasting: 3 I0421 13:05:23.265796 6 
log.go:172] (0xc000a15ad0) Reply frame received for 3 I0421 13:05:23.265821 6 log.go:172] (0xc000a15ad0) (0xc001a0a3c0) Create stream I0421 13:05:23.265830 6 log.go:172] (0xc000a15ad0) (0xc001a0a3c0) Stream added, broadcasting: 5 I0421 13:05:23.266569 6 log.go:172] (0xc000a15ad0) Reply frame received for 5 I0421 13:05:24.365035 6 log.go:172] (0xc000a15ad0) Data frame received for 5 I0421 13:05:24.365093 6 log.go:172] (0xc001a0a3c0) (5) Data frame handling I0421 13:05:24.365302 6 log.go:172] (0xc000a15ad0) Data frame received for 3 I0421 13:05:24.365340 6 log.go:172] (0xc0016968c0) (3) Data frame handling I0421 13:05:24.365374 6 log.go:172] (0xc0016968c0) (3) Data frame sent I0421 13:05:24.365402 6 log.go:172] (0xc000a15ad0) Data frame received for 3 I0421 13:05:24.365412 6 log.go:172] (0xc0016968c0) (3) Data frame handling I0421 13:05:24.367305 6 log.go:172] (0xc000a15ad0) Data frame received for 1 I0421 13:05:24.367341 6 log.go:172] (0xc001685cc0) (1) Data frame handling I0421 13:05:24.367358 6 log.go:172] (0xc001685cc0) (1) Data frame sent I0421 13:05:24.367394 6 log.go:172] (0xc000a15ad0) (0xc001685cc0) Stream removed, broadcasting: 1 I0421 13:05:24.367423 6 log.go:172] (0xc000a15ad0) Go away received I0421 13:05:24.367531 6 log.go:172] (0xc000a15ad0) (0xc001685cc0) Stream removed, broadcasting: 1 I0421 13:05:24.367565 6 log.go:172] (0xc000a15ad0) (0xc0016968c0) Stream removed, broadcasting: 3 I0421 13:05:24.367577 6 log.go:172] (0xc000a15ad0) (0xc001a0a3c0) Stream removed, broadcasting: 5 Apr 21 13:05:24.367: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:05:24.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4654" for this suite. 
Apr 21 13:05:48.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:05:48.457: INFO: namespace pod-network-test-4654 deletion completed in 24.084825807s
• [SLOW TEST:48.501 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:05:48.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed. 
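The Namespaces test above asserts a cascade: deleting a namespace removes every pod scoped under it, and a recreated namespace of the same name starts empty. A toy in-memory model of that invariant (this is an illustration only, not the real Kubernetes namespace controller or garbage collector):

```python
# Toy model of the invariant checked by the Namespaces [Serial] test.
class Cluster:
    def __init__(self):
        self.namespaces = set()
        self.pods = {}  # pod name -> namespace

    def create_namespace(self, name):
        self.namespaces.add(name)

    def create_pod(self, pod, namespace):
        if namespace not in self.namespaces:
            raise ValueError(f"namespace {namespace!r} not found")
        self.pods[pod] = namespace

    def delete_namespace(self, name):
        # Namespace deletion cascades to everything scoped under it.
        self.namespaces.discard(name)
        self.pods = {p: ns for p, ns in self.pods.items() if ns != name}

    def pods_in(self, name):
        return [p for p, ns in self.pods.items() if ns == name]

cluster = Cluster()
cluster.create_namespace("nsdeletetest")
cluster.create_pod("test-pod", "nsdeletetest")
cluster.delete_namespace("nsdeletetest")
cluster.create_namespace("nsdeletetest")  # recreate under the same name
print(cluster.pods_in("nsdeletetest"))    # → []
```

The pod and namespace names are hypothetical stand-ins for the generated `nsdeletetest-*` namespaces seen in the log.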
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:06:14.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2928" for this suite. Apr 21 13:06:20.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:06:20.817: INFO: namespace namespaces-2928 deletion completed in 6.086847407s STEP: Destroying namespace "nsdeletetest-4058" for this suite. Apr 21 13:06:20.819: INFO: Namespace nsdeletetest-4058 was already deleted STEP: Destroying namespace "nsdeletetest-4937" for this suite. Apr 21 13:06:26.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:06:26.923: INFO: namespace nsdeletetest-4937 deletion completed in 6.103969886s • [SLOW TEST:38.466 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:06:26.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be 
provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 21 13:06:26.956: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 21 13:06:29.063: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:06:30.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1720" for this suite. Apr 21 13:06:36.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:06:36.485: INFO: namespace replication-controller-1720 deletion completed in 6.309754686s • [SLOW TEST:9.561 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 
13:06:36.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 21 13:06:36.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6162' Apr 21 13:06:38.824: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 21 13:06:38.824: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Apr 21 13:06:38.848: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Apr 21 13:06:38.868: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Apr 21 13:06:38.887: INFO: scanned /root for discovery docs: Apr 21 13:06:38.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-6162' Apr 21 13:06:54.862: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 21 13:06:54.863: INFO: stdout: "Created e2e-test-nginx-rc-589dfb579ae6098decc82ca2c6eb658f\nScaling up e2e-test-nginx-rc-589dfb579ae6098decc82ca2c6eb658f from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-589dfb579ae6098decc82ca2c6eb658f up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-589dfb579ae6098decc82ca2c6eb658f to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Apr 21 13:06:54.863: INFO: stdout: "Created e2e-test-nginx-rc-589dfb579ae6098decc82ca2c6eb658f\nScaling up e2e-test-nginx-rc-589dfb579ae6098decc82ca2c6eb658f from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-589dfb579ae6098decc82ca2c6eb658f up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-589dfb579ae6098decc82ca2c6eb658f to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Apr 21 13:06:54.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6162' Apr 21 13:06:54.952: INFO: stderr: "" Apr 21 13:06:54.952: INFO: stdout: "e2e-test-nginx-rc-589dfb579ae6098decc82ca2c6eb658f-rtsp7 " Apr 21 13:06:54.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-589dfb579ae6098decc82ca2c6eb658f-rtsp7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6162' Apr 21 13:06:55.041: INFO: stderr: "" Apr 21 13:06:55.041: INFO: stdout: "true" Apr 21 13:06:55.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-589dfb579ae6098decc82ca2c6eb658f-rtsp7 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6162' Apr 21 13:06:55.133: INFO: stderr: "" Apr 21 13:06:55.133: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Apr 21 13:06:55.133: INFO: e2e-test-nginx-rc-589dfb579ae6098decc82ca2c6eb658f-rtsp7 is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Apr 21 13:06:55.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6162' Apr 21 13:06:55.265: INFO: stderr: "" Apr 21 13:06:55.265: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:06:55.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6162" for this suite. 
Apr 21 13:07:17.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:07:17.369: INFO: namespace kubectl-6162 deletion completed in 22.092923089s • [SLOW TEST:40.884 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:07:17.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 21 13:07:17.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6462' Apr 21 13:07:17.505: INFO: stderr: 
"kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 21 13:07:17.505: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Apr 21 13:07:17.563: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-fcxvg] Apr 21 13:07:17.563: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-fcxvg" in namespace "kubectl-6462" to be "running and ready" Apr 21 13:07:17.575: INFO: Pod "e2e-test-nginx-rc-fcxvg": Phase="Pending", Reason="", readiness=false. Elapsed: 12.183552ms Apr 21 13:07:19.654: INFO: Pod "e2e-test-nginx-rc-fcxvg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091485388s Apr 21 13:07:21.659: INFO: Pod "e2e-test-nginx-rc-fcxvg": Phase="Running", Reason="", readiness=true. Elapsed: 4.096210981s Apr 21 13:07:21.659: INFO: Pod "e2e-test-nginx-rc-fcxvg" satisfied condition "running and ready" Apr 21 13:07:21.659: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-fcxvg] Apr 21 13:07:21.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-6462' Apr 21 13:07:21.777: INFO: stderr: "" Apr 21 13:07:21.777: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Apr 21 13:07:21.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6462' Apr 21 13:07:21.877: INFO: stderr: "" Apr 21 13:07:21.877: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:07:21.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6462" for this suite. Apr 21 13:07:43.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:07:43.972: INFO: namespace kubectl-6462 deletion completed in 22.092141788s • [SLOW TEST:26.603 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:07:43.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-8555 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-8555 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8555 Apr 21 13:07:44.047: INFO: Found 0 stateful pods, waiting for 1 Apr 21 13:07:54.051: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 21 13:07:54.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8555 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 21 13:07:54.321: INFO: stderr: "I0421 13:07:54.191364 658 log.go:172] (0xc00013b080) (0xc00051cc80) Create stream\nI0421 13:07:54.191439 658 log.go:172] (0xc00013b080) (0xc00051cc80) Stream added, broadcasting: 1\nI0421 13:07:54.194364 658 log.go:172] (0xc00013b080) Reply frame received for 1\nI0421 13:07:54.194402 658 log.go:172] (0xc00013b080) (0xc000958000) Create stream\nI0421 13:07:54.194419 658 log.go:172] (0xc00013b080) (0xc000958000) Stream added, broadcasting: 3\nI0421 13:07:54.195497 658 
log.go:172] (0xc00013b080) Reply frame received for 3\nI0421 13:07:54.195534 658 log.go:172] (0xc00013b080) (0xc0009580a0) Create stream\nI0421 13:07:54.195547 658 log.go:172] (0xc00013b080) (0xc0009580a0) Stream added, broadcasting: 5\nI0421 13:07:54.196799 658 log.go:172] (0xc00013b080) Reply frame received for 5\nI0421 13:07:54.286016 658 log.go:172] (0xc00013b080) Data frame received for 5\nI0421 13:07:54.286065 658 log.go:172] (0xc0009580a0) (5) Data frame handling\nI0421 13:07:54.286087 658 log.go:172] (0xc0009580a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0421 13:07:54.312800 658 log.go:172] (0xc00013b080) Data frame received for 3\nI0421 13:07:54.312832 658 log.go:172] (0xc000958000) (3) Data frame handling\nI0421 13:07:54.312877 658 log.go:172] (0xc000958000) (3) Data frame sent\nI0421 13:07:54.313624 658 log.go:172] (0xc00013b080) Data frame received for 5\nI0421 13:07:54.313663 658 log.go:172] (0xc0009580a0) (5) Data frame handling\nI0421 13:07:54.313700 658 log.go:172] (0xc00013b080) Data frame received for 3\nI0421 13:07:54.313729 658 log.go:172] (0xc000958000) (3) Data frame handling\nI0421 13:07:54.315575 658 log.go:172] (0xc00013b080) Data frame received for 1\nI0421 13:07:54.315598 658 log.go:172] (0xc00051cc80) (1) Data frame handling\nI0421 13:07:54.315619 658 log.go:172] (0xc00051cc80) (1) Data frame sent\nI0421 13:07:54.315637 658 log.go:172] (0xc00013b080) (0xc00051cc80) Stream removed, broadcasting: 1\nI0421 13:07:54.315657 658 log.go:172] (0xc00013b080) Go away received\nI0421 13:07:54.316161 658 log.go:172] (0xc00013b080) (0xc00051cc80) Stream removed, broadcasting: 1\nI0421 13:07:54.316196 658 log.go:172] (0xc00013b080) (0xc000958000) Stream removed, broadcasting: 3\nI0421 13:07:54.316208 658 log.go:172] (0xc00013b080) (0xc0009580a0) Stream removed, broadcasting: 5\n" Apr 21 13:07:54.321: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 21 13:07:54.321: INFO: stdout of mv -v 
/usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 21 13:07:54.326: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 21 13:08:04.330: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 21 13:08:04.330: INFO: Waiting for statefulset status.replicas updated to 0 Apr 21 13:08:04.349: INFO: POD NODE PHASE GRACE CONDITIONS Apr 21 13:08:04.349: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:07:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:07:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:07:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:07:44 +0000 UTC }] Apr 21 13:08:04.349: INFO: Apr 21 13:08:04.349: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 21 13:08:05.354: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991849542s Apr 21 13:08:06.392: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986579334s Apr 21 13:08:07.396: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.949432634s Apr 21 13:08:08.401: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.944725978s Apr 21 13:08:09.407: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.939522695s Apr 21 13:08:10.411: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.934139254s Apr 21 13:08:11.416: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.929807566s Apr 21 13:08:12.421: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.924956259s Apr 21 13:08:13.426: INFO: Verifying statefulset ss doesn't scale past 3 for another 920.375443ms STEP: Scaling up stateful set ss to 3 replicas and 
waiting until all of them will be running in namespace statefulset-8555 Apr 21 13:08:14.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8555 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 21 13:08:14.668: INFO: stderr: "I0421 13:08:14.567044 679 log.go:172] (0xc00094a420) (0xc0006048c0) Create stream\nI0421 13:08:14.567107 679 log.go:172] (0xc00094a420) (0xc0006048c0) Stream added, broadcasting: 1\nI0421 13:08:14.569943 679 log.go:172] (0xc00094a420) Reply frame received for 1\nI0421 13:08:14.569996 679 log.go:172] (0xc00094a420) (0xc000604960) Create stream\nI0421 13:08:14.570012 679 log.go:172] (0xc00094a420) (0xc000604960) Stream added, broadcasting: 3\nI0421 13:08:14.570938 679 log.go:172] (0xc00094a420) Reply frame received for 3\nI0421 13:08:14.570966 679 log.go:172] (0xc00094a420) (0xc0009c6000) Create stream\nI0421 13:08:14.570978 679 log.go:172] (0xc00094a420) (0xc0009c6000) Stream added, broadcasting: 5\nI0421 13:08:14.571746 679 log.go:172] (0xc00094a420) Reply frame received for 5\nI0421 13:08:14.661433 679 log.go:172] (0xc00094a420) Data frame received for 5\nI0421 13:08:14.661468 679 log.go:172] (0xc0009c6000) (5) Data frame handling\nI0421 13:08:14.661480 679 log.go:172] (0xc0009c6000) (5) Data frame sent\nI0421 13:08:14.661488 679 log.go:172] (0xc00094a420) Data frame received for 5\nI0421 13:08:14.661494 679 log.go:172] (0xc0009c6000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0421 13:08:14.661522 679 log.go:172] (0xc00094a420) Data frame received for 3\nI0421 13:08:14.661532 679 log.go:172] (0xc000604960) (3) Data frame handling\nI0421 13:08:14.661540 679 log.go:172] (0xc000604960) (3) Data frame sent\nI0421 13:08:14.661548 679 log.go:172] (0xc00094a420) Data frame received for 3\nI0421 13:08:14.661555 679 log.go:172] (0xc000604960) (3) Data frame handling\nI0421 13:08:14.662820 679 log.go:172] (0xc00094a420) Data frame received 
for 1\nI0421 13:08:14.662849 679 log.go:172] (0xc0006048c0) (1) Data frame handling\nI0421 13:08:14.662865 679 log.go:172] (0xc0006048c0) (1) Data frame sent\nI0421 13:08:14.662888 679 log.go:172] (0xc00094a420) (0xc0006048c0) Stream removed, broadcasting: 1\nI0421 13:08:14.662910 679 log.go:172] (0xc00094a420) Go away received\nI0421 13:08:14.663388 679 log.go:172] (0xc00094a420) (0xc0006048c0) Stream removed, broadcasting: 1\nI0421 13:08:14.663413 679 log.go:172] (0xc00094a420) (0xc000604960) Stream removed, broadcasting: 3\nI0421 13:08:14.663424 679 log.go:172] (0xc00094a420) (0xc0009c6000) Stream removed, broadcasting: 5\n" Apr 21 13:08:14.668: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 21 13:08:14.668: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 21 13:08:14.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8555 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 21 13:08:14.913: INFO: stderr: "I0421 13:08:14.815608 700 log.go:172] (0xc00093e420) (0xc0002de6e0) Create stream\nI0421 13:08:14.815691 700 log.go:172] (0xc00093e420) (0xc0002de6e0) Stream added, broadcasting: 1\nI0421 13:08:14.820858 700 log.go:172] (0xc00093e420) Reply frame received for 1\nI0421 13:08:14.821018 700 log.go:172] (0xc00093e420) (0xc00083a000) Create stream\nI0421 13:08:14.821311 700 log.go:172] (0xc00093e420) (0xc00083a000) Stream added, broadcasting: 3\nI0421 13:08:14.825290 700 log.go:172] (0xc00093e420) Reply frame received for 3\nI0421 13:08:14.825317 700 log.go:172] (0xc00093e420) (0xc0005cc460) Create stream\nI0421 13:08:14.825326 700 log.go:172] (0xc00093e420) (0xc0005cc460) Stream added, broadcasting: 5\nI0421 13:08:14.825954 700 log.go:172] (0xc00093e420) Reply frame received for 5\nI0421 13:08:14.906853 700 log.go:172] (0xc00093e420) Data frame received for 3\nI0421 
13:08:14.906885 700 log.go:172] (0xc00083a000) (3) Data frame handling\nI0421 13:08:14.906914 700 log.go:172] (0xc00083a000) (3) Data frame sent\nI0421 13:08:14.906927 700 log.go:172] (0xc00093e420) Data frame received for 3\nI0421 13:08:14.906939 700 log.go:172] (0xc00083a000) (3) Data frame handling\nI0421 13:08:14.907156 700 log.go:172] (0xc00093e420) Data frame received for 5\nI0421 13:08:14.907174 700 log.go:172] (0xc0005cc460) (5) Data frame handling\nI0421 13:08:14.907188 700 log.go:172] (0xc0005cc460) (5) Data frame sent\nI0421 13:08:14.907195 700 log.go:172] (0xc00093e420) Data frame received for 5\nI0421 13:08:14.907200 700 log.go:172] (0xc0005cc460) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0421 13:08:14.908737 700 log.go:172] (0xc00093e420) Data frame received for 1\nI0421 13:08:14.908810 700 log.go:172] (0xc0002de6e0) (1) Data frame handling\nI0421 13:08:14.908868 700 log.go:172] (0xc0002de6e0) (1) Data frame sent\nI0421 13:08:14.908922 700 log.go:172] (0xc00093e420) (0xc0002de6e0) Stream removed, broadcasting: 1\nI0421 13:08:14.909717 700 log.go:172] (0xc00093e420) Go away received\nI0421 13:08:14.909923 700 log.go:172] (0xc00093e420) (0xc0002de6e0) Stream removed, broadcasting: 1\nI0421 13:08:14.909962 700 log.go:172] (0xc00093e420) (0xc00083a000) Stream removed, broadcasting: 3\nI0421 13:08:14.910022 700 log.go:172] (0xc00093e420) (0xc0005cc460) Stream removed, broadcasting: 5\n" Apr 21 13:08:14.913: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 21 13:08:14.913: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 21 13:08:14.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8555 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 21 13:08:15.137: INFO: 
stderr: "I0421 13:08:15.053329 720 log.go:172] (0xc000116fd0) (0xc0006d4a00) Create stream\nI0421 13:08:15.053387 720 log.go:172] (0xc000116fd0) (0xc0006d4a00) Stream added, broadcasting: 1\nI0421 13:08:15.056371 720 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0421 13:08:15.056438 720 log.go:172] (0xc000116fd0) (0xc0008b8000) Create stream\nI0421 13:08:15.056469 720 log.go:172] (0xc000116fd0) (0xc0008b8000) Stream added, broadcasting: 3\nI0421 13:08:15.057645 720 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0421 13:08:15.057684 720 log.go:172] (0xc000116fd0) (0xc0006d4aa0) Create stream\nI0421 13:08:15.057696 720 log.go:172] (0xc000116fd0) (0xc0006d4aa0) Stream added, broadcasting: 5\nI0421 13:08:15.058705 720 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0421 13:08:15.128630 720 log.go:172] (0xc000116fd0) Data frame received for 5\nI0421 13:08:15.128657 720 log.go:172] (0xc0006d4aa0) (5) Data frame handling\nI0421 13:08:15.128665 720 log.go:172] (0xc0006d4aa0) (5) Data frame sent\nI0421 13:08:15.128672 720 log.go:172] (0xc000116fd0) Data frame received for 5\nI0421 13:08:15.128678 720 log.go:172] (0xc0006d4aa0) (5) Data frame handling\nI0421 13:08:15.128688 720 log.go:172] (0xc000116fd0) Data frame received for 3\nI0421 13:08:15.128697 720 log.go:172] (0xc0008b8000) (3) Data frame handling\nI0421 13:08:15.128704 720 log.go:172] (0xc0008b8000) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0421 13:08:15.128710 720 log.go:172] (0xc000116fd0) Data frame received for 3\nI0421 13:08:15.128800 720 log.go:172] (0xc0008b8000) (3) Data frame handling\nI0421 13:08:15.132136 720 log.go:172] (0xc000116fd0) Data frame received for 1\nI0421 13:08:15.132163 720 log.go:172] (0xc0006d4a00) (1) Data frame handling\nI0421 13:08:15.132186 720 log.go:172] (0xc0006d4a00) (1) Data frame sent\nI0421 13:08:15.132206 720 log.go:172] (0xc000116fd0) (0xc0006d4a00) 
Stream removed, broadcasting: 1\nI0421 13:08:15.132237 720 log.go:172] (0xc000116fd0) Go away received\nI0421 13:08:15.132744 720 log.go:172] (0xc000116fd0) (0xc0006d4a00) Stream removed, broadcasting: 1\nI0421 13:08:15.132772 720 log.go:172] (0xc000116fd0) (0xc0008b8000) Stream removed, broadcasting: 3\nI0421 13:08:15.132783 720 log.go:172] (0xc000116fd0) (0xc0006d4aa0) Stream removed, broadcasting: 5\n" Apr 21 13:08:15.138: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 21 13:08:15.138: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 21 13:08:15.141: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Apr 21 13:08:25.147: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 21 13:08:25.147: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 21 13:08:25.147: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 21 13:08:25.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8555 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 21 13:08:25.331: INFO: stderr: "I0421 13:08:25.271733 741 log.go:172] (0xc00097a420) (0xc000444820) Create stream\nI0421 13:08:25.271778 741 log.go:172] (0xc00097a420) (0xc000444820) Stream added, broadcasting: 1\nI0421 13:08:25.274858 741 log.go:172] (0xc00097a420) Reply frame received for 1\nI0421 13:08:25.274922 741 log.go:172] (0xc00097a420) (0xc000444000) Create stream\nI0421 13:08:25.274942 741 log.go:172] (0xc00097a420) (0xc000444000) Stream added, broadcasting: 3\nI0421 13:08:25.276147 741 log.go:172] (0xc00097a420) Reply frame received for 3\nI0421 13:08:25.276215 741 log.go:172] (0xc00097a420) (0xc0006e4140) Create 
stream\nI0421 13:08:25.276247 741 log.go:172] (0xc00097a420) (0xc0006e4140) Stream added, broadcasting: 5\nI0421 13:08:25.277502 741 log.go:172] (0xc00097a420) Reply frame received for 5\nI0421 13:08:25.324656 741 log.go:172] (0xc00097a420) Data frame received for 3\nI0421 13:08:25.324696 741 log.go:172] (0xc000444000) (3) Data frame handling\nI0421 13:08:25.324709 741 log.go:172] (0xc000444000) (3) Data frame sent\nI0421 13:08:25.324717 741 log.go:172] (0xc00097a420) Data frame received for 3\nI0421 13:08:25.324723 741 log.go:172] (0xc000444000) (3) Data frame handling\nI0421 13:08:25.324754 741 log.go:172] (0xc00097a420) Data frame received for 5\nI0421 13:08:25.324782 741 log.go:172] (0xc0006e4140) (5) Data frame handling\nI0421 13:08:25.324798 741 log.go:172] (0xc0006e4140) (5) Data frame sent\nI0421 13:08:25.324819 741 log.go:172] (0xc00097a420) Data frame received for 5\nI0421 13:08:25.324833 741 log.go:172] (0xc0006e4140) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0421 13:08:25.326740 741 log.go:172] (0xc00097a420) Data frame received for 1\nI0421 13:08:25.326761 741 log.go:172] (0xc000444820) (1) Data frame handling\nI0421 13:08:25.326781 741 log.go:172] (0xc000444820) (1) Data frame sent\nI0421 13:08:25.326809 741 log.go:172] (0xc00097a420) (0xc000444820) Stream removed, broadcasting: 1\nI0421 13:08:25.327060 741 log.go:172] (0xc00097a420) (0xc000444820) Stream removed, broadcasting: 1\nI0421 13:08:25.327077 741 log.go:172] (0xc00097a420) (0xc000444000) Stream removed, broadcasting: 3\nI0421 13:08:25.327089 741 log.go:172] (0xc00097a420) (0xc0006e4140) Stream removed, broadcasting: 5\nI0421 13:08:25.327120 741 log.go:172] (0xc00097a420) Go away received\n" Apr 21 13:08:25.331: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 21 13:08:25.331: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 21 13:08:25.331: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8555 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 21 13:08:25.555: INFO: stderr: "I0421 13:08:25.458614 763 log.go:172] (0xc000130dc0) (0xc0003fe6e0) Create stream\nI0421 13:08:25.458677 763 log.go:172] (0xc000130dc0) (0xc0003fe6e0) Stream added, broadcasting: 1\nI0421 13:08:25.461555 763 log.go:172] (0xc000130dc0) Reply frame received for 1\nI0421 13:08:25.461628 763 log.go:172] (0xc000130dc0) (0xc000966000) Create stream\nI0421 13:08:25.461673 763 log.go:172] (0xc000130dc0) (0xc000966000) Stream added, broadcasting: 3\nI0421 13:08:25.463836 763 log.go:172] (0xc000130dc0) Reply frame received for 3\nI0421 13:08:25.463866 763 log.go:172] (0xc000130dc0) (0xc0005f43c0) Create stream\nI0421 13:08:25.463876 763 log.go:172] (0xc000130dc0) (0xc0005f43c0) Stream added, broadcasting: 5\nI0421 13:08:25.464682 763 log.go:172] (0xc000130dc0) Reply frame received for 5\nI0421 13:08:25.524299 763 log.go:172] (0xc000130dc0) Data frame received for 5\nI0421 13:08:25.524336 763 log.go:172] (0xc0005f43c0) (5) Data frame handling\nI0421 13:08:25.524357 763 log.go:172] (0xc0005f43c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0421 13:08:25.546008 763 log.go:172] (0xc000130dc0) Data frame received for 3\nI0421 13:08:25.546060 763 log.go:172] (0xc000966000) (3) Data frame handling\nI0421 13:08:25.546108 763 log.go:172] (0xc000966000) (3) Data frame sent\nI0421 13:08:25.546150 763 log.go:172] (0xc000130dc0) Data frame received for 3\nI0421 13:08:25.546189 763 log.go:172] (0xc000966000) (3) Data frame handling\nI0421 13:08:25.546358 763 log.go:172] (0xc000130dc0) Data frame received for 5\nI0421 13:08:25.546387 763 log.go:172] (0xc0005f43c0) (5) Data frame handling\nI0421 13:08:25.547855 763 log.go:172] (0xc000130dc0) Data frame received for 1\nI0421 13:08:25.547893 763 log.go:172] (0xc0003fe6e0) (1) Data frame handling\nI0421 
13:08:25.547911 763 log.go:172] (0xc0003fe6e0) (1) Data frame sent\nI0421 13:08:25.548025 763 log.go:172] (0xc000130dc0) (0xc0003fe6e0) Stream removed, broadcasting: 1\nI0421 13:08:25.548074 763 log.go:172] (0xc000130dc0) Go away received\nI0421 13:08:25.548572 763 log.go:172] (0xc000130dc0) (0xc0003fe6e0) Stream removed, broadcasting: 1\nI0421 13:08:25.548594 763 log.go:172] (0xc000130dc0) (0xc000966000) Stream removed, broadcasting: 3\nI0421 13:08:25.548605 763 log.go:172] (0xc000130dc0) (0xc0005f43c0) Stream removed, broadcasting: 5\n" Apr 21 13:08:25.555: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 21 13:08:25.555: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 21 13:08:25.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8555 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 21 13:08:26.824: INFO: stderr: "I0421 13:08:25.683596 784 log.go:172] (0xc000118dc0) (0xc0009266e0) Create stream\nI0421 13:08:25.683652 784 log.go:172] (0xc000118dc0) (0xc0009266e0) Stream added, broadcasting: 1\nI0421 13:08:25.686367 784 log.go:172] (0xc000118dc0) Reply frame received for 1\nI0421 13:08:25.686397 784 log.go:172] (0xc000118dc0) (0xc00040c320) Create stream\nI0421 13:08:25.686405 784 log.go:172] (0xc000118dc0) (0xc00040c320) Stream added, broadcasting: 3\nI0421 13:08:25.687447 784 log.go:172] (0xc000118dc0) Reply frame received for 3\nI0421 13:08:25.687486 784 log.go:172] (0xc000118dc0) (0xc000926780) Create stream\nI0421 13:08:25.687497 784 log.go:172] (0xc000118dc0) (0xc000926780) Stream added, broadcasting: 5\nI0421 13:08:25.688728 784 log.go:172] (0xc000118dc0) Reply frame received for 5\nI0421 13:08:25.758749 784 log.go:172] (0xc000118dc0) Data frame received for 5\nI0421 13:08:25.758777 784 log.go:172] (0xc000926780) (5) Data frame handling\nI0421 
13:08:25.758794 784 log.go:172] (0xc000926780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0421 13:08:26.815825 784 log.go:172] (0xc000118dc0) Data frame received for 3\nI0421 13:08:26.815851 784 log.go:172] (0xc00040c320) (3) Data frame handling\nI0421 13:08:26.815869 784 log.go:172] (0xc00040c320) (3) Data frame sent\nI0421 13:08:26.815968 784 log.go:172] (0xc000118dc0) Data frame received for 5\nI0421 13:08:26.815995 784 log.go:172] (0xc000926780) (5) Data frame handling\nI0421 13:08:26.816175 784 log.go:172] (0xc000118dc0) Data frame received for 3\nI0421 13:08:26.816273 784 log.go:172] (0xc00040c320) (3) Data frame handling\nI0421 13:08:26.819061 784 log.go:172] (0xc000118dc0) Data frame received for 1\nI0421 13:08:26.819079 784 log.go:172] (0xc0009266e0) (1) Data frame handling\nI0421 13:08:26.819091 784 log.go:172] (0xc0009266e0) (1) Data frame sent\nI0421 13:08:26.819104 784 log.go:172] (0xc000118dc0) (0xc0009266e0) Stream removed, broadcasting: 1\nI0421 13:08:26.819256 784 log.go:172] (0xc000118dc0) Go away received\nI0421 13:08:26.819443 784 log.go:172] (0xc000118dc0) (0xc0009266e0) Stream removed, broadcasting: 1\nI0421 13:08:26.819457 784 log.go:172] (0xc000118dc0) (0xc00040c320) Stream removed, broadcasting: 3\nI0421 13:08:26.819466 784 log.go:172] (0xc000118dc0) (0xc000926780) Stream removed, broadcasting: 5\n" Apr 21 13:08:26.824: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 21 13:08:26.824: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 21 13:08:26.824: INFO: Waiting for statefulset status.replicas updated to 0 Apr 21 13:08:26.888: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 21 13:08:36.912: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 21 13:08:36.912: INFO: Waiting for pod ss-1 to enter Running - Ready=false, 
currently Running - Ready=false Apr 21 13:08:36.913: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 21 13:08:36.970: INFO: POD NODE PHASE GRACE CONDITIONS Apr 21 13:08:36.970: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:07:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:07:44 +0000 UTC }] Apr 21 13:08:36.970: INFO: ss-1 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:04 +0000 UTC }] Apr 21 13:08:36.970: INFO: ss-2 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:04 +0000 UTC }] Apr 21 13:08:36.970: INFO: Apr 21 13:08:36.970: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 21 13:08:38.219: INFO: POD NODE PHASE GRACE CONDITIONS Apr 21 13:08:38.219: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:07:44 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:07:44 +0000 UTC }] Apr 21 13:08:38.219: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:04 +0000 UTC }] Apr 21 13:08:38.219: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:04 +0000 UTC }] Apr 21 13:08:38.219: INFO: Apr 21 13:08:38.219: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 21 13:08:39.224: INFO: POD NODE PHASE GRACE CONDITIONS Apr 21 13:08:39.224: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:07:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:07:44 +0000 UTC }] Apr 21 
13:08:39.224: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:04 +0000 UTC }] Apr 21 13:08:39.224: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:04 +0000 UTC }] Apr 21 13:08:39.224: INFO: Apr 21 13:08:39.224: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 21 13:08:40.272: INFO: POD NODE PHASE GRACE CONDITIONS Apr 21 13:08:40.272: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:07:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:07:44 +0000 UTC }] Apr 21 13:08:40.272: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:25 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:04 +0000 UTC }] Apr 21 13:08:40.272: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:04 +0000 UTC }] Apr 21 13:08:40.273: INFO: Apr 21 13:08:40.273: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 21 13:08:41.276: INFO: POD NODE PHASE GRACE CONDITIONS Apr 21 13:08:41.276: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:08:04 +0000 UTC }] Apr 21 13:08:41.276: INFO: Apr 21 13:08:41.276: INFO: StatefulSet ss has not reached scale 0, at 1 Apr 21 13:08:42.280: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.643004925s Apr 21 13:08:43.285: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.638759821s Apr 21 13:08:44.289: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.63384769s Apr 21 13:08:45.294: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.629515782s Apr 21 13:08:46.298: INFO: Verifying statefulset ss doesn't scale past 0 for another 624.966405ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-8555 
Apr 21 13:08:47.303: INFO: Scaling statefulset ss to 0 Apr 21 13:08:47.313: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 21 13:08:47.315: INFO: Deleting all statefulset in ns statefulset-8555 Apr 21 13:08:47.318: INFO: Scaling statefulset ss to 0 Apr 21 13:08:47.326: INFO: Waiting for statefulset status.replicas updated to 0 Apr 21 13:08:47.328: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:08:47.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8555" for this suite. Apr 21 13:08:53.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:08:53.425: INFO: namespace statefulset-8555 deletion completed in 6.08034689s • [SLOW TEST:69.453 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:08:53.426: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0421 13:09:03.576726 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 21 13:09:03.576: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:09:03.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6079" for this suite. 
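[Editor's note] The garbage-collector behavior exercised above — RC-owned pods being deleted rather than orphaned when the RC is removed — corresponds to cascading deletion selected via the `propagationPolicy` field of `DeleteOptions`. A minimal sketch of the request body, not the test's actual payload:

```yaml
# DeleteOptions body for DELETE /api/v1/namespaces/<ns>/replicationcontrollers/<name>
# "Background" deletes the RC first and lets the garbage collector remove the
# dependent pods afterwards; "Orphan" would leave the pods running unowned.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Background
```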
Apr 21 13:09:09.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:09:09.725: INFO: namespace gc-6079 deletion completed in 6.145534917s • [SLOW TEST:16.300 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:09:09.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 21 13:09:09.868: INFO: Waiting up to 5m0s for pod "downwardapi-volume-09fc2e5f-abab-4bfc-ac37-ad50c517e633" in namespace "downward-api-4865" to be "success or failure" Apr 21 13:09:09.871: INFO: Pod "downwardapi-volume-09fc2e5f-abab-4bfc-ac37-ad50c517e633": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.514052ms Apr 21 13:09:11.876: INFO: Pod "downwardapi-volume-09fc2e5f-abab-4bfc-ac37-ad50c517e633": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007934133s Apr 21 13:09:13.880: INFO: Pod "downwardapi-volume-09fc2e5f-abab-4bfc-ac37-ad50c517e633": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01224222s STEP: Saw pod success Apr 21 13:09:13.880: INFO: Pod "downwardapi-volume-09fc2e5f-abab-4bfc-ac37-ad50c517e633" satisfied condition "success or failure" Apr 21 13:09:13.883: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-09fc2e5f-abab-4bfc-ac37-ad50c517e633 container client-container: STEP: delete the pod Apr 21 13:09:14.042: INFO: Waiting for pod downwardapi-volume-09fc2e5f-abab-4bfc-ac37-ad50c517e633 to disappear Apr 21 13:09:14.069: INFO: Pod downwardapi-volume-09fc2e5f-abab-4bfc-ac37-ad50c517e633 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:09:14.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4865" for this suite. 
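[Editor's note] The Downward API test above creates a pod whose volume exposes the container's CPU request as a file and then reads that file back from the container logs. A minimal manifest sketch of the mechanism (pod and file names are illustrative, not the test's generated names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                   # the value surfaced through the volume
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m               # request is reported in units of the divisor
```

With `divisor: 1m`, a `250m` request is written to the file as `250`.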
Apr 21 13:09:20.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:09:20.272: INFO: namespace downward-api-4865 deletion completed in 6.198691784s • [SLOW TEST:10.546 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:09:20.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-8090 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Apr 21 13:09:20.356: INFO: Found 0 stateful pods, waiting for 3 Apr 21 13:09:30.363: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently 
Running - Ready=true Apr 21 13:09:30.363: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 21 13:09:30.363: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 21 13:09:40.365: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 21 13:09:40.365: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 21 13:09:40.365: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Apr 21 13:09:40.390: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 21 13:09:50.460: INFO: Updating stateful set ss2 Apr 21 13:09:50.546: INFO: Waiting for Pod statefulset-8090/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Apr 21 13:10:00.722: INFO: Found 2 stateful pods, waiting for 3 Apr 21 13:10:10.727: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 21 13:10:10.727: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 21 13:10:10.727: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 21 13:10:10.750: INFO: Updating stateful set ss2 Apr 21 13:10:10.762: INFO: Waiting for Pod statefulset-8090/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 21 13:10:20.787: INFO: Updating stateful set ss2 Apr 21 13:10:20.844: INFO: Waiting for StatefulSet statefulset-8090/ss2 to complete update Apr 21 13:10:20.844: INFO: Waiting for Pod statefulset-8090/ss2-0 to have 
revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 21 13:10:30.853: INFO: Waiting for StatefulSet statefulset-8090/ss2 to complete update Apr 21 13:10:30.853: INFO: Waiting for Pod statefulset-8090/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 21 13:10:40.852: INFO: Deleting all statefulset in ns statefulset-8090 Apr 21 13:10:40.855: INFO: Scaling statefulset ss2 to 0 Apr 21 13:11:10.873: INFO: Waiting for statefulset status.replicas updated to 0 Apr 21 13:11:10.877: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:11:10.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8090" for this suite. Apr 21 13:11:16.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:11:17.021: INFO: namespace statefulset-8090 deletion completed in 6.112422928s • [SLOW TEST:116.749 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:11:17.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 21 13:11:21.105: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:11:21.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6264" for this suite. 
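[Editor's note] The assertion above (`Expected: &{} to match Container's Termination Message: --`) reflects that `FallbackToLogsOnError` only substitutes container logs for the termination message when the container *fails*; a successful container that writes nothing to its termination-message file reports an empty message. A minimal pod sketch of that configuration (name is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["true"]               # exits 0 and writes no termination message
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
```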
Apr 21 13:11:27.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:11:27.267: INFO: namespace container-runtime-6264 deletion completed in 6.094173633s
• [SLOW TEST:10.245 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] ReplicationController
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:11:27.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Apr 21 13:11:27.327: INFO: Pod name pod-release: Found 0 pods out of 1
Apr 21 13:11:32.331: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:11:33.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-997" for this suite.
Apr 21 13:11:39.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:11:39.529: INFO: namespace replication-controller-997 deletion completed in 6.176050663s
• [SLOW TEST:12.262 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:11:39.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 21 13:11:39.586: INFO: Waiting up to 5m0s for pod "pod-bf653fd6-cc9b-48a7-ba00-42e043267bb6" in namespace "emptydir-7434" to be "success or failure"
Apr 21 13:11:39.601: INFO: Pod "pod-bf653fd6-cc9b-48a7-ba00-42e043267bb6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.194288ms
Apr 21 13:11:41.618: INFO: Pod "pod-bf653fd6-cc9b-48a7-ba00-42e043267bb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031308465s
Apr 21 13:11:43.621: INFO: Pod "pod-bf653fd6-cc9b-48a7-ba00-42e043267bb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034930085s
STEP: Saw pod success
Apr 21 13:11:43.621: INFO: Pod "pod-bf653fd6-cc9b-48a7-ba00-42e043267bb6" satisfied condition "success or failure"
Apr 21 13:11:43.623: INFO: Trying to get logs from node iruya-worker pod pod-bf653fd6-cc9b-48a7-ba00-42e043267bb6 container test-container:
STEP: delete the pod
Apr 21 13:11:43.651: INFO: Waiting for pod pod-bf653fd6-cc9b-48a7-ba00-42e043267bb6 to disappear
Apr 21 13:11:43.662: INFO: Pod pod-bf653fd6-cc9b-48a7-ba00-42e043267bb6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:11:43.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7434" for this suite.
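The (non-root,0666,tmpfs) emptyDir variant above runs as a non-root user against a memory-backed volume and checks the resulting file mode. A rough sketch of the idea; the pod name, UID, image, and command are assumptions (the real test uses a dedicated mount-test image and a generated name):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo      # hypothetical; the test uses a UUID-suffixed name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001             # the "non-root" part of the variant (assumed UID)
  containers:
  - name: test-container
    image: busybox              # assumed image
    # Write a file and print its mode; the e2e test asserts the 0666 mode
    # and the tmpfs filesystem type from inside the container.
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory            # "tmpfs": memory-backed emptyDir
```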
Apr 21 13:11:49.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:11:49.763: INFO: namespace emptydir-7434 deletion completed in 6.096059659s
• [SLOW TEST:10.233 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition
  creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:11:49.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 21 13:11:49.819: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:11:50.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6122" for this suite.
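The CRD spec above just creates and deletes a randomly named CustomResourceDefinition. On a v1.15 server like this one, CRDs use the `apiextensions.k8s.io/v1beta1` API; the shape is the standard upstream example (the names below are the documentation's CronTab sample, not what the test generates):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1   # v1beta1 matches the v1.15.7 apiserver above
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com        # must be <plural>.<group>
spec:
  group: stable.example.com
  versions:
  - name: v1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
```

Deleting the CRD (as the test does) also removes all custom objects of that kind.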
Apr 21 13:11:57.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:11:57.073: INFO: namespace custom-resource-definition-6122 deletion completed in 6.084587659s
• [SLOW TEST:7.310 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:11:57.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-7234
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-7234
STEP: Deleting pre-stop pod
Apr 21 13:12:10.241: INFO: Saw: {
  "Hostname": "server",
  "Sent": null,
  "Received": {
    "prestop": 1
  },
  "Errors": null,
  "Log": [
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
  ],
  "StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:12:10.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-7234" for this suite.
Apr 21 13:12:49.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:12:50.493: INFO: namespace prestop-7234 deletion completed in 40.226161239s
• [SLOW TEST:53.419 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:12:50.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-6b7d5c0a-4cb4-4d31-9f63-e4ad85b11cab
STEP: Creating a pod to test consume configMaps
Apr 21 13:12:50.591: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-652da62e-95c7-44d0-9a33-aa6232ded81d" in namespace "projected-471" to be "success or failure"
Apr 21 13:12:50.597: INFO: Pod "pod-projected-configmaps-652da62e-95c7-44d0-9a33-aa6232ded81d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104982ms
Apr 21 13:12:52.607: INFO: Pod "pod-projected-configmaps-652da62e-95c7-44d0-9a33-aa6232ded81d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01637511s
Apr 21 13:12:54.612: INFO: Pod "pod-projected-configmaps-652da62e-95c7-44d0-9a33-aa6232ded81d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020903811s
STEP: Saw pod success
Apr 21 13:12:54.612: INFO: Pod "pod-projected-configmaps-652da62e-95c7-44d0-9a33-aa6232ded81d" satisfied condition "success or failure"
Apr 21 13:12:54.616: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-652da62e-95c7-44d0-9a33-aa6232ded81d container projected-configmap-volume-test:
STEP: delete the pod
Apr 21 13:12:54.687: INFO: Waiting for pod pod-projected-configmaps-652da62e-95c7-44d0-9a33-aa6232ded81d to disappear
Apr 21 13:12:54.699: INFO: Pod pod-projected-configmaps-652da62e-95c7-44d0-9a33-aa6232ded81d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:12:54.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-471" for this suite.
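The projected configMap test above mounts a ConfigMap through a `projected` volume and reads a key back as a non-root user. A sketch under assumed names and data (the real test uses UUID-suffixed names and its own test image):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-demo   # hypothetical; the test's name is UUID-suffixed
data:
  data-1: value-1                  # assumed key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # "as non-root" (assumed UID)
  containers:
  - name: projected-configmap-volume-test
    image: busybox                 # assumed image
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: configmap-volume
    projected:                     # projected volume wrapping a configMap source
      sources:
      - configMap:
          name: projected-configmap-demo
```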
Apr 21 13:13:00.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:13:00.802: INFO: namespace projected-471 deletion completed in 6.100834326s
• [SLOW TEST:10.309 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe
  should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:13:00.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 21 13:13:00.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3770'
Apr 21 13:13:01.206: INFO: stderr: ""
Apr 21 13:13:01.206: INFO: stdout: "replicationcontroller/redis-master created\n"
Apr 21 13:13:01.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3770'
Apr 21 13:13:01.505: INFO: stderr: ""
Apr 21 13:13:01.505: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Apr 21 13:13:02.510: INFO: Selector matched 1 pods for map[app:redis]
Apr 21 13:13:02.510: INFO: Found 0 / 1
Apr 21 13:13:03.509: INFO: Selector matched 1 pods for map[app:redis]
Apr 21 13:13:03.510: INFO: Found 0 / 1
Apr 21 13:13:04.529: INFO: Selector matched 1 pods for map[app:redis]
Apr 21 13:13:04.529: INFO: Found 1 / 1
Apr 21 13:13:04.530: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 21 13:13:04.532: INFO: Selector matched 1 pods for map[app:redis]
Apr 21 13:13:04.532: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 21 13:13:04.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-9krm4 --namespace=kubectl-3770'
Apr 21 13:13:04.635: INFO: stderr: ""
Apr 21 13:13:04.636: INFO: stdout: "Name: redis-master-9krm4\nNamespace: kubectl-3770\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Tue, 21 Apr 2020 13:13:01 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.150\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://6fa691b0b62d25c598ad6fa1999e57887d2edcd28e441b39db892bb6c5c8570e\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 21 Apr 2020 13:13:03 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-jqrhp (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-jqrhp:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-jqrhp\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-3770/redis-master-9krm4 to iruya-worker\n Normal Pulled 2s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, iruya-worker Created container redis-master\n Normal Started 1s kubelet, iruya-worker Started container redis-master\n"
Apr 21 13:13:04.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-3770'
Apr 21 13:13:04.750: INFO: stderr: ""
Apr 21 13:13:04.751: INFO: stdout: "Name: redis-master\nNamespace: kubectl-3770\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: redis-master-9krm4\n"
Apr 21 13:13:04.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-3770'
Apr 21 13:13:04.856: INFO: stderr: ""
Apr 21 13:13:04.856: INFO: stdout: "Name: redis-master\nNamespace: kubectl-3770\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.111.43.41\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.150:6379\nSession Affinity: None\nEvents: \n"
Apr 21 13:13:04.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
Apr 21 13:13:04.978: INFO: stderr: ""
Apr 21 13:13:04.978: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 21 Apr 2020 13:12:23 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 21 Apr 2020 13:12:23 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 21 Apr 2020 13:12:23 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 21 Apr 2020 13:12:23 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 36d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 36d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 36d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 36d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 36d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 36d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 36d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n"
Apr 21 13:13:04.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-3770'
Apr 21 13:13:05.079: INFO: stderr: ""
Apr 21 13:13:05.079: INFO: stdout: "Name: kubectl-3770\nLabels: e2e-framework=kubectl\n e2e-run=58d493cb-a5ae-4aa7-a91a-ec37020c1d44\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:13:05.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3770" for this suite.
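The `kubectl describe rc redis-master` output above implies a ReplicationController of roughly this shape, reconstructed from the fields shown in the describe output (indentation and field order are mine):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  replicas: 1
  selector:            # matches "Selector: app=redis,role=master" above
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - containerPort: 6379
```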
Apr 21 13:13:27.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:13:27.175: INFO: namespace kubectl-3770 deletion completed in 22.092261931s
• [SLOW TEST:26.372 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:13:27.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 21 13:13:27.255: INFO: Waiting up to 5m0s for pod "pod-164692ca-d8fd-46a7-b36c-0004965f6f48" in namespace "emptydir-3580" to be "success or failure"
Apr 21 13:13:27.263: INFO: Pod "pod-164692ca-d8fd-46a7-b36c-0004965f6f48": Phase="Pending", Reason="", readiness=false. Elapsed: 7.577001ms
Apr 21 13:13:29.266: INFO: Pod "pod-164692ca-d8fd-46a7-b36c-0004965f6f48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010778467s
Apr 21 13:13:31.270: INFO: Pod "pod-164692ca-d8fd-46a7-b36c-0004965f6f48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014693925s
STEP: Saw pod success
Apr 21 13:13:31.270: INFO: Pod "pod-164692ca-d8fd-46a7-b36c-0004965f6f48" satisfied condition "success or failure"
Apr 21 13:13:31.273: INFO: Trying to get logs from node iruya-worker2 pod pod-164692ca-d8fd-46a7-b36c-0004965f6f48 container test-container:
STEP: delete the pod
Apr 21 13:13:31.323: INFO: Waiting for pod pod-164692ca-d8fd-46a7-b36c-0004965f6f48 to disappear
Apr 21 13:13:31.347: INFO: Pod pod-164692ca-d8fd-46a7-b36c-0004965f6f48 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:13:31.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3580" for this suite.
Apr 21 13:13:37.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:13:37.430: INFO: namespace emptydir-3580 deletion completed in 6.079936814s
• [SLOW TEST:10.254 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:13:37.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-bab734d1-7fd0-4687-b848-5f8c6ed434d3 in namespace container-probe-1622
Apr 21 13:13:41.562: INFO: Started pod liveness-bab734d1-7fd0-4687-b848-5f8c6ed434d3 in namespace container-probe-1622
STEP: checking the pod's current state and verifying that restartCount is present
Apr 21 13:13:41.565: INFO: Initial restart count of pod liveness-bab734d1-7fd0-4687-b848-5f8c6ed434d3 is 0
Apr 21 13:13:53.593: INFO: Restart count of pod container-probe-1622/liveness-bab734d1-7fd0-4687-b848-5f8c6ed434d3 is now 1 (12.027591261s elapsed)
Apr 21 13:14:13.642: INFO: Restart count of pod container-probe-1622/liveness-bab734d1-7fd0-4687-b848-5f8c6ed434d3 is now 2 (32.076845429s elapsed)
Apr 21 13:14:33.683: INFO: Restart count of pod container-probe-1622/liveness-bab734d1-7fd0-4687-b848-5f8c6ed434d3 is now 3 (52.117821586s elapsed)
Apr 21 13:14:53.724: INFO: Restart count of pod container-probe-1622/liveness-bab734d1-7fd0-4687-b848-5f8c6ed434d3 is now 4 (1m12.159467703s elapsed)
Apr 21 13:15:54.582: INFO: Restart count of pod container-probe-1622/liveness-bab734d1-7fd0-4687-b848-5f8c6ed434d3 is now 5 (2m13.016724606s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:15:54.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1622" for this suite.
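The steadily climbing restart count above is produced by a pod whose liveness probe keeps failing: each failure makes the kubelet kill the container, and the default `restartPolicy: Always` restarts it, incrementing `restartCount`. A sketch of a pod with that behavior; the name, image, command, and probe timings are assumptions, not the exact spec the test builds:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo           # hypothetical; the test uses a UUID-suffixed name
spec:
  containers:
  - name: liveness
    image: busybox              # assumed image
    # Healthy for 10s, then the probe target disappears and every probe fails.
    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5    # assumed timings; they set the restart cadence
      periodSeconds: 5
  # restartPolicy defaults to Always, so each probe-triggered kill
  # becomes another restart and restartCount only ever increases.
```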
Apr 21 13:16:01.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:16:01.704: INFO: namespace container-probe-1622 deletion completed in 7.041566602s
• [SLOW TEST:144.274 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label
  should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:16:01.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Apr 21 13:16:01.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2489'
Apr 21 13:16:02.025: INFO: stderr: ""
Apr 21 13:16:02.026: INFO: stdout: "pod/pause created\n"
Apr 21 13:16:02.026: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Apr 21 13:16:02.026: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2489" to be "running and ready"
Apr 21 13:16:02.040: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 14.818226ms
Apr 21 13:16:04.151: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125111656s
Apr 21 13:16:06.156: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.129923786s
Apr 21 13:16:06.156: INFO: Pod "pause" satisfied condition "running and ready"
Apr 21 13:16:06.156: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Apr 21 13:16:06.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2489'
Apr 21 13:16:06.258: INFO: stderr: ""
Apr 21 13:16:06.258: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Apr 21 13:16:06.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2489'
Apr 21 13:16:06.359: INFO: stderr: ""
Apr 21 13:16:06.359: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n"
STEP: removing the label testing-label of a pod
Apr 21 13:16:06.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2489'
Apr 21 13:16:06.462: INFO: stderr: ""
Apr 21 13:16:06.462: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Apr 21 13:16:06.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2489'
Apr 21 13:16:06.557: INFO: stderr: ""
Apr 21 13:16:06.557: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Apr 21 13:16:06.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2489'
Apr 21 13:16:06.691: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 21 13:16:06.691: INFO: stdout: "pod \"pause\" force deleted\n"
Apr 21 13:16:06.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2489'
Apr 21 13:16:06.793: INFO: stderr: "No resources found.\n"
Apr 21 13:16:06.793: INFO: stdout: ""
Apr 21 13:16:06.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2489 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 21 13:16:06.883: INFO: stderr: ""
Apr 21 13:16:06.884: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:16:06.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2489" for this suite.
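The label test above only needs a long-running target pod. A sketch of such a pod (the image tag is an assumption; any container that stays running works), with the exact label and unlabel commands from the log reproduced as comments:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pause
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # assumed image; any long-running container works
# Add the label (command taken verbatim from the log):
#   kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-2489
# Remove it again using the trailing-dash form:
#   kubectl label pods pause testing-label- --namespace=kubectl-2489
# The -L flag on `kubectl get pod pause -L testing-label` prints the label
# value as an extra column, which is how the test verifies both states.
```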
Apr 21 13:16:13.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:16:13.128: INFO: namespace kubectl-2489 deletion completed in 6.240559721s • [SLOW TEST:11.423 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:16:13.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components Apr 21 13:16:13.200: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Apr 21 13:16:13.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3576' Apr 21 13:16:14.564: INFO: 
stderr: ""
Apr 21 13:16:14.564: INFO: stdout: "service/redis-slave created\n"
Apr 21 13:16:14.564: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Apr 21 13:16:14.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3576'
Apr 21 13:16:14.872: INFO: stderr: ""
Apr 21 13:16:14.872: INFO: stdout: "service/redis-master created\n"
Apr 21 13:16:14.872: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Apr 21 13:16:14.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3576'
Apr 21 13:16:15.191: INFO: stderr: ""
Apr 21 13:16:15.191: INFO: stdout: "service/frontend created\n"
Apr 21 13:16:15.191: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Apr 21 13:16:15.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3576'
Apr 21 13:16:15.447: INFO: stderr: ""
Apr 21 13:16:15.447: INFO: stdout: "deployment.apps/frontend created\n"
Apr 21 13:16:15.447: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 21 13:16:15.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3576'
Apr 21 13:16:15.768: INFO: stderr: ""
Apr 21 13:16:15.768: INFO: stdout: "deployment.apps/redis-master created\n"
Apr 21 13:16:15.768: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Apr 21 13:16:15.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3576'
Apr 21 13:16:16.073: INFO: stderr: ""
Apr 21 13:16:16.073: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Apr 21 13:16:16.073: INFO: Waiting for all frontend pods to be Running.
Apr 21 13:16:26.124: INFO: Waiting for frontend to serve content.
Apr 21 13:16:26.141: INFO: Trying to add a new entry to the guestbook.
Apr 21 13:16:26.153: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources Apr 21 13:16:26.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3576' Apr 21 13:16:26.347: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 21 13:16:26.347: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Apr 21 13:16:26.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3576' Apr 21 13:16:26.508: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 21 13:16:26.508: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Apr 21 13:16:26.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3576' Apr 21 13:16:26.626: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 21 13:16:26.626: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 21 13:16:26.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3576' Apr 21 13:16:26.729: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 21 13:16:26.729: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 21 13:16:26.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3576' Apr 21 13:16:26.830: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 21 13:16:26.830: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Apr 21 13:16:26.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3576' Apr 21 13:16:26.965: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 21 13:16:26.965: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:16:26.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3576" for this suite. 
Apr 21 13:17:07.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:17:07.089: INFO: namespace kubectl-3576 deletion completed in 40.112038714s • [SLOW TEST:53.961 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:17:07.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9378.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9378.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9378.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9378.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results 
for each expected name from probers Apr 21 13:17:13.177: INFO: DNS probes using dns-test-20684160-4efe-4b15-8448-acfb7d8371b9 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9378.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9378.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9378.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9378.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 21 13:17:19.271: INFO: File wheezy_udp@dns-test-service-3.dns-9378.svc.cluster.local from pod dns-9378/dns-test-6621e81c-4a36-4852-b097-1f8ea9bd29c7 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 21 13:17:19.274: INFO: File jessie_udp@dns-test-service-3.dns-9378.svc.cluster.local from pod dns-9378/dns-test-6621e81c-4a36-4852-b097-1f8ea9bd29c7 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 21 13:17:19.274: INFO: Lookups using dns-9378/dns-test-6621e81c-4a36-4852-b097-1f8ea9bd29c7 failed for: [wheezy_udp@dns-test-service-3.dns-9378.svc.cluster.local jessie_udp@dns-test-service-3.dns-9378.svc.cluster.local] Apr 21 13:17:24.279: INFO: File wheezy_udp@dns-test-service-3.dns-9378.svc.cluster.local from pod dns-9378/dns-test-6621e81c-4a36-4852-b097-1f8ea9bd29c7 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 21 13:17:24.283: INFO: File jessie_udp@dns-test-service-3.dns-9378.svc.cluster.local from pod dns-9378/dns-test-6621e81c-4a36-4852-b097-1f8ea9bd29c7 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 21 13:17:24.283: INFO: Lookups using dns-9378/dns-test-6621e81c-4a36-4852-b097-1f8ea9bd29c7 failed for: [wheezy_udp@dns-test-service-3.dns-9378.svc.cluster.local jessie_udp@dns-test-service-3.dns-9378.svc.cluster.local] Apr 21 13:17:29.279: INFO: File wheezy_udp@dns-test-service-3.dns-9378.svc.cluster.local from pod dns-9378/dns-test-6621e81c-4a36-4852-b097-1f8ea9bd29c7 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 21 13:17:29.284: INFO: File jessie_udp@dns-test-service-3.dns-9378.svc.cluster.local from pod dns-9378/dns-test-6621e81c-4a36-4852-b097-1f8ea9bd29c7 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 21 13:17:29.284: INFO: Lookups using dns-9378/dns-test-6621e81c-4a36-4852-b097-1f8ea9bd29c7 failed for: [wheezy_udp@dns-test-service-3.dns-9378.svc.cluster.local jessie_udp@dns-test-service-3.dns-9378.svc.cluster.local] Apr 21 13:17:34.280: INFO: File wheezy_udp@dns-test-service-3.dns-9378.svc.cluster.local from pod dns-9378/dns-test-6621e81c-4a36-4852-b097-1f8ea9bd29c7 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 21 13:17:34.284: INFO: File jessie_udp@dns-test-service-3.dns-9378.svc.cluster.local from pod dns-9378/dns-test-6621e81c-4a36-4852-b097-1f8ea9bd29c7 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 21 13:17:34.284: INFO: Lookups using dns-9378/dns-test-6621e81c-4a36-4852-b097-1f8ea9bd29c7 failed for: [wheezy_udp@dns-test-service-3.dns-9378.svc.cluster.local jessie_udp@dns-test-service-3.dns-9378.svc.cluster.local] Apr 21 13:17:39.280: INFO: File wheezy_udp@dns-test-service-3.dns-9378.svc.cluster.local from pod dns-9378/dns-test-6621e81c-4a36-4852-b097-1f8ea9bd29c7 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 21 13:17:39.284: INFO: File jessie_udp@dns-test-service-3.dns-9378.svc.cluster.local from pod dns-9378/dns-test-6621e81c-4a36-4852-b097-1f8ea9bd29c7 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 21 13:17:39.284: INFO: Lookups using dns-9378/dns-test-6621e81c-4a36-4852-b097-1f8ea9bd29c7 failed for: [wheezy_udp@dns-test-service-3.dns-9378.svc.cluster.local jessie_udp@dns-test-service-3.dns-9378.svc.cluster.local] Apr 21 13:17:44.283: INFO: DNS probes using dns-test-6621e81c-4a36-4852-b097-1f8ea9bd29c7 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9378.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9378.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9378.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9378.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 21 13:17:50.815: INFO: DNS probes using dns-test-b62c0732-3eed-4b7b-9d2c-2b7034cf8a44 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:17:50.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9378" for this suite. 
Apr 21 13:17:56.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:17:57.030: INFO: namespace dns-9378 deletion completed in 6.087458722s • [SLOW TEST:49.941 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:17:57.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 21 13:17:57.710: INFO: Pod name wrapped-volume-race-206ad239-2a0d-42d3-a619-e50d2f7d7877: Found 0 pods out of 5 Apr 21 13:18:02.719: INFO: Pod name wrapped-volume-race-206ad239-2a0d-42d3-a619-e50d2f7d7877: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-206ad239-2a0d-42d3-a619-e50d2f7d7877 in namespace emptydir-wrapper-5370, will wait for the garbage collector to delete the pods Apr 21 13:18:14.856: INFO: Deleting ReplicationController wrapped-volume-race-206ad239-2a0d-42d3-a619-e50d2f7d7877 
took: 5.856638ms Apr 21 13:18:15.157: INFO: Terminating ReplicationController wrapped-volume-race-206ad239-2a0d-42d3-a619-e50d2f7d7877 pods took: 300.403512ms STEP: Creating RC which spawns configmap-volume pods Apr 21 13:19:02.286: INFO: Pod name wrapped-volume-race-51e6539a-9270-4c02-b23f-ca8c3c3b8cbd: Found 0 pods out of 5 Apr 21 13:19:07.294: INFO: Pod name wrapped-volume-race-51e6539a-9270-4c02-b23f-ca8c3c3b8cbd: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-51e6539a-9270-4c02-b23f-ca8c3c3b8cbd in namespace emptydir-wrapper-5370, will wait for the garbage collector to delete the pods Apr 21 13:19:21.392: INFO: Deleting ReplicationController wrapped-volume-race-51e6539a-9270-4c02-b23f-ca8c3c3b8cbd took: 21.226305ms Apr 21 13:19:21.693: INFO: Terminating ReplicationController wrapped-volume-race-51e6539a-9270-4c02-b23f-ca8c3c3b8cbd pods took: 300.363485ms STEP: Creating RC which spawns configmap-volume pods Apr 21 13:20:02.436: INFO: Pod name wrapped-volume-race-c71535e3-4a8d-494e-a905-a635138df54a: Found 0 pods out of 5 Apr 21 13:20:07.446: INFO: Pod name wrapped-volume-race-c71535e3-4a8d-494e-a905-a635138df54a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c71535e3-4a8d-494e-a905-a635138df54a in namespace emptydir-wrapper-5370, will wait for the garbage collector to delete the pods Apr 21 13:20:21.552: INFO: Deleting ReplicationController wrapped-volume-race-c71535e3-4a8d-494e-a905-a635138df54a took: 7.463873ms Apr 21 13:20:21.852: INFO: Terminating ReplicationController wrapped-volume-race-c71535e3-4a8d-494e-a905-a635138df54a pods took: 300.278573ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:21:03.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "emptydir-wrapper-5370" for this suite. Apr 21 13:21:11.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:21:11.198: INFO: namespace emptydir-wrapper-5370 deletion completed in 8.098337143s • [SLOW TEST:194.168 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:21:11.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 21 13:21:11.330: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"36d01cdf-cf01-4373-81df-c8f0cea5aff1", Controller:(*bool)(0xc001e9bf8a), BlockOwnerDeletion:(*bool)(0xc001e9bf8b)}} Apr 21 13:21:11.351: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"60670036-8777-4bbd-a2d8-a58534a2d1a7", Controller:(*bool)(0xc001e0ec12), BlockOwnerDeletion:(*bool)(0xc001e0ec13)}} Apr 21 13:21:11.373: INFO: 
pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"0cbff9f0-cb21-49dd-b3ab-39fac6047f54", Controller:(*bool)(0xc0024aa1c2), BlockOwnerDeletion:(*bool)(0xc0024aa1c3)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:21:16.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2511" for this suite. Apr 21 13:21:22.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:21:22.567: INFO: namespace gc-2511 deletion completed in 6.093964334s • [SLOW TEST:11.368 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:21:22.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 21 13:21:22.664: INFO: Creating daemon 
"daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Apr 21 13:21:22.671: INFO: Number of nodes with available pods: 0 Apr 21 13:21:22.671: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Apr 21 13:21:22.771: INFO: Number of nodes with available pods: 0 Apr 21 13:21:22.771: INFO: Node iruya-worker is running more than one daemon pod Apr 21 13:21:23.875: INFO: Number of nodes with available pods: 0 Apr 21 13:21:23.875: INFO: Node iruya-worker is running more than one daemon pod Apr 21 13:21:24.775: INFO: Number of nodes with available pods: 0 Apr 21 13:21:24.775: INFO: Node iruya-worker is running more than one daemon pod Apr 21 13:21:25.775: INFO: Number of nodes with available pods: 0 Apr 21 13:21:25.775: INFO: Node iruya-worker is running more than one daemon pod Apr 21 13:21:26.776: INFO: Number of nodes with available pods: 1 Apr 21 13:21:26.776: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 21 13:21:26.820: INFO: Number of nodes with available pods: 1 Apr 21 13:21:26.820: INFO: Number of running nodes: 0, number of available pods: 1 Apr 21 13:21:27.824: INFO: Number of nodes with available pods: 0 Apr 21 13:21:27.824: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 21 13:21:27.838: INFO: Number of nodes with available pods: 0 Apr 21 13:21:27.838: INFO: Node iruya-worker is running more than one daemon pod Apr 21 13:21:28.843: INFO: Number of nodes with available pods: 0 Apr 21 13:21:28.843: INFO: Node iruya-worker is running more than one daemon pod Apr 21 13:21:29.843: INFO: Number of nodes with available pods: 0 Apr 21 13:21:29.843: INFO: Node iruya-worker is running more than one daemon pod Apr 21 13:21:30.843: INFO: Number 
of nodes with available pods: 0 Apr 21 13:21:30.843: INFO: Node iruya-worker is running more than one daemon pod Apr 21 13:21:31.842: INFO: Number of nodes with available pods: 0 Apr 21 13:21:31.842: INFO: Node iruya-worker is running more than one daemon pod Apr 21 13:21:32.843: INFO: Number of nodes with available pods: 0 Apr 21 13:21:32.843: INFO: Node iruya-worker is running more than one daemon pod Apr 21 13:21:33.843: INFO: Number of nodes with available pods: 0 Apr 21 13:21:33.843: INFO: Node iruya-worker is running more than one daemon pod Apr 21 13:21:34.844: INFO: Number of nodes with available pods: 0 Apr 21 13:21:34.844: INFO: Node iruya-worker is running more than one daemon pod Apr 21 13:21:35.842: INFO: Number of nodes with available pods: 1 Apr 21 13:21:35.842: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6048, will wait for the garbage collector to delete the pods Apr 21 13:21:35.922: INFO: Deleting DaemonSet.extensions daemon-set took: 6.401299ms Apr 21 13:21:36.222: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.272229ms Apr 21 13:21:40.226: INFO: Number of nodes with available pods: 0 Apr 21 13:21:40.226: INFO: Number of running nodes: 0, number of available pods: 0 Apr 21 13:21:40.230: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6048/daemonsets","resourceVersion":"6640768"},"items":null} Apr 21 13:21:40.233: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6048/pods","resourceVersion":"6640768"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:21:40.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6048" for this suite. Apr 21 13:21:46.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:21:46.429: INFO: namespace daemonsets-6048 deletion completed in 6.148479379s • [SLOW TEST:23.862 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:21:46.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 21 13:21:46.497: INFO: Creating ReplicaSet my-hostname-basic-62adb151-35d3-45ea-916a-a0ff5fe60d3c Apr 21 13:21:46.522: INFO: Pod name my-hostname-basic-62adb151-35d3-45ea-916a-a0ff5fe60d3c: Found 0 pods out of 1 Apr 21 13:21:51.527: INFO: Pod name my-hostname-basic-62adb151-35d3-45ea-916a-a0ff5fe60d3c: Found 1 pods out of 1 Apr 21 13:21:51.527: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-62adb151-35d3-45ea-916a-a0ff5fe60d3c" 
is running Apr 21 13:21:51.530: INFO: Pod "my-hostname-basic-62adb151-35d3-45ea-916a-a0ff5fe60d3c-vfhj7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-21 13:21:46 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-21 13:21:49 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-21 13:21:49 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-21 13:21:46 +0000 UTC Reason: Message:}]) Apr 21 13:21:51.530: INFO: Trying to dial the pod Apr 21 13:21:56.542: INFO: Controller my-hostname-basic-62adb151-35d3-45ea-916a-a0ff5fe60d3c: Got expected result from replica 1 [my-hostname-basic-62adb151-35d3-45ea-916a-a0ff5fe60d3c-vfhj7]: "my-hostname-basic-62adb151-35d3-45ea-916a-a0ff5fe60d3c-vfhj7", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:21:56.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6710" for this suite. 
Apr 21 13:22:02.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:22:02.646: INFO: namespace replicaset-6710 deletion completed in 6.100535414s • [SLOW TEST:16.216 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:22:02.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-496 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 21 13:22:02.728: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 21 13:22:28.819: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.36:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-496 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 21 
13:22:28.819: INFO: >>> kubeConfig: /root/.kube/config I0421 13:22:28.857646 6 log.go:172] (0xc00062aa50) (0xc0011df040) Create stream I0421 13:22:28.857676 6 log.go:172] (0xc00062aa50) (0xc0011df040) Stream added, broadcasting: 1 I0421 13:22:28.859821 6 log.go:172] (0xc00062aa50) Reply frame received for 1 I0421 13:22:28.859884 6 log.go:172] (0xc00062aa50) (0xc0002f4140) Create stream I0421 13:22:28.859902 6 log.go:172] (0xc00062aa50) (0xc0002f4140) Stream added, broadcasting: 3 I0421 13:22:28.860882 6 log.go:172] (0xc00062aa50) Reply frame received for 3 I0421 13:22:28.860903 6 log.go:172] (0xc00062aa50) (0xc0011df220) Create stream I0421 13:22:28.860910 6 log.go:172] (0xc00062aa50) (0xc0011df220) Stream added, broadcasting: 5 I0421 13:22:28.862040 6 log.go:172] (0xc00062aa50) Reply frame received for 5 I0421 13:22:28.946935 6 log.go:172] (0xc00062aa50) Data frame received for 5 I0421 13:22:28.946994 6 log.go:172] (0xc0011df220) (5) Data frame handling I0421 13:22:28.947034 6 log.go:172] (0xc00062aa50) Data frame received for 3 I0421 13:22:28.947054 6 log.go:172] (0xc0002f4140) (3) Data frame handling I0421 13:22:28.947080 6 log.go:172] (0xc0002f4140) (3) Data frame sent I0421 13:22:28.947100 6 log.go:172] (0xc00062aa50) Data frame received for 3 I0421 13:22:28.947117 6 log.go:172] (0xc0002f4140) (3) Data frame handling I0421 13:22:28.948939 6 log.go:172] (0xc00062aa50) Data frame received for 1 I0421 13:22:28.948964 6 log.go:172] (0xc0011df040) (1) Data frame handling I0421 13:22:28.948983 6 log.go:172] (0xc0011df040) (1) Data frame sent I0421 13:22:28.949147 6 log.go:172] (0xc00062aa50) (0xc0011df040) Stream removed, broadcasting: 1 I0421 13:22:28.949270 6 log.go:172] (0xc00062aa50) (0xc0011df040) Stream removed, broadcasting: 1 I0421 13:22:28.949294 6 log.go:172] (0xc00062aa50) (0xc0002f4140) Stream removed, broadcasting: 3 I0421 13:22:28.949312 6 log.go:172] (0xc00062aa50) (0xc0011df220) Stream removed, broadcasting: 5 Apr 21 13:22:28.949: INFO: Found all 
expected endpoints: [netserver-0] I0421 13:22:28.949386 6 log.go:172] (0xc00062aa50) Go away received Apr 21 13:22:28.953: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.176:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-496 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 21 13:22:28.953: INFO: >>> kubeConfig: /root/.kube/config I0421 13:22:28.987938 6 log.go:172] (0xc000a14790) (0xc0002f4d20) Create stream I0421 13:22:28.987970 6 log.go:172] (0xc000a14790) (0xc0002f4d20) Stream added, broadcasting: 1 I0421 13:22:28.990439 6 log.go:172] (0xc000a14790) Reply frame received for 1 I0421 13:22:28.990486 6 log.go:172] (0xc000a14790) (0xc00054a960) Create stream I0421 13:22:28.990502 6 log.go:172] (0xc000a14790) (0xc00054a960) Stream added, broadcasting: 3 I0421 13:22:28.991567 6 log.go:172] (0xc000a14790) Reply frame received for 3 I0421 13:22:28.991611 6 log.go:172] (0xc000a14790) (0xc0010186e0) Create stream I0421 13:22:28.991626 6 log.go:172] (0xc000a14790) (0xc0010186e0) Stream added, broadcasting: 5 I0421 13:22:28.992911 6 log.go:172] (0xc000a14790) Reply frame received for 5 I0421 13:22:29.072917 6 log.go:172] (0xc000a14790) Data frame received for 5 I0421 13:22:29.072957 6 log.go:172] (0xc0010186e0) (5) Data frame handling I0421 13:22:29.072994 6 log.go:172] (0xc000a14790) Data frame received for 3 I0421 13:22:29.073019 6 log.go:172] (0xc00054a960) (3) Data frame handling I0421 13:22:29.073049 6 log.go:172] (0xc00054a960) (3) Data frame sent I0421 13:22:29.073060 6 log.go:172] (0xc000a14790) Data frame received for 3 I0421 13:22:29.073079 6 log.go:172] (0xc00054a960) (3) Data frame handling I0421 13:22:29.074734 6 log.go:172] (0xc000a14790) Data frame received for 1 I0421 13:22:29.074759 6 log.go:172] (0xc0002f4d20) (1) Data frame handling I0421 13:22:29.074777 6 log.go:172] (0xc0002f4d20) (1) Data frame sent 
I0421 13:22:29.074806 6 log.go:172] (0xc000a14790) (0xc0002f4d20) Stream removed, broadcasting: 1 I0421 13:22:29.074903 6 log.go:172] (0xc000a14790) (0xc0002f4d20) Stream removed, broadcasting: 1 I0421 13:22:29.074918 6 log.go:172] (0xc000a14790) (0xc00054a960) Stream removed, broadcasting: 3 I0421 13:22:29.075090 6 log.go:172] (0xc000a14790) Go away received I0421 13:22:29.075140 6 log.go:172] (0xc000a14790) (0xc0010186e0) Stream removed, broadcasting: 5 Apr 21 13:22:29.075: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:22:29.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-496" for this suite. Apr 21 13:22:51.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:22:51.192: INFO: namespace pod-network-test-496 deletion completed in 22.111906365s • [SLOW TEST:48.545 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:22:51.192: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9734 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-9734 STEP: Creating statefulset with conflicting port in namespace statefulset-9734 STEP: Waiting until pod test-pod will start running in namespace statefulset-9734 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9734 Apr 21 13:22:55.302: INFO: Observed stateful pod in namespace: statefulset-9734, name: ss-0, uid: 1ee433fe-8a88-4797-a2e2-6c33416b2fba, status phase: Pending. Waiting for statefulset controller to delete. Apr 21 13:23:02.148: INFO: Observed stateful pod in namespace: statefulset-9734, name: ss-0, uid: 1ee433fe-8a88-4797-a2e2-6c33416b2fba, status phase: Failed. Waiting for statefulset controller to delete. Apr 21 13:23:02.200: INFO: Observed stateful pod in namespace: statefulset-9734, name: ss-0, uid: 1ee433fe-8a88-4797-a2e2-6c33416b2fba, status phase: Failed. Waiting for statefulset controller to delete. 
Apr 21 13:23:02.215: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9734 STEP: Removing pod with conflicting port in namespace statefulset-9734 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9734 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 21 13:23:16.325: INFO: Deleting all statefulset in ns statefulset-9734 Apr 21 13:23:16.329: INFO: Scaling statefulset ss to 0 Apr 21 13:23:26.346: INFO: Waiting for statefulset status.replicas updated to 0 Apr 21 13:23:26.349: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:23:26.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9734" for this suite. Apr 21 13:23:32.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:23:32.471: INFO: namespace statefulset-9734 deletion completed in 6.101581196s • [SLOW TEST:41.279 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:23:32.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 21 13:23:32.562: INFO: PodSpec: initContainers in spec.initContainers Apr 21 13:24:18.412: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-0e9af978-03ab-4d74-8f97-62de8c0ef6ea", GenerateName:"", Namespace:"init-container-1249", SelfLink:"/api/v1/namespaces/init-container-1249/pods/pod-init-0e9af978-03ab-4d74-8f97-62de8c0ef6ea", UID:"4af70fa7-ff05-4b38-850a-967b9a3772f3", ResourceVersion:"6641330", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723072212, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"562304814"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-7rhg6", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), 
GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00168e0c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7rhg6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", 
Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7rhg6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7rhg6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00098a088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001acbbc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00098a1f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00098a260)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00098a268), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00098a26c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723072212, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723072212, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723072212, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723072212, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"10.244.1.38", StartTime:(*v1.Time)(0xc0030100a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025ae070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025ae0e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://24d5ffe86e301c77d5f3e96205ea87465cb222cca01fff36a38fa652bac46fdd"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003010100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0030100e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:24:18.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1249" for this suite. Apr 21 13:24:40.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:24:40.571: INFO: namespace init-container-1249 deletion completed in 22.105155596s • [SLOW TEST:68.100 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes 
client Apr 21 13:24:40.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-9032eb36-ee00-4149-9aed-689e396deb82 STEP: Creating a pod to test consume secrets Apr 21 13:24:40.674: INFO: Waiting up to 5m0s for pod "pod-secrets-5b4639c8-59a3-414e-8b91-0c0a77305133" in namespace "secrets-6878" to be "success or failure" Apr 21 13:24:40.678: INFO: Pod "pod-secrets-5b4639c8-59a3-414e-8b91-0c0a77305133": Phase="Pending", Reason="", readiness=false. Elapsed: 3.200945ms Apr 21 13:24:42.682: INFO: Pod "pod-secrets-5b4639c8-59a3-414e-8b91-0c0a77305133": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007578872s Apr 21 13:24:44.687: INFO: Pod "pod-secrets-5b4639c8-59a3-414e-8b91-0c0a77305133": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012076716s STEP: Saw pod success Apr 21 13:24:44.687: INFO: Pod "pod-secrets-5b4639c8-59a3-414e-8b91-0c0a77305133" satisfied condition "success or failure" Apr 21 13:24:44.690: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-5b4639c8-59a3-414e-8b91-0c0a77305133 container secret-volume-test: STEP: delete the pod Apr 21 13:24:44.709: INFO: Waiting for pod pod-secrets-5b4639c8-59a3-414e-8b91-0c0a77305133 to disappear Apr 21 13:24:44.714: INFO: Pod pod-secrets-5b4639c8-59a3-414e-8b91-0c0a77305133 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:24:44.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6878" for this suite. 
Apr 21 13:24:50.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:24:50.800: INFO: namespace secrets-6878 deletion completed in 6.082324167s • [SLOW TEST:10.228 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:24:50.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Apr 21 13:24:57.836: INFO: 0 pods remaining Apr 21 13:24:57.836: INFO: 0 pods has nil DeletionTimestamp Apr 21 13:24:57.836: INFO: Apr 21 13:24:58.380: INFO: 0 pods remaining Apr 21 13:24:58.380: INFO: 0 pods has nil DeletionTimestamp Apr 21 13:24:58.380: INFO: STEP: Gathering metrics W0421 13:24:59.514970 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 21 13:24:59.515: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:24:59.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8400" for this suite. 
Apr 21 13:25:05.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:25:05.621: INFO: namespace gc-8400 deletion completed in 6.103447236s • [SLOW TEST:14.822 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:25:05.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:25:09.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9306" for this suite. 
Apr 21 13:25:15.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:25:15.819: INFO: namespace kubelet-test-9306 deletion completed in 6.0932899s • [SLOW TEST:10.197 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:25:15.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 21 13:25:15.936: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3890,SelfLink:/api/v1/namespaces/watch-3890/configmaps/e2e-watch-test-watch-closed,UID:0f486f1e-809a-43e2-83a5-3bf96a02769c,ResourceVersion:6641677,Generation:0,CreationTimestamp:2020-04-21 13:25:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 21 13:25:15.936: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3890,SelfLink:/api/v1/namespaces/watch-3890/configmaps/e2e-watch-test-watch-closed,UID:0f486f1e-809a-43e2-83a5-3bf96a02769c,ResourceVersion:6641678,Generation:0,CreationTimestamp:2020-04-21 13:25:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 21 13:25:15.996: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3890,SelfLink:/api/v1/namespaces/watch-3890/configmaps/e2e-watch-test-watch-closed,UID:0f486f1e-809a-43e2-83a5-3bf96a02769c,ResourceVersion:6641679,Generation:0,CreationTimestamp:2020-04-21 13:25:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 21 13:25:15.996: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3890,SelfLink:/api/v1/namespaces/watch-3890/configmaps/e2e-watch-test-watch-closed,UID:0f486f1e-809a-43e2-83a5-3bf96a02769c,ResourceVersion:6641680,Generation:0,CreationTimestamp:2020-04-21 13:25:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:25:15.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3890" for this suite. 
Apr 21 13:25:22.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:25:22.092: INFO: namespace watch-3890 deletion completed in 6.078912078s • [SLOW TEST:6.272 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:25:22.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:25:28.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5879" for this suite. 
Apr 21 13:26:14.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:26:14.360: INFO: namespace kubelet-test-5879 deletion completed in 46.125731321s
• [SLOW TEST:52.268 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:26:14.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 21 13:26:14.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4654'
Apr 21 13:26:16.705: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 21 13:26:16.705: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Apr 21 13:26:18.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4654'
Apr 21 13:26:18.819: INFO: stderr: ""
Apr 21 13:26:18.819: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:26:18.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4654" for this suite.
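The deprecation warning above is about the implicit Deployment generator. An explicit manifest is the replacement the warning points toward; the following is a rough sketch of the Deployment that `kubectl run --generator=deployment/apps.v1` produces for this image (labels and field details are approximate, not captured from the cluster):

```yaml
# Approximate equivalent of the deprecated generator's output — a sketch,
# not a dump of the object the test actually created.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  labels:
    run: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```

Applying such a manifest with `kubectl create -f` is the non-deprecated path the warning recommends.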
Apr 21 13:27:40.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:27:40.936: INFO: namespace kubectl-4654 deletion completed in 1m22.11386232s
• [SLOW TEST:86.575 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:27:40.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-e8c9d9cd-ae94-4b92-bc8f-aef04210de7a in namespace container-probe-7458
Apr 21 13:27:45.026: INFO: Started pod test-webserver-e8c9d9cd-ae94-4b92-bc8f-aef04210de7a in namespace container-probe-7458
STEP: checking the pod's current state and verifying that restartCount is present
Apr 21 13:27:45.029: INFO: Initial restart count of pod test-webserver-e8c9d9cd-ae94-4b92-bc8f-aef04210de7a is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:31:45.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7458" for this suite.
Apr 21 13:31:51.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:31:51.965: INFO: namespace container-probe-7458 deletion completed in 6.130282764s
• [SLOW TEST:251.029 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:31:51.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 21 13:31:52.055: INFO: Waiting up to 5m0s for pod "downward-api-72f30773-68b7-45b9-84e2-7a23b3e44b1f" in namespace "downward-api-75" to be "success or failure"
Apr 21 13:31:52.062: INFO: Pod "downward-api-72f30773-68b7-45b9-84e2-7a23b3e44b1f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.892312ms
Apr 21 13:31:54.065: INFO: Pod "downward-api-72f30773-68b7-45b9-84e2-7a23b3e44b1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009747564s
Apr 21 13:31:56.069: INFO: Pod "downward-api-72f30773-68b7-45b9-84e2-7a23b3e44b1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013595038s
STEP: Saw pod success
Apr 21 13:31:56.069: INFO: Pod "downward-api-72f30773-68b7-45b9-84e2-7a23b3e44b1f" satisfied condition "success or failure"
Apr 21 13:31:56.072: INFO: Trying to get logs from node iruya-worker2 pod downward-api-72f30773-68b7-45b9-84e2-7a23b3e44b1f container dapi-container:
STEP: delete the pod
Apr 21 13:31:56.111: INFO: Waiting for pod downward-api-72f30773-68b7-45b9-84e2-7a23b3e44b1f to disappear
Apr 21 13:31:56.122: INFO: Pod downward-api-72f30773-68b7-45b9-84e2-7a23b3e44b1f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:31:56.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-75" for this suite.
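The Downward API test above relies on `resourceFieldRef` env vars falling back to node allocatable when the container declares no limits. A minimal sketch of such a pod (names are illustrative, not the test's actual spec):

```yaml
# Illustrative sketch — when limits.cpu/limits.memory are not set on the
# container, these env vars resolve to the node's allocatable values,
# which is what the conformance test verifies.
apiVersion: v1
kind: Pod
metadata:
  name: dapi-default-limits   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```

The test then greps the container's log for the expected values.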
Apr 21 13:32:02.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:32:02.224: INFO: namespace downward-api-75 deletion completed in 6.098596904s
• [SLOW TEST:10.259 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:32:02.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-1933/configmap-test-888d58f4-6c79-42cb-bce4-1fd99df0f1e0
STEP: Creating a pod to test consume configMaps
Apr 21 13:32:02.307: INFO: Waiting up to 5m0s for pod "pod-configmaps-eccacc44-8f91-437c-8d7e-cab19da44730" in namespace "configmap-1933" to be "success or failure"
Apr 21 13:32:02.320: INFO: Pod "pod-configmaps-eccacc44-8f91-437c-8d7e-cab19da44730": Phase="Pending", Reason="", readiness=false. Elapsed: 12.647027ms
Apr 21 13:32:04.365: INFO: Pod "pod-configmaps-eccacc44-8f91-437c-8d7e-cab19da44730": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057795741s
Apr 21 13:32:06.369: INFO: Pod "pod-configmaps-eccacc44-8f91-437c-8d7e-cab19da44730": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062180814s
STEP: Saw pod success
Apr 21 13:32:06.370: INFO: Pod "pod-configmaps-eccacc44-8f91-437c-8d7e-cab19da44730" satisfied condition "success or failure"
Apr 21 13:32:06.372: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-eccacc44-8f91-437c-8d7e-cab19da44730 container env-test:
STEP: delete the pod
Apr 21 13:32:06.393: INFO: Waiting for pod pod-configmaps-eccacc44-8f91-437c-8d7e-cab19da44730 to disappear
Apr 21 13:32:06.397: INFO: Pod pod-configmaps-eccacc44-8f91-437c-8d7e-cab19da44730 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:32:06.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1933" for this suite.
Apr 21 13:32:12.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:32:12.495: INFO: namespace configmap-1933 deletion completed in 6.093701426s
• [SLOW TEST:10.270 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:32:12.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 21 13:32:12.576: INFO: Waiting up to 5m0s for pod "downwardapi-volume-79a723ab-861a-4668-bc5b-604e84dae4d7" in namespace "downward-api-6259" to be "success or failure"
Apr 21 13:32:12.583: INFO: Pod "downwardapi-volume-79a723ab-861a-4668-bc5b-604e84dae4d7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.082778ms
Apr 21 13:32:14.588: INFO: Pod "downwardapi-volume-79a723ab-861a-4668-bc5b-604e84dae4d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011199743s
Apr 21 13:32:16.592: INFO: Pod "downwardapi-volume-79a723ab-861a-4668-bc5b-604e84dae4d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015828221s
STEP: Saw pod success
Apr 21 13:32:16.592: INFO: Pod "downwardapi-volume-79a723ab-861a-4668-bc5b-604e84dae4d7" satisfied condition "success or failure"
Apr 21 13:32:16.596: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-79a723ab-861a-4668-bc5b-604e84dae4d7 container client-container:
STEP: delete the pod
Apr 21 13:32:16.614: INFO: Waiting for pod downwardapi-volume-79a723ab-861a-4668-bc5b-604e84dae4d7 to disappear
Apr 21 13:32:16.619: INFO: Pod downwardapi-volume-79a723ab-861a-4668-bc5b-604e84dae4d7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:32:16.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6259" for this suite.
Apr 21 13:32:22.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:32:22.726: INFO: namespace downward-api-6259 deletion completed in 6.104344245s
• [SLOW TEST:10.231 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:32:22.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Apr 21 13:32:23.308: INFO: created pod pod-service-account-defaultsa
Apr 21 13:32:23.308: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Apr 21 13:32:23.315: INFO: created pod pod-service-account-mountsa
Apr 21 13:32:23.315: INFO: pod pod-service-account-mountsa service account token volume mount: true
Apr 21 13:32:23.334: INFO: created pod pod-service-account-nomountsa
Apr 21 13:32:23.334: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Apr 21 13:32:23.350: INFO: created pod pod-service-account-defaultsa-mountspec
Apr 21 13:32:23.350: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Apr 21 13:32:23.444: INFO: created pod pod-service-account-mountsa-mountspec
Apr 21 13:32:23.444: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Apr 21 13:32:23.452: INFO: created pod pod-service-account-nomountsa-mountspec
Apr 21 13:32:23.452: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Apr 21 13:32:23.479: INFO: created pod pod-service-account-defaultsa-nomountspec
Apr 21 13:32:23.479: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Apr 21 13:32:23.510: INFO: created pod pod-service-account-mountsa-nomountspec
Apr 21 13:32:23.510: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Apr 21 13:32:23.524: INFO: created pod pod-service-account-nomountsa-nomountspec
Apr 21 13:32:23.524: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:32:23.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1628" for this suite.
Apr 21 13:32:49.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:32:49.725: INFO: namespace svcaccounts-1628 deletion completed in 26.13410809s
• [SLOW TEST:26.999 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:32:49.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-2192, will wait for the garbage collector to delete the pods
Apr 21 13:32:53.888: INFO: Deleting Job.batch foo took: 7.346378ms
Apr 21 13:32:54.188: INFO: Terminating Job.batch foo pods took: 300.271006ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:33:31.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2192" for this suite.
Apr 21 13:33:38.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:33:38.088: INFO: namespace job-2192 deletion completed in 6.092826014s
• [SLOW TEST:48.362 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:33:38.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0421 13:33:49.211840 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
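The "should delete a job" run a few entries above deletes Job.batch `foo` and waits for the garbage collector to reap its pods. A minimal Job of that shape might look as follows; the parallelism, completions, image, and command are illustrative, not the suite's actual spec (only the name `foo` appears in the log):

```yaml
# Illustrative sketch — a Job whose foreground deletion cascades to its
# pods via the garbage collector, as the test above exercises.
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  parallelism: 2     # hypothetical; the test asserts active pods == parallelism
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
```

Deleting it with `kubectl delete job foo` (default propagation) leaves pod cleanup to the garbage collector, matching the "Terminating Job.batch foo pods took" entries in the log.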
Apr 21 13:33:49.211: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:33:49.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-203" for this suite.
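The garbage-collector scenario above gives half the pods a second owner, so deleting `simpletest-rc-to-be-deleted` must not remove pods that `simpletest-rc-to-stay` still owns. Such a dual-owner pod's metadata could be sketched like this (the UIDs are placeholders, and the exact flags the test sets are not shown in the log):

```yaml
# Illustrative metadata sketch — UIDs are hypothetical placeholders.
metadata:
  name: simpletest-pod
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: 00000000-0000-0000-0000-000000000001   # placeholder
    controller: true
    blockOwnerDeletion: true
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: 00000000-0000-0000-0000-000000000002   # placeholder
```

A dependent is only garbage-collected once every owner reference points at a deleted object, which is the invariant this conformance test checks.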
Apr 21 13:33:57.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:33:57.304: INFO: namespace gc-203 deletion completed in 8.088558287s
• [SLOW TEST:19.214 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:33:57.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 21 13:33:57.367: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0fcd8336-91b8-4f7b-9b89-dfd72b296169" in namespace "projected-6064" to be "success or failure"
Apr 21 13:33:57.370: INFO: Pod "downwardapi-volume-0fcd8336-91b8-4f7b-9b89-dfd72b296169": Phase="Pending", Reason="", readiness=false. Elapsed: 3.26366ms
Apr 21 13:33:59.377: INFO: Pod "downwardapi-volume-0fcd8336-91b8-4f7b-9b89-dfd72b296169": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009753129s
Apr 21 13:34:01.381: INFO: Pod "downwardapi-volume-0fcd8336-91b8-4f7b-9b89-dfd72b296169": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01413455s
STEP: Saw pod success
Apr 21 13:34:01.381: INFO: Pod "downwardapi-volume-0fcd8336-91b8-4f7b-9b89-dfd72b296169" satisfied condition "success or failure"
Apr 21 13:34:01.384: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-0fcd8336-91b8-4f7b-9b89-dfd72b296169 container client-container:
STEP: delete the pod
Apr 21 13:34:01.402: INFO: Waiting for pod downwardapi-volume-0fcd8336-91b8-4f7b-9b89-dfd72b296169 to disappear
Apr 21 13:34:01.406: INFO: Pod downwardapi-volume-0fcd8336-91b8-4f7b-9b89-dfd72b296169 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:34:01.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6064" for this suite.
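The projected downwardAPI test above exposes the container's CPU limit through a projected volume rather than an env var. A minimal sketch of that kind of pod (names, mount path, and the concrete limit are illustrative, not the test's actual spec):

```yaml
# Illustrative sketch — projects limits.cpu of client-container into a file.
apiVersion: v1
kind: Pod
metadata:
  name: projected-cpu-limit   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"           # hypothetical value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m
```

The test then reads the file from the container log and compares it against the declared limit.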
Apr 21 13:34:07.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:34:07.499: INFO: namespace projected-6064 deletion completed in 6.090671578s
• [SLOW TEST:10.195 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:34:07.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 21 13:34:11.629: INFO: Waiting up to 5m0s for pod "client-envvars-af763cbc-fbf9-4740-8cd2-d0104ea6ef7b" in namespace "pods-5143" to be "success or failure"
Apr 21 13:34:11.646: INFO: Pod "client-envvars-af763cbc-fbf9-4740-8cd2-d0104ea6ef7b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.266748ms
Apr 21 13:34:13.663: INFO: Pod "client-envvars-af763cbc-fbf9-4740-8cd2-d0104ea6ef7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034658298s
Apr 21 13:34:15.668: INFO: Pod "client-envvars-af763cbc-fbf9-4740-8cd2-d0104ea6ef7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039077559s
STEP: Saw pod success
Apr 21 13:34:15.668: INFO: Pod "client-envvars-af763cbc-fbf9-4740-8cd2-d0104ea6ef7b" satisfied condition "success or failure"
Apr 21 13:34:15.671: INFO: Trying to get logs from node iruya-worker pod client-envvars-af763cbc-fbf9-4740-8cd2-d0104ea6ef7b container env3cont:
STEP: delete the pod
Apr 21 13:34:15.700: INFO: Waiting for pod client-envvars-af763cbc-fbf9-4740-8cd2-d0104ea6ef7b to disappear
Apr 21 13:34:15.712: INFO: Pod client-envvars-af763cbc-fbf9-4740-8cd2-d0104ea6ef7b no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:34:15.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5143" for this suite.
Apr 21 13:35:09.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:35:09.830: INFO: namespace pods-5143 deletion completed in 54.115141807s
• [SLOW TEST:62.330 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:35:09.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 21 13:35:09.902: INFO: Waiting up to 5m0s for pod "pod-9944fb50-c944-4099-a224-e0c0791e1cce" in namespace "emptydir-5871" to be "success or failure"
Apr 21 13:35:09.920: INFO: Pod "pod-9944fb50-c944-4099-a224-e0c0791e1cce": Phase="Pending", Reason="", readiness=false. Elapsed: 17.972523ms
Apr 21 13:35:12.242: INFO: Pod "pod-9944fb50-c944-4099-a224-e0c0791e1cce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.339927543s
Apr 21 13:35:14.247: INFO: Pod "pod-9944fb50-c944-4099-a224-e0c0791e1cce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.344536205s
STEP: Saw pod success
Apr 21 13:35:14.247: INFO: Pod "pod-9944fb50-c944-4099-a224-e0c0791e1cce" satisfied condition "success or failure"
Apr 21 13:35:14.250: INFO: Trying to get logs from node iruya-worker pod pod-9944fb50-c944-4099-a224-e0c0791e1cce container test-container:
STEP: delete the pod
Apr 21 13:35:14.270: INFO: Waiting for pod pod-9944fb50-c944-4099-a224-e0c0791e1cce to disappear
Apr 21 13:35:14.292: INFO: Pod pod-9944fb50-c944-4099-a224-e0c0791e1cce no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:35:14.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5871" for this suite.
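The (root,0666,default) emptyDir case above mounts a scratch volume on the node's default medium and checks file modes inside it. A minimal sketch of a pod in that shape (name, path, and command are illustrative, not the suite's actual spec):

```yaml
# Illustrative sketch — emptyDir on the default medium; the tmpfs variant
# of this test would use `emptyDir: {medium: Memory}` instead.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
```

The test asserts the expected mode bits in the container's `ls -l` output; running as root is what the "(root,…)" part of the spec name refers to.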
Apr 21 13:35:20.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:35:20.457: INFO: namespace emptydir-5871 deletion completed in 6.113100721s
• [SLOW TEST:10.627 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:35:20.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-85bb522b-e0fb-4800-905f-efb114dd2790
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:35:20.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6382" for this suite.
Apr 21 13:35:26.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:35:26.620: INFO: namespace secrets-6382 deletion completed in 6.083165641s • [SLOW TEST:6.163 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:35:26.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 21 13:35:26.717: INFO: Waiting up to 5m0s for pod "pod-f06699db-4309-464e-94d8-4b688039deb0" in namespace "emptydir-2528" to be "success or failure" Apr 21 13:35:26.719: INFO: Pod "pod-f06699db-4309-464e-94d8-4b688039deb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051167ms Apr 21 13:35:28.723: INFO: Pod "pod-f06699db-4309-464e-94d8-4b688039deb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006587624s Apr 21 13:35:30.728: INFO: Pod "pod-f06699db-4309-464e-94d8-4b688039deb0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01141758s STEP: Saw pod success Apr 21 13:35:30.728: INFO: Pod "pod-f06699db-4309-464e-94d8-4b688039deb0" satisfied condition "success or failure" Apr 21 13:35:30.731: INFO: Trying to get logs from node iruya-worker2 pod pod-f06699db-4309-464e-94d8-4b688039deb0 container test-container: STEP: delete the pod Apr 21 13:35:30.766: INFO: Waiting for pod pod-f06699db-4309-464e-94d8-4b688039deb0 to disappear Apr 21 13:35:30.779: INFO: Pod pod-f06699db-4309-464e-94d8-4b688039deb0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:35:30.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2528" for this suite. Apr 21 13:35:36.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:35:36.870: INFO: namespace emptydir-2528 deletion completed in 6.088283893s • [SLOW TEST:10.250 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:35:36.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from 
pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-78efe9d7-227a-4fec-a29b-a78549f1957b STEP: Creating a pod to test consume configMaps Apr 21 13:35:36.968: INFO: Waiting up to 5m0s for pod "pod-configmaps-8dee59b7-073c-4587-ae21-66b1c0dc0908" in namespace "configmap-1549" to be "success or failure" Apr 21 13:35:36.971: INFO: Pod "pod-configmaps-8dee59b7-073c-4587-ae21-66b1c0dc0908": Phase="Pending", Reason="", readiness=false. Elapsed: 3.129646ms Apr 21 13:35:38.975: INFO: Pod "pod-configmaps-8dee59b7-073c-4587-ae21-66b1c0dc0908": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007316607s Apr 21 13:35:40.980: INFO: Pod "pod-configmaps-8dee59b7-073c-4587-ae21-66b1c0dc0908": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011930036s STEP: Saw pod success Apr 21 13:35:40.980: INFO: Pod "pod-configmaps-8dee59b7-073c-4587-ae21-66b1c0dc0908" satisfied condition "success or failure" Apr 21 13:35:40.984: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-8dee59b7-073c-4587-ae21-66b1c0dc0908 container configmap-volume-test: STEP: delete the pod Apr 21 13:35:41.003: INFO: Waiting for pod pod-configmaps-8dee59b7-073c-4587-ae21-66b1c0dc0908 to disappear Apr 21 13:35:41.022: INFO: Pod pod-configmaps-8dee59b7-073c-4587-ae21-66b1c0dc0908 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:35:41.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1549" for this suite. 
Apr 21 13:35:47.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:35:47.118: INFO: namespace configmap-1549 deletion completed in 6.092922067s • [SLOW TEST:10.248 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:35:47.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-2d333d62-d649-49d0-b24b-aea4d62b4e12 STEP: Creating secret with name secret-projected-all-test-volume-31ad33bf-1e2c-462e-abd8-d9deb71d4ba4 STEP: Creating a pod to test Check all projections for projected volume plugin Apr 21 13:35:47.242: INFO: Waiting up to 5m0s for pod "projected-volume-395afd7d-f91c-4fcd-ac26-4a24e4a783d4" in namespace "projected-9211" to be "success or failure" Apr 21 13:35:47.248: INFO: Pod "projected-volume-395afd7d-f91c-4fcd-ac26-4a24e4a783d4": 
Phase="Pending", Reason="", readiness=false. Elapsed: 5.481336ms Apr 21 13:35:49.271: INFO: Pod "projected-volume-395afd7d-f91c-4fcd-ac26-4a24e4a783d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029369248s Apr 21 13:35:51.276: INFO: Pod "projected-volume-395afd7d-f91c-4fcd-ac26-4a24e4a783d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033390306s STEP: Saw pod success Apr 21 13:35:51.276: INFO: Pod "projected-volume-395afd7d-f91c-4fcd-ac26-4a24e4a783d4" satisfied condition "success or failure" Apr 21 13:35:51.279: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-395afd7d-f91c-4fcd-ac26-4a24e4a783d4 container projected-all-volume-test: STEP: delete the pod Apr 21 13:35:51.309: INFO: Waiting for pod projected-volume-395afd7d-f91c-4fcd-ac26-4a24e4a783d4 to disappear Apr 21 13:35:51.324: INFO: Pod projected-volume-395afd7d-f91c-4fcd-ac26-4a24e4a783d4 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:35:51.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9211" for this suite. 
Apr 21 13:35:57.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:35:57.413: INFO: namespace projected-9211 deletion completed in 6.085171493s • [SLOW TEST:10.294 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:35:57.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 21 13:36:02.024: INFO: Successfully updated pod "annotationupdatec62ee80b-d068-402f-8cc3-f10227096ea5" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:36:04.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"projected-2423" for this suite. Apr 21 13:36:26.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:36:26.160: INFO: namespace projected-2423 deletion completed in 22.098243861s • [SLOW TEST:28.746 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:36:26.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 21 13:36:26.222: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 21 13:36:26.242: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 21 13:36:31.246: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 21 13:36:31.247: INFO: Creating deployment "test-rolling-update-deployment" Apr 21 13:36:31.251: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from 
the one the adopted replica set "test-rolling-update-controller" has Apr 21 13:36:31.282: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 21 13:36:33.290: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 21 13:36:33.292: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723072991, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723072991, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723072991, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723072991, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 21 13:36:35.297: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 21 13:36:35.307: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-7821,SelfLink:/apis/apps/v1/namespaces/deployment-7821/deployments/test-rolling-update-deployment,UID:4ae1701d-8acb-4f2f-a699-48128b80d04a,ResourceVersion:6643745,Generation:1,CreationTimestamp:2020-04-21 13:36:31 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-21 13:36:31 +0000 UTC 2020-04-21 13:36:31 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-21 13:36:35 +0000 UTC 2020-04-21 13:36:31 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 21 13:36:35.310: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-7821,SelfLink:/apis/apps/v1/namespaces/deployment-7821/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:539ead6f-8c9f-4d7b-9b31-2c98132008f3,ResourceVersion:6643733,Generation:1,CreationTimestamp:2020-04-21 13:36:31 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 4ae1701d-8acb-4f2f-a699-48128b80d04a 0xc002d36d87 0xc002d36d88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 21 13:36:35.310: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 21 13:36:35.310: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-7821,SelfLink:/apis/apps/v1/namespaces/deployment-7821/replicasets/test-rolling-update-controller,UID:0b2260b7-cbfe-40f1-8423-b5819e57f82e,ResourceVersion:6643743,Generation:2,CreationTimestamp:2020-04-21 13:36:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 4ae1701d-8acb-4f2f-a699-48128b80d04a 0xc002d36bb7 0xc002d36bb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 21 13:36:35.313: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-n7fx8" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-n7fx8,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-7821,SelfLink:/api/v1/namespaces/deployment-7821/pods/test-rolling-update-deployment-79f6b9d75c-n7fx8,UID:158fbeb4-6d19-4a68-a1a7-82617653cd4f,ResourceVersion:6643732,Generation:0,CreationTimestamp:2020-04-21 13:36:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 539ead6f-8c9f-4d7b-9b31-2c98132008f3 0xc0023fe787 0xc0023fe788}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rr2jv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rr2jv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-rr2jv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023fe810} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023fe830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:36:31 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:36:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:36:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:36:31 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.205,StartTime:2020-04-21 13:36:31 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-21 13:36:34 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://aebe8a545d8d8511deb6cf87f960767fa22ad398b642e6483b28d23fab9b29b6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:36:35.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-7821" for this suite. Apr 21 13:36:41.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:36:41.513: INFO: namespace deployment-7821 deletion completed in 6.196704259s • [SLOW TEST:15.353 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:36:41.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 21 13:36:49.609: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 21 13:36:49.647: INFO: Pod pod-with-poststart-http-hook still exists Apr 21 13:36:51.647: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 21 13:36:51.652: INFO: Pod pod-with-poststart-http-hook still exists Apr 21 13:36:53.647: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 21 13:36:53.651: INFO: Pod pod-with-poststart-http-hook still exists Apr 21 13:36:55.647: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 21 13:36:55.651: INFO: Pod pod-with-poststart-http-hook still exists Apr 21 13:36:57.647: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 21 13:36:57.652: INFO: Pod pod-with-poststart-http-hook still exists Apr 21 13:36:59.647: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 21 13:36:59.651: INFO: Pod pod-with-poststart-http-hook still exists Apr 21 13:37:01.647: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 21 13:37:01.651: INFO: Pod pod-with-poststart-http-hook still exists Apr 21 13:37:03.647: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 21 13:37:03.652: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:37:03.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8794" for this suite. 
Apr 21 13:37:25.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:37:25.785: INFO: namespace container-lifecycle-hook-8794 deletion completed in 22.129198289s
• [SLOW TEST:44.272 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Secrets
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:37:25.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-8999/secret-test-9e26d801-eaa3-4c03-b3f7-698615fbb228
STEP: Creating a pod to test consume secrets
Apr 21 13:37:25.871: INFO: Waiting up to 5m0s for pod "pod-configmaps-3e1650af-b680-48ac-89fa-7f483240777e" in namespace "secrets-8999" to be "success or failure"
Apr 21 13:37:25.876: INFO: Pod "pod-configmaps-3e1650af-b680-48ac-89fa-7f483240777e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.661019ms
Apr 21 13:37:27.903: INFO: Pod "pod-configmaps-3e1650af-b680-48ac-89fa-7f483240777e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031372603s
Apr 21 13:37:30.203: INFO: Pod "pod-configmaps-3e1650af-b680-48ac-89fa-7f483240777e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.33120321s
STEP: Saw pod success
Apr 21 13:37:30.203: INFO: Pod "pod-configmaps-3e1650af-b680-48ac-89fa-7f483240777e" satisfied condition "success or failure"
Apr 21 13:37:30.211: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-3e1650af-b680-48ac-89fa-7f483240777e container env-test:
STEP: delete the pod
Apr 21 13:37:30.279: INFO: Waiting for pod pod-configmaps-3e1650af-b680-48ac-89fa-7f483240777e to disappear
Apr 21 13:37:30.699: INFO: Pod pod-configmaps-3e1650af-b680-48ac-89fa-7f483240777e no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:37:30.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8999" for this suite.
Apr 21 13:37:36.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:37:36.794: INFO: namespace secrets-8999 deletion completed in 6.091105232s
• [SLOW TEST:11.008 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs
  should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:37:36.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Apr 21 13:37:36.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7757'
Apr 21 13:37:39.371: INFO: stderr: ""
Apr 21 13:37:39.371: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Apr 21 13:37:40.375: INFO: Selector matched 1 pods for map[app:redis]
Apr 21 13:37:40.375: INFO: Found 0 / 1
Apr 21 13:37:41.376: INFO: Selector matched 1 pods for map[app:redis]
Apr 21 13:37:41.376: INFO: Found 0 / 1
Apr 21 13:37:42.375: INFO: Selector matched 1 pods for map[app:redis]
Apr 21 13:37:42.375: INFO: Found 1 / 1
Apr 21 13:37:42.375: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 21 13:37:42.377: INFO: Selector matched 1 pods for map[app:redis]
Apr 21 13:37:42.377: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
Apr 21 13:37:42.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-br67q redis-master --namespace=kubectl-7757'
Apr 21 13:37:42.486: INFO: stderr: ""
Apr 21 13:37:42.486: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 21 Apr 13:37:41.707 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 21 Apr 13:37:41.707 # Server started, Redis version 3.2.12\n1:M 21 Apr 13:37:41.707 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 21 Apr 13:37:41.707 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Apr 21 13:37:42.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-br67q redis-master --namespace=kubectl-7757 --tail=1'
Apr 21 13:37:42.597: INFO: stderr: ""
Apr 21 13:37:42.597: INFO: stdout: "1:M 21 Apr 13:37:41.707 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Apr 21 13:37:42.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-br67q redis-master --namespace=kubectl-7757 --limit-bytes=1'
Apr 21 13:37:42.706: INFO: stderr: ""
Apr 21 13:37:42.706: INFO: stdout: " "
STEP: exposing timestamps
Apr 21 13:37:42.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-br67q redis-master --namespace=kubectl-7757 --tail=1 --timestamps'
Apr 21 13:37:42.811: INFO: stderr: ""
Apr 21 13:37:42.811: INFO: stdout: "2020-04-21T13:37:41.707947158Z 1:M 21 Apr 13:37:41.707 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Apr 21 13:37:45.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-br67q redis-master --namespace=kubectl-7757 --since=1s'
Apr 21 13:37:45.428: INFO: stderr: ""
Apr 21 13:37:45.428: INFO: stdout: ""
Apr 21 13:37:45.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-br67q redis-master --namespace=kubectl-7757 --since=24h'
Apr 21 13:37:45.534: INFO: stderr: ""
Apr 21 13:37:45.534: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 21 Apr 13:37:41.707 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 21 Apr 13:37:41.707 # Server started, Redis version 3.2.12\n1:M 21 Apr 13:37:41.707 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 21 Apr 13:37:41.707 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Apr 21 13:37:45.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7757'
Apr 21 13:37:45.624: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 21 13:37:45.624: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Apr 21 13:37:45.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-7757'
Apr 21 13:37:45.721: INFO: stderr: "No resources found.\n"
Apr 21 13:37:45.721: INFO: stdout: ""
Apr 21 13:37:45.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-7757 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 21 13:37:45.809: INFO: stderr: ""
Apr 21 13:37:45.809: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:37:45.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7757" for this suite.
Apr 21 13:38:07.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:38:07.907: INFO: namespace kubectl-7757 deletion completed in 22.093916362s
• [SLOW TEST:31.113 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:38:07.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:38:13.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6708" for this suite.
Apr 21 13:38:19.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:38:19.605: INFO: namespace watch-6708 deletion completed in 6.19143393s
• [SLOW TEST:11.698 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Pods
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:38:19.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 21 13:38:19.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:38:23.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9603" for this suite.
Apr 21 13:39:13.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:39:13.969: INFO: namespace pods-9603 deletion completed in 50.100935804s
• [SLOW TEST:54.364 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:39:13.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 21 13:39:22.094: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 21 13:39:22.099: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 21 13:39:24.100: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 21 13:39:24.120: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 21 13:39:26.100: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 21 13:39:26.104: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 21 13:39:28.100: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 21 13:39:28.121: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 21 13:39:30.100: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 21 13:39:30.126: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 21 13:39:32.100: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 21 13:39:32.104: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 21 13:39:34.100: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 21 13:39:34.103: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 21 13:39:36.100: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 21 13:39:36.104: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 21 13:39:38.100: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 21 13:39:38.104: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 21 13:39:40.100: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 21 13:39:40.104: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 21 13:39:42.100: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 21 13:39:42.112: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:39:42.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8667" for this suite.
Apr 21 13:40:04.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:40:04.249: INFO: namespace container-lifecycle-hook-8667 deletion completed in 22.127954796s
• [SLOW TEST:50.279 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Pods
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:40:04.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 21 13:40:04.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:40:08.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5086" for this suite.
Apr 21 13:40:54.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:40:54.482: INFO: namespace pods-5086 deletion completed in 46.106209395s
• [SLOW TEST:50.232 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:40:54.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-bc585d30-f24a-4953-9ab7-9f8e45c56460
STEP: Creating a pod to test consume secrets
Apr 21 13:40:54.555: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-414c814d-6496-40c5-aff1-626e5038ee50" in namespace "projected-9293" to be "success or failure"
Apr 21 13:40:54.559: INFO: Pod "pod-projected-secrets-414c814d-6496-40c5-aff1-626e5038ee50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.215392ms
Apr 21 13:40:56.564: INFO: Pod "pod-projected-secrets-414c814d-6496-40c5-aff1-626e5038ee50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008588637s
Apr 21 13:40:58.568: INFO: Pod "pod-projected-secrets-414c814d-6496-40c5-aff1-626e5038ee50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013164595s
STEP: Saw pod success
Apr 21 13:40:58.568: INFO: Pod "pod-projected-secrets-414c814d-6496-40c5-aff1-626e5038ee50" satisfied condition "success or failure"
Apr 21 13:40:58.572: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-414c814d-6496-40c5-aff1-626e5038ee50 container projected-secret-volume-test:
STEP: delete the pod
Apr 21 13:40:58.596: INFO: Waiting for pod pod-projected-secrets-414c814d-6496-40c5-aff1-626e5038ee50 to disappear
Apr 21 13:40:58.602: INFO: Pod pod-projected-secrets-414c814d-6496-40c5-aff1-626e5038ee50 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:40:58.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9293" for this suite.
Apr 21 13:41:04.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:41:04.710: INFO: namespace projected-9293 deletion completed in 6.101628121s
• [SLOW TEST:10.228 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:41:04.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-65fea99a-d47b-4760-8269-2b9b696eb015
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-65fea99a-d47b-4760-8269-2b9b696eb015
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:42:15.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6630" for this suite.
Apr 21 13:42:37.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:42:37.250: INFO: namespace configmap-6630 deletion completed in 22.088288561s
• [SLOW TEST:92.540 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:42:37.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Apr 21 13:42:37.308: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 21 13:42:37.315: INFO: Waiting for terminating namespaces to be deleted...
Apr 21 13:42:37.318: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Apr 21 13:42:37.323: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 21 13:42:37.323: INFO: Container kube-proxy ready: true, restart count 0
Apr 21 13:42:37.323: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 21 13:42:37.323: INFO: Container kindnet-cni ready: true, restart count 0
Apr 21 13:42:37.323: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Apr 21 13:42:37.330: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
Apr 21 13:42:37.330: INFO: Container coredns ready: true, restart count 0
Apr 21 13:42:37.330: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
Apr 21 13:42:37.330: INFO: Container coredns ready: true, restart count 0
Apr 21 13:42:37.330: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
Apr 21 13:42:37.330: INFO: Container kube-proxy ready: true, restart count 0
Apr 21 13:42:37.330: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
Apr 21 13:42:37.330: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-7e955720-f7c0-460c-98a5-ce0e6755f247 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-7e955720-f7c0-460c-98a5-ce0e6755f247 off the node iruya-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-7e955720-f7c0-460c-98a5-ce0e6755f247
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:42:45.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6362" for this suite.
Apr 21 13:43:03.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:43:03.589: INFO: namespace sched-pred-6362 deletion completed in 18.09132036s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:26.338 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:43:03.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:43:34.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5969" for this suite.
Apr 21 13:43:40.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:43:40.519: INFO: namespace container-runtime-5969 deletion completed in 6.102668517s
• [SLOW TEST:36.930 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
when starting a container that exits
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:43:40.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 21 13:43:45.158: INFO: Successfully updated pod "pod-update-activedeadlineseconds-44c00d1a-7ab3-4fe6-991c-fa0e3da6c6a4"
Apr 21 13:43:45.158: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-44c00d1a-7ab3-4fe6-991c-fa0e3da6c6a4" in namespace "pods-6070" to be "terminated due to deadline exceeded"
Apr 21 13:43:45.162: INFO: Pod "pod-update-activedeadlineseconds-44c00d1a-7ab3-4fe6-991c-fa0e3da6c6a4": Phase="Running", Reason="", readiness=true. Elapsed: 3.964286ms
Apr 21 13:43:47.166: INFO: Pod "pod-update-activedeadlineseconds-44c00d1a-7ab3-4fe6-991c-fa0e3da6c6a4": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.007848751s
Apr 21 13:43:47.166: INFO: Pod "pod-update-activedeadlineseconds-44c00d1a-7ab3-4fe6-991c-fa0e3da6c6a4" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:43:47.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6070" for this suite.
Apr 21 13:43:53.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:43:53.301: INFO: namespace pods-6070 deletion completed in 6.131592199s
• [SLOW TEST:12.782 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:43:53.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-43c19ad8-ecc8-4418-8bcf-d915d759dafd
STEP: Creating a pod to test consume configMaps
Apr 21 13:43:53.374: INFO: Waiting up to 5m0s for pod "pod-configmaps-b1772d18-21e5-4653-9844-9417a9b77396" in namespace "configmap-5873" to be "success or failure"
Apr 21 13:43:53.378: INFO: Pod "pod-configmaps-b1772d18-21e5-4653-9844-9417a9b77396": Phase="Pending", Reason="", readiness=false. Elapsed: 3.90271ms
Apr 21 13:43:55.382: INFO: Pod "pod-configmaps-b1772d18-21e5-4653-9844-9417a9b77396": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007622695s
Apr 21 13:43:57.386: INFO: Pod "pod-configmaps-b1772d18-21e5-4653-9844-9417a9b77396": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011778926s
STEP: Saw pod success
Apr 21 13:43:57.386: INFO: Pod "pod-configmaps-b1772d18-21e5-4653-9844-9417a9b77396" satisfied condition "success or failure"
Apr 21 13:43:57.389: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-b1772d18-21e5-4653-9844-9417a9b77396 container configmap-volume-test:
STEP: delete the pod
Apr 21 13:43:57.427: INFO: Waiting for pod pod-configmaps-b1772d18-21e5-4653-9844-9417a9b77396 to disappear
Apr 21 13:43:57.453: INFO: Pod pod-configmaps-b1772d18-21e5-4653-9844-9417a9b77396 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:43:57.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5873" for this suite.
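The volume-with-mappings case above projects individual ConfigMap keys to chosen paths under the mount point via `items`. A minimal sketch of the kind of manifest such a test generates (all names and paths here are illustrative, not taken from the run):

```yaml
# Hypothetical ConfigMap and consuming pod; names are illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    # Print the mapped file, then exit; the pod ends Succeeded or Failed.
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: example-config
      items:
      - key: data-1            # the "mapping": project this key...
        path: path/to/data-1   # ...to a custom relative path
```

Without `items`, every key in the ConfigMap would appear under the mount point using its own name as the filename.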
Apr 21 13:44:03.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:44:03.539: INFO: namespace configmap-5873 deletion completed in 6.083201296s
• [SLOW TEST:10.236 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:44:03.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b798bb42-af77-43bd-a14f-1baef7b6bf13
STEP: Creating a pod to test consume secrets
Apr 21 13:44:03.674: INFO: Waiting up to 5m0s for pod "pod-secrets-29f7f5cd-8bf3-4894-9cdd-77d9aa2bddb5" in namespace "secrets-9449" to be "success or failure"
Apr 21 13:44:03.678: INFO: Pod "pod-secrets-29f7f5cd-8bf3-4894-9cdd-77d9aa2bddb5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.858072ms
Apr 21 13:44:05.717: INFO: Pod "pod-secrets-29f7f5cd-8bf3-4894-9cdd-77d9aa2bddb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042844961s
Apr 21 13:44:07.727: INFO: Pod "pod-secrets-29f7f5cd-8bf3-4894-9cdd-77d9aa2bddb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052625538s
STEP: Saw pod success
Apr 21 13:44:07.727: INFO: Pod "pod-secrets-29f7f5cd-8bf3-4894-9cdd-77d9aa2bddb5" satisfied condition "success or failure"
Apr 21 13:44:07.729: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-29f7f5cd-8bf3-4894-9cdd-77d9aa2bddb5 container secret-volume-test:
STEP: delete the pod
Apr 21 13:44:07.756: INFO: Waiting for pod pod-secrets-29f7f5cd-8bf3-4894-9cdd-77d9aa2bddb5 to disappear
Apr 21 13:44:07.773: INFO: Pod pod-secrets-29f7f5cd-8bf3-4894-9cdd-77d9aa2bddb5 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:44:07.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9449" for this suite.
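The Secrets case above combines a restrictive `defaultMode` on the secret volume with a pod-level `fsGroup`, so files stay readable by a non-root container through group ownership. A hedged sketch of such a manifest (UIDs, GIDs, and names are illustrative, not from the run):

```yaml
# Hypothetical manifest; the secret "example-secret" is assumed to exist.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000      # non-root user
    fsGroup: 2000        # volume files become group-owned by this GID
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["ls", "-l", "/etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: example-secret
      defaultMode: 0440  # readable by owner and the fsGroup only
```

The interaction being exercised: without `fsGroup`, a mode like `0400` on root-owned files would be unreadable to UID 1000.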
Apr 21 13:44:13.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:44:13.868: INFO: namespace secrets-9449 deletion completed in 6.092386491s
• [SLOW TEST:10.329 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:44:13.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Apr 21 13:44:13.943: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:44:19.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
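The InitContainer case above rests on the rule that all init containers must succeed before any app container starts; with `restartPolicy: Never`, a failing init container is never retried and the pod goes straight to phase Failed. A hypothetical pod that reproduces the behavior (names are illustrative):

```yaml
# Hypothetical repro of the init-container failure path.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail-example
spec:
  restartPolicy: Never       # no retries: the first init failure is final
  initContainers:
  - name: init-fails
    image: busybox
    command: ["/bin/false"]  # exits non-zero
  containers:
  - name: run-after          # never started; pod phase becomes Failed
    image: busybox
    command: ["/bin/true"]
```

With `restartPolicy: OnFailure` or `Always`, the kubelet would instead restart the failing init container with backoff rather than failing the pod.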
STEP: Destroying namespace "init-container-1579" for this suite.
Apr 21 13:44:25.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:44:25.923: INFO: namespace init-container-1579 deletion completed in 6.129828903s
• [SLOW TEST:12.054 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:44:25.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-010aa462-f452-409f-8685-46e31282259b
STEP: Creating a pod to test consume configMaps
Apr 21 13:44:26.004: INFO: Waiting up to 5m0s for pod "pod-configmaps-e999d676-b0b5-40ae-830e-f1c721f2b695" in namespace "configmap-2877" to be "success or failure"
Apr 21 13:44:26.008: INFO: Pod "pod-configmaps-e999d676-b0b5-40ae-830e-f1c721f2b695": Phase="Pending", Reason="", readiness=false. Elapsed: 3.902869ms
Apr 21 13:44:28.012: INFO: Pod "pod-configmaps-e999d676-b0b5-40ae-830e-f1c721f2b695": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007527912s
Apr 21 13:44:30.015: INFO: Pod "pod-configmaps-e999d676-b0b5-40ae-830e-f1c721f2b695": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011245411s
STEP: Saw pod success
Apr 21 13:44:30.015: INFO: Pod "pod-configmaps-e999d676-b0b5-40ae-830e-f1c721f2b695" satisfied condition "success or failure"
Apr 21 13:44:30.018: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-e999d676-b0b5-40ae-830e-f1c721f2b695 container configmap-volume-test:
STEP: delete the pod
Apr 21 13:44:30.047: INFO: Waiting for pod pod-configmaps-e999d676-b0b5-40ae-830e-f1c721f2b695 to disappear
Apr 21 13:44:30.062: INFO: Pod pod-configmaps-e999d676-b0b5-40ae-830e-f1c721f2b695 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:44:30.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2877" for this suite.
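The "as non-root" variant above differs from the earlier ConfigMap-with-mappings case only in the pod-level security context. A hedged fragment showing that difference (UID and names are illustrative; the ConfigMap is assumed to exist):

```yaml
# Hypothetical non-root variant of the ConfigMap volume pod.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-nonroot-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000          # any non-zero UID satisfies "as non-root"
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: example-config   # assumed to exist with key data-1
      items:
      - key: data-1
        path: path/to/data-1
```

ConfigMap volume files default to world-readable modes, which is why the non-root reader succeeds here without any `fsGroup` arrangement.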
Apr 21 13:44:36.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:44:36.566: INFO: namespace configmap-2877 deletion completed in 6.501127248s
• [SLOW TEST:10.643 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:44:36.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Apr 21 13:44:47.062: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3932 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 21 13:44:47.062: INFO: >>> kubeConfig: /root/.kube/config
I0421 13:44:47.097316 6 log.go:172] (0xc001946580) (0xc001ea5720) Create stream
I0421 13:44:47.097344 6 log.go:172] (0xc001946580) (0xc001ea5720) Stream added, broadcasting: 1
I0421 13:44:47.099044 6 log.go:172] (0xc001946580) Reply frame received for 1
I0421 13:44:47.099120 6 log.go:172] (0xc001946580) (0xc0023448c0) Create stream
I0421 13:44:47.099148 6 log.go:172] (0xc001946580) (0xc0023448c0) Stream added, broadcasting: 3
I0421 13:44:47.099986 6 log.go:172] (0xc001946580) Reply frame received for 3
I0421 13:44:47.100010 6 log.go:172] (0xc001946580) (0xc001f79c20) Create stream
I0421 13:44:47.100016 6 log.go:172] (0xc001946580) (0xc001f79c20) Stream added, broadcasting: 5
I0421 13:44:47.100897 6 log.go:172] (0xc001946580) Reply frame received for 5
I0421 13:44:47.179044 6 log.go:172] (0xc001946580) Data frame received for 3
I0421 13:44:47.179070 6 log.go:172] (0xc0023448c0) (3) Data frame handling
I0421 13:44:47.179115 6 log.go:172] (0xc001946580) Data frame received for 5
I0421 13:44:47.179154 6 log.go:172] (0xc001f79c20) (5) Data frame handling
I0421 13:44:47.179188 6 log.go:172] (0xc0023448c0) (3) Data frame sent
I0421 13:44:47.179202 6 log.go:172] (0xc001946580) Data frame received for 3
I0421 13:44:47.179216 6 log.go:172] (0xc0023448c0) (3) Data frame handling
I0421 13:44:47.180931 6 log.go:172] (0xc001946580) Data frame received for 1
I0421 13:44:47.180953 6 log.go:172] (0xc001ea5720) (1) Data frame handling
I0421 13:44:47.180964 6 log.go:172] (0xc001ea5720) (1) Data frame sent
I0421 13:44:47.180978 6 log.go:172] (0xc001946580) (0xc001ea5720) Stream removed, broadcasting: 1
I0421 13:44:47.181032 6 log.go:172] (0xc001946580) Go away received
I0421 13:44:47.181078 6 log.go:172] (0xc001946580) (0xc001ea5720) Stream removed, broadcasting: 1
I0421 13:44:47.181256 6 log.go:172] (0xc001946580) (0xc0023448c0) Stream removed, broadcasting: 3
I0421 13:44:47.181292 6 log.go:172] (0xc001946580) (0xc001f79c20) Stream removed, broadcasting: 5
Apr 21 13:44:47.181: INFO: Exec stderr: ""
Apr 21 13:44:47.181: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3932 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 21 13:44:47.181: INFO: >>> kubeConfig: /root/.kube/config
I0421 13:44:47.215122 6 log.go:172] (0xc00062b550) (0xc001f79e00) Create stream
I0421 13:44:47.215151 6 log.go:172] (0xc00062b550) (0xc001f79e00) Stream added, broadcasting: 1
I0421 13:44:47.216851 6 log.go:172] (0xc00062b550) Reply frame received for 1
I0421 13:44:47.216920 6 log.go:172] (0xc00062b550) (0xc001f79ea0) Create stream
I0421 13:44:47.216941 6 log.go:172] (0xc00062b550) (0xc001f79ea0) Stream added, broadcasting: 3
I0421 13:44:47.218127 6 log.go:172] (0xc00062b550) Reply frame received for 3
I0421 13:44:47.218176 6 log.go:172] (0xc00062b550) (0xc001f79f40) Create stream
I0421 13:44:47.218192 6 log.go:172] (0xc00062b550) (0xc001f79f40) Stream added, broadcasting: 5
I0421 13:44:47.218870 6 log.go:172] (0xc00062b550) Reply frame received for 5
I0421 13:44:47.276656 6 log.go:172] (0xc00062b550) Data frame received for 5
I0421 13:44:47.276702 6 log.go:172] (0xc001f79f40) (5) Data frame handling
I0421 13:44:47.276740 6 log.go:172] (0xc00062b550) Data frame received for 3
I0421 13:44:47.276780 6 log.go:172] (0xc001f79ea0) (3) Data frame handling
I0421 13:44:47.276807 6 log.go:172] (0xc001f79ea0) (3) Data frame sent
I0421 13:44:47.276820 6 log.go:172] (0xc00062b550) Data frame received for 3
I0421 13:44:47.276829 6 log.go:172] (0xc001f79ea0) (3) Data frame handling
I0421 13:44:47.278588 6 log.go:172] (0xc00062b550) Data frame received for 1
I0421 13:44:47.278616 6 log.go:172] (0xc001f79e00) (1) Data frame handling
I0421 13:44:47.278638 6 log.go:172] (0xc001f79e00) (1) Data frame sent
I0421 13:44:47.278653 6 log.go:172] (0xc00062b550) (0xc001f79e00) Stream removed, broadcasting: 1
I0421 13:44:47.278675 6 log.go:172] (0xc00062b550) Go away received
I0421 13:44:47.278814 6 log.go:172] (0xc00062b550) (0xc001f79e00) Stream removed, broadcasting: 1
I0421 13:44:47.278836 6 log.go:172] (0xc00062b550) (0xc001f79ea0) Stream removed, broadcasting: 3
I0421 13:44:47.278849 6 log.go:172] (0xc00062b550) (0xc001f79f40) Stream removed, broadcasting: 5
Apr 21 13:44:47.278: INFO: Exec stderr: ""
Apr 21 13:44:47.278: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3932 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 21 13:44:47.278: INFO: >>> kubeConfig: /root/.kube/config
I0421 13:44:47.305043 6 log.go:172] (0xc00133f1e0) (0xc002344dc0) Create stream
I0421 13:44:47.305071 6 log.go:172] (0xc00133f1e0) (0xc002344dc0) Stream added, broadcasting: 1
I0421 13:44:47.306975 6 log.go:172] (0xc00133f1e0) Reply frame received for 1
I0421 13:44:47.307016 6 log.go:172] (0xc00133f1e0) (0xc002344e60) Create stream
I0421 13:44:47.307028 6 log.go:172] (0xc00133f1e0) (0xc002344e60) Stream added, broadcasting: 3
I0421 13:44:47.307975 6 log.go:172] (0xc00133f1e0) Reply frame received for 3
I0421 13:44:47.308010 6 log.go:172] (0xc00133f1e0) (0xc001ea5900) Create stream
I0421 13:44:47.308021 6 log.go:172] (0xc00133f1e0) (0xc001ea5900) Stream added, broadcasting: 5
I0421 13:44:47.308688 6 log.go:172] (0xc00133f1e0) Reply frame received for 5
I0421 13:44:47.442912 6 log.go:172] (0xc00133f1e0) Data frame received for 3
I0421 13:44:47.442949 6 log.go:172] (0xc002344e60) (3) Data frame handling
I0421 13:44:47.442959 6 log.go:172] (0xc002344e60) (3) Data frame sent
I0421 13:44:47.442966 6 log.go:172] (0xc00133f1e0) Data frame received for 3
I0421 13:44:47.442984 6 log.go:172] (0xc002344e60) (3) Data frame handling
I0421 13:44:47.443000 6 log.go:172] (0xc00133f1e0) Data frame received for 5
I0421 13:44:47.443029 6 log.go:172] (0xc001ea5900) (5) Data frame handling
I0421 13:44:47.444437 6 log.go:172] (0xc00133f1e0) Data frame received for 1
I0421 13:44:47.444467 6 log.go:172] (0xc002344dc0) (1) Data frame handling
I0421 13:44:47.444489 6 log.go:172] (0xc002344dc0) (1) Data frame sent
I0421 13:44:47.444510 6 log.go:172] (0xc00133f1e0) (0xc002344dc0) Stream removed, broadcasting: 1
I0421 13:44:47.444599 6 log.go:172] (0xc00133f1e0) Go away received
I0421 13:44:47.444653 6 log.go:172] (0xc00133f1e0) (0xc002344dc0) Stream removed, broadcasting: 1
I0421 13:44:47.444675 6 log.go:172] (0xc00133f1e0) (0xc002344e60) Stream removed, broadcasting: 3
I0421 13:44:47.444687 6 log.go:172] (0xc00133f1e0) (0xc001ea5900) Stream removed, broadcasting: 5
Apr 21 13:44:47.444: INFO: Exec stderr: ""
Apr 21 13:44:47.444: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3932 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 21 13:44:47.444: INFO: >>> kubeConfig: /root/.kube/config
I0421 13:44:47.487174 6 log.go:172] (0xc002046420) (0xc002345180) Create stream
I0421 13:44:47.487220 6 log.go:172] (0xc002046420) (0xc002345180) Stream added, broadcasting: 1
I0421 13:44:47.489739 6 log.go:172] (0xc002046420) Reply frame received for 1
I0421 13:44:47.489791 6 log.go:172] (0xc002046420) (0xc001018820) Create stream
I0421 13:44:47.489813 6 log.go:172] (0xc002046420) (0xc001018820) Stream added, broadcasting: 3
I0421 13:44:47.490955 6 log.go:172] (0xc002046420) Reply frame received for 3
I0421 13:44:47.490991 6 log.go:172] (0xc002046420) (0xc00020e140) Create stream
I0421 13:44:47.491003 6 log.go:172] (0xc002046420) (0xc00020e140) Stream added, broadcasting: 5
I0421 13:44:47.491937 6 log.go:172] (0xc002046420) Reply frame received for 5
I0421 13:44:47.542252 6 log.go:172] (0xc002046420) Data frame received for 5
I0421 13:44:47.542296 6 log.go:172] (0xc00020e140) (5) Data frame handling
I0421 13:44:47.542335 6 log.go:172] (0xc002046420) Data frame received for 3
I0421 13:44:47.542372 6 log.go:172] (0xc001018820) (3) Data frame handling
I0421 13:44:47.542391 6 log.go:172] (0xc001018820) (3) Data frame sent
I0421 13:44:47.542405 6 log.go:172] (0xc002046420) Data frame received for 3
I0421 13:44:47.542418 6 log.go:172] (0xc001018820) (3) Data frame handling
I0421 13:44:47.543712 6 log.go:172] (0xc002046420) Data frame received for 1
I0421 13:44:47.543756 6 log.go:172] (0xc002345180) (1) Data frame handling
I0421 13:44:47.543785 6 log.go:172] (0xc002345180) (1) Data frame sent
I0421 13:44:47.543801 6 log.go:172] (0xc002046420) (0xc002345180) Stream removed, broadcasting: 1
I0421 13:44:47.543832 6 log.go:172] (0xc002046420) Go away received
I0421 13:44:47.544009 6 log.go:172] (0xc002046420) (0xc002345180) Stream removed, broadcasting: 1
I0421 13:44:47.544045 6 log.go:172] (0xc002046420) (0xc001018820) Stream removed, broadcasting: 3
I0421 13:44:47.544067 6 log.go:172] (0xc002046420) (0xc00020e140) Stream removed, broadcasting: 5
Apr 21 13:44:47.544: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Apr 21 13:44:47.544: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3932 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 21 13:44:47.544: INFO: >>> kubeConfig: /root/.kube/config
I0421 13:44:47.579020 6 log.go:172] (0xc001f45080) (0xc00020e820) Create stream
I0421 13:44:47.579048 6 log.go:172] (0xc001f45080) (0xc00020e820) Stream added, broadcasting: 1
I0421 13:44:47.581766 6 log.go:172] (0xc001f45080) Reply frame received for 1
I0421 13:44:47.581812 6 log.go:172] (0xc001f45080) (0xc00020e8c0) Create stream
I0421 13:44:47.581824 6 log.go:172] (0xc001f45080) (0xc00020e8c0) Stream added, broadcasting: 3
I0421 13:44:47.582864 6 log.go:172] (0xc001f45080) Reply frame received for 3
I0421 13:44:47.582900 6 log.go:172] (0xc001f45080) (0xc002345220) Create stream
I0421 13:44:47.582913 6 log.go:172] (0xc001f45080) (0xc002345220) Stream added, broadcasting: 5
I0421 13:44:47.583829 6 log.go:172] (0xc001f45080) Reply frame received for 5
I0421 13:44:47.633519 6 log.go:172] (0xc001f45080) Data frame received for 5
I0421 13:44:47.633561 6 log.go:172] (0xc001f45080) Data frame received for 3
I0421 13:44:47.633604 6 log.go:172] (0xc00020e8c0) (3) Data frame handling
I0421 13:44:47.633618 6 log.go:172] (0xc00020e8c0) (3) Data frame sent
I0421 13:44:47.633630 6 log.go:172] (0xc001f45080) Data frame received for 3
I0421 13:44:47.633641 6 log.go:172] (0xc00020e8c0) (3) Data frame handling
I0421 13:44:47.633666 6 log.go:172] (0xc002345220) (5) Data frame handling
I0421 13:44:47.635144 6 log.go:172] (0xc001f45080) Data frame received for 1
I0421 13:44:47.635165 6 log.go:172] (0xc00020e820) (1) Data frame handling
I0421 13:44:47.635179 6 log.go:172] (0xc00020e820) (1) Data frame sent
I0421 13:44:47.635194 6 log.go:172] (0xc001f45080) (0xc00020e820) Stream removed, broadcasting: 1
I0421 13:44:47.635213 6 log.go:172] (0xc001f45080) Go away received
I0421 13:44:47.635433 6 log.go:172] (0xc001f45080) (0xc00020e820) Stream removed, broadcasting: 1
I0421 13:44:47.635461 6 log.go:172] (0xc001f45080) (0xc00020e8c0) Stream removed, broadcasting: 3
I0421 13:44:47.635475 6 log.go:172] (0xc001f45080) (0xc002345220) Stream removed, broadcasting: 5
Apr 21 13:44:47.635: INFO: Exec stderr: ""
Apr 21 13:44:47.635: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3932 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 21 13:44:47.635: INFO: >>> kubeConfig: /root/.kube/config
I0421 13:44:47.668028 6 log.go:172] (0xc001211080) (0xc001efa780) Create stream
I0421 13:44:47.668054 6 log.go:172] (0xc001211080) (0xc001efa780) Stream added, broadcasting: 1
I0421 13:44:47.670235 6 log.go:172] (0xc001211080) Reply frame received for 1
I0421 13:44:47.670261 6 log.go:172] (0xc001211080) (0xc001018960) Create stream
I0421 13:44:47.670268 6 log.go:172] (0xc001211080) (0xc001018960) Stream added, broadcasting: 3
I0421 13:44:47.671142 6 log.go:172] (0xc001211080) Reply frame received for 3
I0421 13:44:47.671180 6 log.go:172] (0xc001211080) (0xc0023452c0) Create stream
I0421 13:44:47.671194 6 log.go:172] (0xc001211080) (0xc0023452c0) Stream added, broadcasting: 5
I0421 13:44:47.671972 6 log.go:172] (0xc001211080) Reply frame received for 5
I0421 13:44:47.731795 6 log.go:172] (0xc001211080) Data frame received for 5
I0421 13:44:47.731842 6 log.go:172] (0xc0023452c0) (5) Data frame handling
I0421 13:44:47.731869 6 log.go:172] (0xc001211080) Data frame received for 3
I0421 13:44:47.731887 6 log.go:172] (0xc001018960) (3) Data frame handling
I0421 13:44:47.731909 6 log.go:172] (0xc001018960) (3) Data frame sent
I0421 13:44:47.731922 6 log.go:172] (0xc001211080) Data frame received for 3
I0421 13:44:47.731932 6 log.go:172] (0xc001018960) (3) Data frame handling
I0421 13:44:47.733056 6 log.go:172] (0xc001211080) Data frame received for 1
I0421 13:44:47.733073 6 log.go:172] (0xc001efa780) (1) Data frame handling
I0421 13:44:47.733082 6 log.go:172] (0xc001efa780) (1) Data frame sent
I0421 13:44:47.733091 6 log.go:172] (0xc001211080) (0xc001efa780) Stream removed, broadcasting: 1
I0421 13:44:47.733257 6 log.go:172] (0xc001211080) Go away received
I0421 13:44:47.733357 6 log.go:172] (0xc001211080) (0xc001efa780) Stream removed, broadcasting: 1
I0421 13:44:47.733387 6 log.go:172] (0xc001211080) (0xc001018960) Stream removed, broadcasting: 3
I0421 13:44:47.733401 6 log.go:172] (0xc001211080) (0xc0023452c0) Stream removed, broadcasting: 5
Apr 21 13:44:47.733: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Apr 21 13:44:47.733: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3932 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 21 13:44:47.733: INFO: >>> kubeConfig: /root/.kube/config
I0421 13:44:47.764697 6 log.go:172] (0xc001211970) (0xc001efaaa0) Create stream
I0421 13:44:47.764740 6 log.go:172] (0xc001211970) (0xc001efaaa0) Stream added, broadcasting: 1
I0421 13:44:47.767709 6 log.go:172] (0xc001211970) Reply frame received for 1
I0421 13:44:47.767746 6 log.go:172] (0xc001211970) (0xc002345360) Create stream
I0421 13:44:47.767759 6 log.go:172] (0xc001211970) (0xc002345360) Stream added, broadcasting: 3
I0421 13:44:47.768687 6 log.go:172] (0xc001211970) Reply frame received for 3
I0421 13:44:47.768719 6 log.go:172] (0xc001211970) (0xc002345400) Create stream
I0421 13:44:47.768731 6 log.go:172] (0xc001211970) (0xc002345400) Stream added, broadcasting: 5
I0421 13:44:47.769811 6 log.go:172] (0xc001211970) Reply frame received for 5
I0421 13:44:47.834772 6 log.go:172] (0xc001211970) Data frame received for 5
I0421 13:44:47.834813 6 log.go:172] (0xc002345400) (5) Data frame handling
I0421 13:44:47.834843 6 log.go:172] (0xc001211970) Data frame received for 3
I0421 13:44:47.834860 6 log.go:172] (0xc002345360) (3) Data frame handling
I0421 13:44:47.834879 6 log.go:172] (0xc002345360) (3) Data frame sent
I0421 13:44:47.834894 6 log.go:172] (0xc001211970) Data frame received for 3
I0421 13:44:47.834908 6 log.go:172] (0xc002345360) (3) Data frame handling
I0421 13:44:47.836514 6 log.go:172] (0xc001211970) Data frame received for 1
I0421 13:44:47.836584 6 log.go:172] (0xc001efaaa0) (1) Data frame handling
I0421 13:44:47.836636 6 log.go:172] (0xc001efaaa0) (1) Data frame sent
I0421 13:44:47.836667 6 log.go:172] (0xc001211970) (0xc001efaaa0) Stream removed, broadcasting: 1
I0421 13:44:47.836704 6 log.go:172] (0xc001211970) Go away received
I0421 13:44:47.836893 6 log.go:172] (0xc001211970) (0xc001efaaa0) Stream removed, broadcasting: 1
I0421 13:44:47.836927 6 log.go:172] (0xc001211970) (0xc002345360) Stream removed, broadcasting: 3
I0421 13:44:47.836953 6 log.go:172] (0xc001211970) (0xc002345400) Stream removed, broadcasting: 5
Apr 21 13:44:47.836: INFO: Exec stderr: ""
Apr 21 13:44:47.836: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3932 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 21 13:44:47.837: INFO: >>> kubeConfig: /root/.kube/config
I0421 13:44:47.866693 6 log.go:172] (0xc0026562c0) (0xc001efadc0) Create stream
I0421 13:44:47.866725 6 log.go:172] (0xc0026562c0) (0xc001efadc0) Stream added, broadcasting: 1
I0421 13:44:47.869904 6 log.go:172] (0xc0026562c0) Reply frame received for 1
I0421 13:44:47.869940 6 log.go:172] (0xc0026562c0) (0xc001018a00) Create stream
I0421 13:44:47.869953 6 log.go:172] (0xc0026562c0) (0xc001018a00) Stream added, broadcasting: 3
I0421 13:44:47.870936 6 log.go:172] (0xc0026562c0) Reply frame received for 3
I0421 13:44:47.870977 6 log.go:172] (0xc0026562c0) (0xc0023454a0) Create stream
I0421 13:44:47.870990 6 log.go:172] (0xc0026562c0) (0xc0023454a0) Stream added, broadcasting: 5
I0421 13:44:47.871893 6 log.go:172] (0xc0026562c0) Reply frame received for 5
I0421 13:44:47.941389 6 log.go:172] (0xc0026562c0) Data frame received for 5
I0421 13:44:47.941417 6 log.go:172] (0xc0023454a0) (5) Data frame handling
I0421 13:44:47.941450 6 log.go:172] (0xc0026562c0) Data frame received for 3
I0421 13:44:47.941469 6 log.go:172] (0xc001018a00) (3) Data frame handling
I0421 13:44:47.941480 6 log.go:172] (0xc001018a00) (3) Data frame sent
I0421 13:44:47.941491 6 log.go:172] (0xc0026562c0) Data frame received for 3
I0421 13:44:47.941498 6 log.go:172] (0xc001018a00) (3) Data frame handling
I0421 13:44:47.942973 6 log.go:172] (0xc0026562c0) Data frame received for 1
I0421 13:44:47.943018 6 log.go:172] (0xc001efadc0) (1) Data frame handling
I0421 13:44:47.943045 6 log.go:172] (0xc001efadc0) (1) Data frame sent
I0421 13:44:47.943057 6 log.go:172] (0xc0026562c0) (0xc001efadc0) Stream removed, broadcasting: 1
I0421 13:44:47.943072 6 log.go:172] (0xc0026562c0) Go away received
I0421 13:44:47.943228 6 log.go:172] (0xc0026562c0) (0xc001efadc0) Stream removed, broadcasting: 1
I0421 13:44:47.943255 6 log.go:172] (0xc0026562c0) (0xc001018a00) Stream removed, broadcasting: 3
I0421 13:44:47.943282 6 log.go:172] (0xc0026562c0) (0xc0023454a0) Stream removed, broadcasting: 5
Apr 21 13:44:47.943: INFO: Exec stderr: ""
Apr 21 13:44:47.943: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3932 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 21 13:44:47.943: INFO: >>> kubeConfig: /root/.kube/config
I0421 13:44:47.965390 6 log.go:172] (0xc002912210) (0xc00020ed20) Create stream
I0421 13:44:47.965432 6 log.go:172] (0xc002912210) (0xc00020ed20) Stream added, broadcasting: 1
I0421 13:44:47.967662 6 log.go:172] (0xc002912210) Reply frame received for 1
I0421 13:44:47.967700 6 log.go:172] (0xc002912210) (0xc00020edc0) Create stream
I0421 13:44:47.967713 6 log.go:172] (0xc002912210) (0xc00020edc0) Stream added, broadcasting: 3
I0421 13:44:47.968595 6 log.go:172] (0xc002912210) Reply frame received for 3
I0421 13:44:47.968635 6 log.go:172] (0xc002912210) (0xc001018b40) Create stream
I0421 13:44:47.968649 6 log.go:172] (0xc002912210) (0xc001018b40) Stream added, broadcasting: 5
I0421 13:44:47.969566 6 log.go:172] (0xc002912210) Reply frame received for 5
I0421 13:44:48.036487 6 log.go:172] (0xc002912210) Data frame received for 5
I0421 13:44:48.036528 6 log.go:172] (0xc001018b40) (5) Data frame handling
I0421 13:44:48.036551 6 log.go:172] (0xc002912210) Data frame received for 3
I0421 13:44:48.036563 6 log.go:172] (0xc00020edc0) (3) Data frame handling
I0421 13:44:48.036582 6 log.go:172] (0xc00020edc0) (3) Data frame sent
I0421 13:44:48.036594 6 log.go:172] (0xc002912210) Data frame received for 3
I0421 13:44:48.036604 6 log.go:172] (0xc00020edc0) (3) Data frame handling
I0421 13:44:48.038466 6 log.go:172] (0xc002912210) Data frame received for 1
I0421 13:44:48.038485 6 log.go:172] (0xc00020ed20) (1) Data frame handling
I0421 13:44:48.038495 6 log.go:172] (0xc00020ed20) (1) Data frame sent
I0421 13:44:48.038505 6 log.go:172] (0xc002912210) (0xc00020ed20) Stream removed, broadcasting: 1
I0421 13:44:48.038522 6 log.go:172] (0xc002912210) Go away received
I0421 13:44:48.038634 6 log.go:172] (0xc002912210) (0xc00020ed20) Stream removed, broadcasting: 1
I0421 13:44:48.038668 6 log.go:172] (0xc002912210) (0xc00020edc0) Stream removed, broadcasting: 3
I0421 13:44:48.038686 6 log.go:172] (0xc002912210) (0xc001018b40) Stream removed, broadcasting: 5
Apr 21 13:44:48.038: INFO: Exec stderr: ""
Apr 21 13:44:48.038: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3932 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 21 13:44:48.038: INFO: >>> kubeConfig: /root/.kube/config
I0421 13:44:48.067361 6 log.go:172] (0xc002cd6370) (0xc001019360) Create stream
I0421 13:44:48.067391 6 log.go:172] (0xc002cd6370) (0xc001019360) Stream added, broadcasting: 1
I0421 13:44:48.069865 6 log.go:172] (0xc002cd6370) Reply frame received for 1
I0421 13:44:48.069897 6 log.go:172] (0xc002cd6370) (0xc001ea5cc0) Create stream
I0421 13:44:48.069907 6 log.go:172] (0xc002cd6370) (0xc001ea5cc0) Stream added, broadcasting: 3
I0421 13:44:48.070781 6 log.go:172] (0xc002cd6370) Reply frame received for 3
I0421 13:44:48.070806 6 log.go:172] (0xc002cd6370) (0xc001efae60) Create stream
I0421 13:44:48.070817 6 log.go:172] (0xc002cd6370) (0xc001efae60) Stream added, broadcasting: 5
I0421 13:44:48.071713 6 log.go:172] (0xc002cd6370) Reply frame received for 5
I0421 13:44:48.131908 6 log.go:172] (0xc002cd6370) Data frame received for 5
I0421 13:44:48.131956 6 log.go:172] (0xc001efae60) (5) Data frame handling
I0421 13:44:48.131987 6 log.go:172] (0xc002cd6370) Data frame received for 3
I0421 13:44:48.132034 6 log.go:172] (0xc001ea5cc0) (3) Data frame handling
I0421 13:44:48.132078 6 log.go:172] (0xc001ea5cc0) (3) Data frame sent
I0421 13:44:48.132114 6 log.go:172] (0xc002cd6370) Data frame received for 3
I0421 13:44:48.132140 6 log.go:172] (0xc001ea5cc0) (3) Data frame handling
I0421 13:44:48.133607 6 log.go:172] (0xc002cd6370) Data frame received for 1
I0421 13:44:48.133631 6 log.go:172] (0xc001019360) (1) Data frame handling
I0421 13:44:48.133654 6 log.go:172] (0xc001019360) (1) Data frame sent
I0421 13:44:48.133685 6 log.go:172] (0xc002cd6370) (0xc001019360) Stream removed, broadcasting: 1
I0421 13:44:48.133827 6 log.go:172] (0xc002cd6370) (0xc001019360) Stream removed, broadcasting: 1
I0421 13:44:48.133891 6 log.go:172] (0xc002cd6370) (0xc001ea5cc0) Stream removed, broadcasting: 3
I0421 13:44:48.134047 6 log.go:172] (0xc002cd6370) Go away received
I0421 13:44:48.134141 6 log.go:172] (0xc002cd6370) (0xc001efae60) Stream removed, broadcasting: 5
Apr 21 13:44:48.134: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:44:48.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-3932" for this suite.
Apr 21 13:45:28.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:45:28.250: INFO: namespace e2e-kubelet-etc-hosts-3932 deletion completed in 40.111702863s • [SLOW TEST:51.682 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:45:28.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 21 13:45:36.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 21 13:45:36.362: INFO: Pod pod-with-poststart-exec-hook still exists Apr 21 13:45:38.362: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 21 13:45:38.366: INFO: Pod pod-with-poststart-exec-hook still exists Apr 21 13:45:40.362: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 21 13:45:40.366: INFO: Pod pod-with-poststart-exec-hook still exists Apr 21 13:45:42.362: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 21 13:45:42.371: INFO: Pod pod-with-poststart-exec-hook still exists Apr 21 13:45:44.362: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 21 13:45:44.366: INFO: Pod pod-with-poststart-exec-hook still exists Apr 21 13:45:46.362: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 21 13:45:46.366: INFO: Pod pod-with-poststart-exec-hook still exists Apr 21 13:45:48.362: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 21 13:45:48.367: INFO: Pod pod-with-poststart-exec-hook still exists Apr 21 13:45:50.362: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 21 13:45:50.366: INFO: Pod pod-with-poststart-exec-hook still exists Apr 21 13:45:52.362: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 21 13:45:52.485: INFO: Pod pod-with-poststart-exec-hook still exists Apr 21 13:45:54.362: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 21 13:45:54.366: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:45:54.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5548" for this suite. Apr 21 13:46:16.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:46:16.453: INFO: namespace container-lifecycle-hook-5548 deletion completed in 22.081942812s • [SLOW TEST:48.203 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:46:16.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-e3407936-82c4-4eea-8cd0-72234393b1a9 STEP: Creating a pod to test consume configMaps Apr 21 13:46:16.520: INFO: Waiting up to 
5m0s for pod "pod-projected-configmaps-eca8be1e-9b5a-451f-aa95-7a08f46f59a1" in namespace "projected-599" to be "success or failure" Apr 21 13:46:16.536: INFO: Pod "pod-projected-configmaps-eca8be1e-9b5a-451f-aa95-7a08f46f59a1": Phase="Pending", Reason="", readiness=false. Elapsed: 16.043466ms Apr 21 13:46:18.651: INFO: Pod "pod-projected-configmaps-eca8be1e-9b5a-451f-aa95-7a08f46f59a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130860931s Apr 21 13:46:20.910: INFO: Pod "pod-projected-configmaps-eca8be1e-9b5a-451f-aa95-7a08f46f59a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.390118375s Apr 21 13:46:22.914: INFO: Pod "pod-projected-configmaps-eca8be1e-9b5a-451f-aa95-7a08f46f59a1": Phase="Running", Reason="", readiness=true. Elapsed: 6.394345001s Apr 21 13:46:24.921: INFO: Pod "pod-projected-configmaps-eca8be1e-9b5a-451f-aa95-7a08f46f59a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.401235316s STEP: Saw pod success Apr 21 13:46:24.921: INFO: Pod "pod-projected-configmaps-eca8be1e-9b5a-451f-aa95-7a08f46f59a1" satisfied condition "success or failure" Apr 21 13:46:24.923: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-eca8be1e-9b5a-451f-aa95-7a08f46f59a1 container projected-configmap-volume-test: STEP: delete the pod Apr 21 13:46:25.001: INFO: Waiting for pod pod-projected-configmaps-eca8be1e-9b5a-451f-aa95-7a08f46f59a1 to disappear Apr 21 13:46:25.045: INFO: Pod pod-projected-configmaps-eca8be1e-9b5a-451f-aa95-7a08f46f59a1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:46:25.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-599" for this suite. 
Apr 21 13:46:31.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:46:31.201: INFO: namespace projected-599 deletion completed in 6.153063501s • [SLOW TEST:14.748 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:46:31.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Apr 21 13:46:31.268: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 21 13:46:31.276: INFO: Waiting for terminating namespaces to be deleted... 
Apr 21 13:46:31.293: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Apr 21 13:46:31.297: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 21 13:46:31.297: INFO: Container kube-proxy ready: true, restart count 0 Apr 21 13:46:31.298: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 21 13:46:31.298: INFO: Container kindnet-cni ready: true, restart count 0 Apr 21 13:46:31.298: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Apr 21 13:46:31.303: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Apr 21 13:46:31.303: INFO: Container coredns ready: true, restart count 0 Apr 21 13:46:31.303: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Apr 21 13:46:31.303: INFO: Container coredns ready: true, restart count 0 Apr 21 13:46:31.303: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Apr 21 13:46:31.303: INFO: Container kube-proxy ready: true, restart count 0 Apr 21 13:46:31.303: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Apr 21 13:46:31.303: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 Apr 21 13:46:31.366: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 Apr 21 13:46:31.367: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2 Apr 21 13:46:31.367: INFO: Pod 
kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker Apr 21 13:46:31.367: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 Apr 21 13:46:31.367: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker Apr 21 13:46:31.367: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-31c6f24a-cd1e-42da-92d2-27fcbfc0cc76.1607d9bb1f6c7ebb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8519/filler-pod-31c6f24a-cd1e-42da-92d2-27fcbfc0cc76 to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-31c6f24a-cd1e-42da-92d2-27fcbfc0cc76.1607d9bb6869d122], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-31c6f24a-cd1e-42da-92d2-27fcbfc0cc76.1607d9bbd04e0466], Reason = [Created], Message = [Created container filler-pod-31c6f24a-cd1e-42da-92d2-27fcbfc0cc76] STEP: Considering event: Type = [Normal], Name = [filler-pod-31c6f24a-cd1e-42da-92d2-27fcbfc0cc76.1607d9bbe84bc4ce], Reason = [Started], Message = [Started container filler-pod-31c6f24a-cd1e-42da-92d2-27fcbfc0cc76] STEP: Considering event: Type = [Normal], Name = [filler-pod-3886d4af-887d-4973-81ee-d506182b27cd.1607d9bb21684fd3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8519/filler-pod-3886d4af-887d-4973-81ee-d506182b27cd to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-3886d4af-887d-4973-81ee-d506182b27cd.1607d9bb9f1869f7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-3886d4af-887d-4973-81ee-d506182b27cd.1607d9bc031d5461], Reason = [Created], Message = [Created 
container filler-pod-3886d4af-887d-4973-81ee-d506182b27cd] STEP: Considering event: Type = [Normal], Name = [filler-pod-3886d4af-887d-4973-81ee-d506182b27cd.1607d9bc122abe5d], Reason = [Started], Message = [Started container filler-pod-3886d4af-887d-4973-81ee-d506182b27cd] STEP: Considering event: Type = [Warning], Name = [additional-pod.1607d9bc8907ec51], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:46:38.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8519" for this suite. 
Apr 21 13:46:46.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:46:46.638: INFO: namespace sched-pred-8519 deletion completed in 8.116168526s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:15.436 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:46:46.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 21 13:47:10.761: INFO: Container started at 2020-04-21 13:46:49 +0000 UTC, pod became ready at 2020-04-21 13:47:09 +0000 UTC [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:47:10.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7490" for this suite. Apr 21 13:47:32.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:47:32.881: INFO: namespace container-probe-7490 deletion completed in 22.116250953s • [SLOW TEST:46.243 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:47:32.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 21 13:47:32.932: INFO: Creating deployment "test-recreate-deployment" Apr 21 13:47:32.964: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 21 13:47:32.974: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 
21 13:47:35.037: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 21 13:47:35.040: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723073653, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723073653, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723073653, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723073652, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 21 13:47:37.045: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 21 13:47:37.051: INFO: Updating deployment test-recreate-deployment Apr 21 13:47:37.051: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 21 13:47:37.592: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-4954,SelfLink:/apis/apps/v1/namespaces/deployment-4954/deployments/test-recreate-deployment,UID:75875284-4c73-461d-9767-ba15d885354c,ResourceVersion:6645913,Generation:2,CreationTimestamp:2020-04-21 13:47:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-04-21 13:47:37 +0000 UTC 2020-04-21 13:47:37 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-04-21 13:47:37 +0000 UTC 2020-04-21 13:47:32 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Apr 21 13:47:37.604: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-4954,SelfLink:/apis/apps/v1/namespaces/deployment-4954/replicasets/test-recreate-deployment-5c8c9cc69d,UID:e82c62e4-d9e1-48fd-aefb-855f93f78819,ResourceVersion:6645912,Generation:1,CreationTimestamp:2020-04-21 13:47:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 75875284-4c73-461d-9767-ba15d885354c 0xc0026b90f7 0xc0026b90f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 21 13:47:37.604: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 21 13:47:37.604: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-4954,SelfLink:/apis/apps/v1/namespaces/deployment-4954/replicasets/test-recreate-deployment-6df85df6b9,UID:2cff0b30-33f9-401d-855e-c26dba5ccfd1,ResourceVersion:6645902,Generation:2,CreationTimestamp:2020-04-21 13:47:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 75875284-4c73-461d-9767-ba15d885354c 0xc0026b91c7 0xc0026b91c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 21 13:47:37.615: INFO: Pod "test-recreate-deployment-5c8c9cc69d-kg6sx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-kg6sx,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-4954,SelfLink:/api/v1/namespaces/deployment-4954/pods/test-recreate-deployment-5c8c9cc69d-kg6sx,UID:ce8c3a14-b585-4c38-9b87-18aa05742a2e,ResourceVersion:6645915,Generation:0,CreationTimestamp:2020-04-21 13:47:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d e82c62e4-d9e1-48fd-aefb-855f93f78819 0xc002e662c7 0xc002e662c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ss8kr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ss8kr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ss8kr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e66340} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e66360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:47:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:47:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:47:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:47:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-21 13:47:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:47:37.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4954" for this suite. 
Apr 21 13:47:43.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:47:43.753: INFO: namespace deployment-4954 deletion completed in 6.134298667s • [SLOW TEST:10.871 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:47:43.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 21 13:47:43.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-7381' Apr 21 13:47:46.133: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a 
future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 21 13:47:46.133: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Apr 21 13:47:50.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-7381' Apr 21 13:47:50.291: INFO: stderr: "" Apr 21 13:47:50.291: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:47:50.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7381" for this suite. 
Apr 21 13:48:12.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:48:12.396: INFO: namespace kubectl-7381 deletion completed in 22.102399972s • [SLOW TEST:28.643 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:48:12.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-8267 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in 
namespace statefulset-8267 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8267 Apr 21 13:48:12.515: INFO: Found 0 stateful pods, waiting for 1 Apr 21 13:48:22.528: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 21 13:48:22.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8267 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 21 13:48:22.794: INFO: stderr: "I0421 13:48:22.658785 1686 log.go:172] (0xc000966370) (0xc000a1c640) Create stream\nI0421 13:48:22.658849 1686 log.go:172] (0xc000966370) (0xc000a1c640) Stream added, broadcasting: 1\nI0421 13:48:22.661956 1686 log.go:172] (0xc000966370) Reply frame received for 1\nI0421 13:48:22.662022 1686 log.go:172] (0xc000966370) (0xc000692000) Create stream\nI0421 13:48:22.662066 1686 log.go:172] (0xc000966370) (0xc000692000) Stream added, broadcasting: 3\nI0421 13:48:22.663658 1686 log.go:172] (0xc000966370) Reply frame received for 3\nI0421 13:48:22.663717 1686 log.go:172] (0xc000966370) (0xc0004221e0) Create stream\nI0421 13:48:22.663735 1686 log.go:172] (0xc000966370) (0xc0004221e0) Stream added, broadcasting: 5\nI0421 13:48:22.664711 1686 log.go:172] (0xc000966370) Reply frame received for 5\nI0421 13:48:22.761973 1686 log.go:172] (0xc000966370) Data frame received for 5\nI0421 13:48:22.762001 1686 log.go:172] (0xc0004221e0) (5) Data frame handling\nI0421 13:48:22.762019 1686 log.go:172] (0xc0004221e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0421 13:48:22.785469 1686 log.go:172] (0xc000966370) Data frame received for 3\nI0421 13:48:22.785511 1686 log.go:172] (0xc000692000) (3) Data frame handling\nI0421 13:48:22.785538 1686 log.go:172] (0xc000692000) (3) Data frame sent\nI0421 13:48:22.785778 1686 log.go:172] (0xc000966370) Data frame 
received for 3\nI0421 13:48:22.785824 1686 log.go:172] (0xc000966370) Data frame received for 5\nI0421 13:48:22.785857 1686 log.go:172] (0xc0004221e0) (5) Data frame handling\nI0421 13:48:22.785891 1686 log.go:172] (0xc000692000) (3) Data frame handling\nI0421 13:48:22.787817 1686 log.go:172] (0xc000966370) Data frame received for 1\nI0421 13:48:22.787832 1686 log.go:172] (0xc000a1c640) (1) Data frame handling\nI0421 13:48:22.787839 1686 log.go:172] (0xc000a1c640) (1) Data frame sent\nI0421 13:48:22.787847 1686 log.go:172] (0xc000966370) (0xc000a1c640) Stream removed, broadcasting: 1\nI0421 13:48:22.788004 1686 log.go:172] (0xc000966370) Go away received\nI0421 13:48:22.788216 1686 log.go:172] (0xc000966370) (0xc000a1c640) Stream removed, broadcasting: 1\nI0421 13:48:22.788241 1686 log.go:172] (0xc000966370) (0xc000692000) Stream removed, broadcasting: 3\nI0421 13:48:22.788253 1686 log.go:172] (0xc000966370) (0xc0004221e0) Stream removed, broadcasting: 5\n" Apr 21 13:48:22.794: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 21 13:48:22.794: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 21 13:48:22.798: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 21 13:48:32.810: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 21 13:48:32.810: INFO: Waiting for statefulset status.replicas updated to 0 Apr 21 13:48:32.826: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999255s Apr 21 13:48:33.832: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.993467908s Apr 21 13:48:34.837: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.987603363s Apr 21 13:48:35.840: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.982381573s Apr 21 13:48:36.845: INFO: Verifying statefulset ss doesn't scale past 1 for 
another 5.979268131s Apr 21 13:48:37.856: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.974665948s Apr 21 13:48:38.861: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.963548241s Apr 21 13:48:39.869: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.958563035s Apr 21 13:48:40.874: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.950060793s Apr 21 13:48:41.878: INFO: Verifying statefulset ss doesn't scale past 1 for another 945.407298ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8267 Apr 21 13:48:42.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8267 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 21 13:48:43.098: INFO: stderr: "I0421 13:48:43.007287 1707 log.go:172] (0xc000a98420) (0xc00068c820) Create stream\nI0421 13:48:43.007367 1707 log.go:172] (0xc000a98420) (0xc00068c820) Stream added, broadcasting: 1\nI0421 13:48:43.010753 1707 log.go:172] (0xc000a98420) Reply frame received for 1\nI0421 13:48:43.010787 1707 log.go:172] (0xc000a98420) (0xc0006361e0) Create stream\nI0421 13:48:43.010799 1707 log.go:172] (0xc000a98420) (0xc0006361e0) Stream added, broadcasting: 3\nI0421 13:48:43.011629 1707 log.go:172] (0xc000a98420) Reply frame received for 3\nI0421 13:48:43.011674 1707 log.go:172] (0xc000a98420) (0xc00068c000) Create stream\nI0421 13:48:43.011690 1707 log.go:172] (0xc000a98420) (0xc00068c000) Stream added, broadcasting: 5\nI0421 13:48:43.012520 1707 log.go:172] (0xc000a98420) Reply frame received for 5\nI0421 13:48:43.089959 1707 log.go:172] (0xc000a98420) Data frame received for 3\nI0421 13:48:43.089993 1707 log.go:172] (0xc0006361e0) (3) Data frame handling\nI0421 13:48:43.090013 1707 log.go:172] (0xc0006361e0) (3) Data frame sent\nI0421 13:48:43.090025 1707 log.go:172] (0xc000a98420) Data frame received for 3\nI0421 
13:48:43.090048 1707 log.go:172] (0xc0006361e0) (3) Data frame handling\nI0421 13:48:43.090197 1707 log.go:172] (0xc000a98420) Data frame received for 5\nI0421 13:48:43.090224 1707 log.go:172] (0xc00068c000) (5) Data frame handling\nI0421 13:48:43.090275 1707 log.go:172] (0xc00068c000) (5) Data frame sent\nI0421 13:48:43.090302 1707 log.go:172] (0xc000a98420) Data frame received for 5\nI0421 13:48:43.090320 1707 log.go:172] (0xc00068c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0421 13:48:43.092046 1707 log.go:172] (0xc000a98420) Data frame received for 1\nI0421 13:48:43.092061 1707 log.go:172] (0xc00068c820) (1) Data frame handling\nI0421 13:48:43.092069 1707 log.go:172] (0xc00068c820) (1) Data frame sent\nI0421 13:48:43.092078 1707 log.go:172] (0xc000a98420) (0xc00068c820) Stream removed, broadcasting: 1\nI0421 13:48:43.092087 1707 log.go:172] (0xc000a98420) Go away received\nI0421 13:48:43.092493 1707 log.go:172] (0xc000a98420) (0xc00068c820) Stream removed, broadcasting: 1\nI0421 13:48:43.092521 1707 log.go:172] (0xc000a98420) (0xc0006361e0) Stream removed, broadcasting: 3\nI0421 13:48:43.092537 1707 log.go:172] (0xc000a98420) (0xc00068c000) Stream removed, broadcasting: 5\n" Apr 21 13:48:43.098: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 21 13:48:43.098: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 21 13:48:43.102: INFO: Found 1 stateful pods, waiting for 3 Apr 21 13:48:53.106: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 21 13:48:53.106: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 21 13:48:53.106: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 21 
13:48:53.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8267 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 21 13:48:53.348: INFO: stderr: "I0421 13:48:53.251678 1730 log.go:172] (0xc000a8c160) (0xc0008de1e0) Create stream\nI0421 13:48:53.251761 1730 log.go:172] (0xc000a8c160) (0xc0008de1e0) Stream added, broadcasting: 1\nI0421 13:48:53.254339 1730 log.go:172] (0xc000a8c160) Reply frame received for 1\nI0421 13:48:53.254369 1730 log.go:172] (0xc000a8c160) (0xc0005a60a0) Create stream\nI0421 13:48:53.254378 1730 log.go:172] (0xc000a8c160) (0xc0005a60a0) Stream added, broadcasting: 3\nI0421 13:48:53.255437 1730 log.go:172] (0xc000a8c160) Reply frame received for 3\nI0421 13:48:53.255475 1730 log.go:172] (0xc000a8c160) (0xc0008de280) Create stream\nI0421 13:48:53.255491 1730 log.go:172] (0xc000a8c160) (0xc0008de280) Stream added, broadcasting: 5\nI0421 13:48:53.256407 1730 log.go:172] (0xc000a8c160) Reply frame received for 5\nI0421 13:48:53.340987 1730 log.go:172] (0xc000a8c160) Data frame received for 5\nI0421 13:48:53.341017 1730 log.go:172] (0xc0008de280) (5) Data frame handling\nI0421 13:48:53.341028 1730 log.go:172] (0xc0008de280) (5) Data frame sent\nI0421 13:48:53.341033 1730 log.go:172] (0xc000a8c160) Data frame received for 5\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0421 13:48:53.341057 1730 log.go:172] (0xc000a8c160) Data frame received for 3\nI0421 13:48:53.341091 1730 log.go:172] (0xc0005a60a0) (3) Data frame handling\nI0421 13:48:53.341274 1730 log.go:172] (0xc0005a60a0) (3) Data frame sent\nI0421 13:48:53.341305 1730 log.go:172] (0xc000a8c160) Data frame received for 3\nI0421 13:48:53.341327 1730 log.go:172] (0xc0005a60a0) (3) Data frame handling\nI0421 13:48:53.341391 1730 log.go:172] (0xc0008de280) (5) Data frame handling\nI0421 13:48:53.342931 1730 log.go:172] (0xc000a8c160) Data frame received for 1\nI0421 13:48:53.342963 1730 log.go:172] 
(0xc0008de1e0) (1) Data frame handling\nI0421 13:48:53.342991 1730 log.go:172] (0xc0008de1e0) (1) Data frame sent\nI0421 13:48:53.343013 1730 log.go:172] (0xc000a8c160) (0xc0008de1e0) Stream removed, broadcasting: 1\nI0421 13:48:53.343035 1730 log.go:172] (0xc000a8c160) Go away received\nI0421 13:48:53.343441 1730 log.go:172] (0xc000a8c160) (0xc0008de1e0) Stream removed, broadcasting: 1\nI0421 13:48:53.343465 1730 log.go:172] (0xc000a8c160) (0xc0005a60a0) Stream removed, broadcasting: 3\nI0421 13:48:53.343474 1730 log.go:172] (0xc000a8c160) (0xc0008de280) Stream removed, broadcasting: 5\n" Apr 21 13:48:53.348: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 21 13:48:53.348: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 21 13:48:53.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8267 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 21 13:48:53.583: INFO: stderr: "I0421 13:48:53.475391 1750 log.go:172] (0xc000a80420) (0xc0005ae6e0) Create stream\nI0421 13:48:53.475455 1750 log.go:172] (0xc000a80420) (0xc0005ae6e0) Stream added, broadcasting: 1\nI0421 13:48:53.479611 1750 log.go:172] (0xc000a80420) Reply frame received for 1\nI0421 13:48:53.479660 1750 log.go:172] (0xc000a80420) (0xc00059a000) Create stream\nI0421 13:48:53.479678 1750 log.go:172] (0xc000a80420) (0xc00059a000) Stream added, broadcasting: 3\nI0421 13:48:53.480653 1750 log.go:172] (0xc000a80420) Reply frame received for 3\nI0421 13:48:53.480703 1750 log.go:172] (0xc000a80420) (0xc0005ae000) Create stream\nI0421 13:48:53.480721 1750 log.go:172] (0xc000a80420) (0xc0005ae000) Stream added, broadcasting: 5\nI0421 13:48:53.481849 1750 log.go:172] (0xc000a80420) Reply frame received for 5\nI0421 13:48:53.552524 1750 log.go:172] (0xc000a80420) Data frame received for 5\nI0421 13:48:53.552560 1750 
log.go:172] (0xc0005ae000) (5) Data frame handling\nI0421 13:48:53.552588 1750 log.go:172] (0xc0005ae000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0421 13:48:53.576683 1750 log.go:172] (0xc000a80420) Data frame received for 5\nI0421 13:48:53.576725 1750 log.go:172] (0xc0005ae000) (5) Data frame handling\nI0421 13:48:53.576755 1750 log.go:172] (0xc000a80420) Data frame received for 3\nI0421 13:48:53.576767 1750 log.go:172] (0xc00059a000) (3) Data frame handling\nI0421 13:48:53.576779 1750 log.go:172] (0xc00059a000) (3) Data frame sent\nI0421 13:48:53.576791 1750 log.go:172] (0xc000a80420) Data frame received for 3\nI0421 13:48:53.576803 1750 log.go:172] (0xc00059a000) (3) Data frame handling\nI0421 13:48:53.579123 1750 log.go:172] (0xc000a80420) Data frame received for 1\nI0421 13:48:53.579163 1750 log.go:172] (0xc0005ae6e0) (1) Data frame handling\nI0421 13:48:53.579190 1750 log.go:172] (0xc0005ae6e0) (1) Data frame sent\nI0421 13:48:53.579214 1750 log.go:172] (0xc000a80420) (0xc0005ae6e0) Stream removed, broadcasting: 1\nI0421 13:48:53.579246 1750 log.go:172] (0xc000a80420) Go away received\nI0421 13:48:53.579474 1750 log.go:172] (0xc000a80420) (0xc0005ae6e0) Stream removed, broadcasting: 1\nI0421 13:48:53.579487 1750 log.go:172] (0xc000a80420) (0xc00059a000) Stream removed, broadcasting: 3\nI0421 13:48:53.579493 1750 log.go:172] (0xc000a80420) (0xc0005ae000) Stream removed, broadcasting: 5\n" Apr 21 13:48:53.583: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 21 13:48:53.583: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 21 13:48:53.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8267 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 21 13:48:53.834: INFO: stderr: "I0421 13:48:53.702525 1770 log.go:172] (0xc000970420) 
(0xc000260820) Create stream\nI0421 13:48:53.702577 1770 log.go:172] (0xc000970420) (0xc000260820) Stream added, broadcasting: 1\nI0421 13:48:53.707178 1770 log.go:172] (0xc000970420) Reply frame received for 1\nI0421 13:48:53.707224 1770 log.go:172] (0xc000970420) (0xc000260000) Create stream\nI0421 13:48:53.707240 1770 log.go:172] (0xc000970420) (0xc000260000) Stream added, broadcasting: 3\nI0421 13:48:53.708162 1770 log.go:172] (0xc000970420) Reply frame received for 3\nI0421 13:48:53.708204 1770 log.go:172] (0xc000970420) (0xc0006121e0) Create stream\nI0421 13:48:53.708217 1770 log.go:172] (0xc000970420) (0xc0006121e0) Stream added, broadcasting: 5\nI0421 13:48:53.709323 1770 log.go:172] (0xc000970420) Reply frame received for 5\nI0421 13:48:53.781971 1770 log.go:172] (0xc000970420) Data frame received for 5\nI0421 13:48:53.782001 1770 log.go:172] (0xc0006121e0) (5) Data frame handling\nI0421 13:48:53.782017 1770 log.go:172] (0xc0006121e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0421 13:48:53.826208 1770 log.go:172] (0xc000970420) Data frame received for 3\nI0421 13:48:53.826242 1770 log.go:172] (0xc000260000) (3) Data frame handling\nI0421 13:48:53.826263 1770 log.go:172] (0xc000260000) (3) Data frame sent\nI0421 13:48:53.826601 1770 log.go:172] (0xc000970420) Data frame received for 3\nI0421 13:48:53.826634 1770 log.go:172] (0xc000260000) (3) Data frame handling\nI0421 13:48:53.826795 1770 log.go:172] (0xc000970420) Data frame received for 5\nI0421 13:48:53.826824 1770 log.go:172] (0xc0006121e0) (5) Data frame handling\nI0421 13:48:53.828621 1770 log.go:172] (0xc000970420) Data frame received for 1\nI0421 13:48:53.828646 1770 log.go:172] (0xc000260820) (1) Data frame handling\nI0421 13:48:53.828658 1770 log.go:172] (0xc000260820) (1) Data frame sent\nI0421 13:48:53.828699 1770 log.go:172] (0xc000970420) (0xc000260820) Stream removed, broadcasting: 1\nI0421 13:48:53.828727 1770 log.go:172] (0xc000970420) Go away received\nI0421 
13:48:53.829048 1770 log.go:172] (0xc000970420) (0xc000260820) Stream removed, broadcasting: 1\nI0421 13:48:53.829075 1770 log.go:172] (0xc000970420) (0xc000260000) Stream removed, broadcasting: 3\nI0421 13:48:53.829088 1770 log.go:172] (0xc000970420) (0xc0006121e0) Stream removed, broadcasting: 5\n" Apr 21 13:48:53.834: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 21 13:48:53.834: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 21 13:48:53.834: INFO: Waiting for statefulset status.replicas updated to 0 Apr 21 13:48:53.837: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 21 13:49:03.847: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 21 13:49:03.847: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 21 13:49:03.847: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 21 13:49:03.862: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999947s Apr 21 13:49:04.867: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991328529s Apr 21 13:49:05.872: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986640358s Apr 21 13:49:06.878: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.981131008s Apr 21 13:49:07.883: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.97551093s Apr 21 13:49:08.889: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.969985058s Apr 21 13:49:09.894: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.964371989s Apr 21 13:49:10.900: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.958921754s Apr 21 13:49:11.906: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.953469118s Apr 21 13:49:12.911: INFO: Verifying statefulset ss 
doesn't scale past 3 for another 947.806242ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-8267 Apr 21 13:49:13.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8267 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 21 13:49:14.155: INFO: stderr: "I0421 13:49:14.062666 1789 log.go:172] (0xc000702a50) (0xc000504640) Create stream\nI0421 13:49:14.062720 1789 log.go:172] (0xc000702a50) (0xc000504640) Stream added, broadcasting: 1\nI0421 13:49:14.065601 1789 log.go:172] (0xc000702a50) Reply frame received for 1\nI0421 13:49:14.065656 1789 log.go:172] (0xc000702a50) (0xc0005ea3c0) Create stream\nI0421 13:49:14.065669 1789 log.go:172] (0xc000702a50) (0xc0005ea3c0) Stream added, broadcasting: 3\nI0421 13:49:14.066827 1789 log.go:172] (0xc000702a50) Reply frame received for 3\nI0421 13:49:14.066862 1789 log.go:172] (0xc000702a50) (0xc0008ea000) Create stream\nI0421 13:49:14.066889 1789 log.go:172] (0xc000702a50) (0xc0008ea000) Stream added, broadcasting: 5\nI0421 13:49:14.067792 1789 log.go:172] (0xc000702a50) Reply frame received for 5\nI0421 13:49:14.149630 1789 log.go:172] (0xc000702a50) Data frame received for 5\nI0421 13:49:14.149696 1789 log.go:172] (0xc0008ea000) (5) Data frame handling\nI0421 13:49:14.149712 1789 log.go:172] (0xc0008ea000) (5) Data frame sent\nI0421 13:49:14.149721 1789 log.go:172] (0xc000702a50) Data frame received for 5\nI0421 13:49:14.149735 1789 log.go:172] (0xc0008ea000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0421 13:49:14.149780 1789 log.go:172] (0xc000702a50) Data frame received for 3\nI0421 13:49:14.149801 1789 log.go:172] (0xc0005ea3c0) (3) Data frame handling\nI0421 13:49:14.149814 1789 log.go:172] (0xc0005ea3c0) (3) Data frame sent\nI0421 13:49:14.149828 1789 log.go:172] (0xc000702a50) Data frame received for 3\nI0421 13:49:14.149834 1789 log.go:172] 
(0xc0005ea3c0) (3) Data frame handling\nI0421 13:49:14.150752 1789 log.go:172] (0xc000702a50) Data frame received for 1\nI0421 13:49:14.150774 1789 log.go:172] (0xc000504640) (1) Data frame handling\nI0421 13:49:14.150798 1789 log.go:172] (0xc000504640) (1) Data frame sent\nI0421 13:49:14.150822 1789 log.go:172] (0xc000702a50) (0xc000504640) Stream removed, broadcasting: 1\nI0421 13:49:14.150834 1789 log.go:172] (0xc000702a50) Go away received\nI0421 13:49:14.151157 1789 log.go:172] (0xc000702a50) (0xc000504640) Stream removed, broadcasting: 1\nI0421 13:49:14.151180 1789 log.go:172] (0xc000702a50) (0xc0005ea3c0) Stream removed, broadcasting: 3\nI0421 13:49:14.151187 1789 log.go:172] (0xc000702a50) (0xc0008ea000) Stream removed, broadcasting: 5\n" Apr 21 13:49:14.155: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 21 13:49:14.155: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 21 13:49:14.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8267 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 21 13:49:14.335: INFO: stderr: "I0421 13:49:14.272213 1810 log.go:172] (0xc000938370) (0xc0003f5400) Create stream\nI0421 13:49:14.272267 1810 log.go:172] (0xc000938370) (0xc0003f5400) Stream added, broadcasting: 1\nI0421 13:49:14.274023 1810 log.go:172] (0xc000938370) Reply frame received for 1\nI0421 13:49:14.274063 1810 log.go:172] (0xc000938370) (0xc00084d900) Create stream\nI0421 13:49:14.274078 1810 log.go:172] (0xc000938370) (0xc00084d900) Stream added, broadcasting: 3\nI0421 13:49:14.274857 1810 log.go:172] (0xc000938370) Reply frame received for 3\nI0421 13:49:14.274933 1810 log.go:172] (0xc000938370) (0xc0004540a0) Create stream\nI0421 13:49:14.274957 1810 log.go:172] (0xc000938370) (0xc0004540a0) Stream added, broadcasting: 5\nI0421 13:49:14.275711 1810 log.go:172] 
(0xc000938370) Reply frame received for 5\nI0421 13:49:14.327668 1810 log.go:172] (0xc000938370) Data frame received for 3\nI0421 13:49:14.327695 1810 log.go:172] (0xc00084d900) (3) Data frame handling\nI0421 13:49:14.327717 1810 log.go:172] (0xc000938370) Data frame received for 5\nI0421 13:49:14.327768 1810 log.go:172] (0xc0004540a0) (5) Data frame handling\nI0421 13:49:14.327789 1810 log.go:172] (0xc0004540a0) (5) Data frame sent\nI0421 13:49:14.327803 1810 log.go:172] (0xc000938370) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0421 13:49:14.327822 1810 log.go:172] (0xc00084d900) (3) Data frame sent\nI0421 13:49:14.327869 1810 log.go:172] (0xc0004540a0) (5) Data frame handling\nI0421 13:49:14.328209 1810 log.go:172] (0xc000938370) Data frame received for 3\nI0421 13:49:14.328225 1810 log.go:172] (0xc00084d900) (3) Data frame handling\nI0421 13:49:14.329849 1810 log.go:172] (0xc000938370) Data frame received for 1\nI0421 13:49:14.329876 1810 log.go:172] (0xc0003f5400) (1) Data frame handling\nI0421 13:49:14.329898 1810 log.go:172] (0xc0003f5400) (1) Data frame sent\nI0421 13:49:14.330311 1810 log.go:172] (0xc000938370) (0xc0003f5400) Stream removed, broadcasting: 1\nI0421 13:49:14.330366 1810 log.go:172] (0xc000938370) Go away received\nI0421 13:49:14.330700 1810 log.go:172] (0xc000938370) (0xc0003f5400) Stream removed, broadcasting: 1\nI0421 13:49:14.330722 1810 log.go:172] (0xc000938370) (0xc00084d900) Stream removed, broadcasting: 3\nI0421 13:49:14.330732 1810 log.go:172] (0xc000938370) (0xc0004540a0) Stream removed, broadcasting: 5\n" Apr 21 13:49:14.336: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 21 13:49:14.336: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 21 13:49:14.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8267 ss-2 -- /bin/sh -x -c mv 
-v /tmp/index.html /usr/share/nginx/html/ || true' Apr 21 13:49:14.553: INFO: stderr: "I0421 13:49:14.478545 1830 log.go:172] (0xc000b2e420) (0xc000ac0000) Create stream\nI0421 13:49:14.478604 1830 log.go:172] (0xc000b2e420) (0xc000ac0000) Stream added, broadcasting: 1\nI0421 13:49:14.482141 1830 log.go:172] (0xc000b2e420) Reply frame received for 1\nI0421 13:49:14.482210 1830 log.go:172] (0xc000b2e420) (0xc000516000) Create stream\nI0421 13:49:14.482242 1830 log.go:172] (0xc000b2e420) (0xc000516000) Stream added, broadcasting: 3\nI0421 13:49:14.483100 1830 log.go:172] (0xc000b2e420) Reply frame received for 3\nI0421 13:49:14.483158 1830 log.go:172] (0xc000b2e420) (0xc0007d8460) Create stream\nI0421 13:49:14.483175 1830 log.go:172] (0xc000b2e420) (0xc0007d8460) Stream added, broadcasting: 5\nI0421 13:49:14.484216 1830 log.go:172] (0xc000b2e420) Reply frame received for 5\nI0421 13:49:14.545799 1830 log.go:172] (0xc000b2e420) Data frame received for 3\nI0421 13:49:14.545844 1830 log.go:172] (0xc000516000) (3) Data frame handling\nI0421 13:49:14.545880 1830 log.go:172] (0xc000516000) (3) Data frame sent\nI0421 13:49:14.545898 1830 log.go:172] (0xc000b2e420) Data frame received for 3\nI0421 13:49:14.545912 1830 log.go:172] (0xc000516000) (3) Data frame handling\nI0421 13:49:14.546229 1830 log.go:172] (0xc000b2e420) Data frame received for 5\nI0421 13:49:14.546253 1830 log.go:172] (0xc0007d8460) (5) Data frame handling\nI0421 13:49:14.546273 1830 log.go:172] (0xc0007d8460) (5) Data frame sent\nI0421 13:49:14.546284 1830 log.go:172] (0xc000b2e420) Data frame received for 5\nI0421 13:49:14.546294 1830 log.go:172] (0xc0007d8460) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0421 13:49:14.547616 1830 log.go:172] (0xc000b2e420) Data frame received for 1\nI0421 13:49:14.547635 1830 log.go:172] (0xc000ac0000) (1) Data frame handling\nI0421 13:49:14.547652 1830 log.go:172] (0xc000ac0000) (1) Data frame sent\nI0421 13:49:14.547674 1830 log.go:172] 
(0xc000b2e420) (0xc000ac0000) Stream removed, broadcasting: 1\nI0421 13:49:14.547808 1830 log.go:172] (0xc000b2e420) Go away received\nI0421 13:49:14.548109 1830 log.go:172] (0xc000b2e420) (0xc000ac0000) Stream removed, broadcasting: 1\nI0421 13:49:14.548131 1830 log.go:172] (0xc000b2e420) (0xc000516000) Stream removed, broadcasting: 3\nI0421 13:49:14.548142 1830 log.go:172] (0xc000b2e420) (0xc0007d8460) Stream removed, broadcasting: 5\n" Apr 21 13:49:14.553: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 21 13:49:14.553: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 21 13:49:14.553: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 21 13:49:34.569: INFO: Deleting all statefulset in ns statefulset-8267 Apr 21 13:49:34.573: INFO: Scaling statefulset ss to 0 Apr 21 13:49:34.582: INFO: Waiting for statefulset status.replicas updated to 0 Apr 21 13:49:34.585: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:49:34.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8267" for this suite. 
Apr 21 13:49:40.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:49:40.692: INFO: namespace statefulset-8267 deletion completed in 6.086674916s
• [SLOW TEST:88.296 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:49:40.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 21 13:49:40.751: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ce14b51f-a09d-4f9b-9c96-11205e76a681" in namespace "projected-37" to be "success or failure"
Apr 21 13:49:40.755: INFO: Pod "downwardapi-volume-ce14b51f-a09d-4f9b-9c96-11205e76a681": Phase="Pending", Reason="", readiness=false. Elapsed: 3.769026ms
Apr 21 13:49:42.759: INFO: Pod "downwardapi-volume-ce14b51f-a09d-4f9b-9c96-11205e76a681": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007990527s
Apr 21 13:49:44.763: INFO: Pod "downwardapi-volume-ce14b51f-a09d-4f9b-9c96-11205e76a681": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011982681s
STEP: Saw pod success
Apr 21 13:49:44.763: INFO: Pod "downwardapi-volume-ce14b51f-a09d-4f9b-9c96-11205e76a681" satisfied condition "success or failure"
Apr 21 13:49:44.766: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-ce14b51f-a09d-4f9b-9c96-11205e76a681 container client-container:
STEP: delete the pod
Apr 21 13:49:44.803: INFO: Waiting for pod downwardapi-volume-ce14b51f-a09d-4f9b-9c96-11205e76a681 to disappear
Apr 21 13:49:44.814: INFO: Pod downwardapi-volume-ce14b51f-a09d-4f9b-9c96-11205e76a681 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:49:44.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-37" for this suite.
Apr 21 13:49:50.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:49:50.918: INFO: namespace projected-37 deletion completed in 6.100463408s
• [SLOW TEST:10.225 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:49:50.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:49:50.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9552" for this suite.
Apr 21 13:49:56.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:49:57.074: INFO: namespace services-9552 deletion completed in 6.087980189s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:6.156 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:49:57.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 21 13:50:00.154: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:50:00.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7229" for this suite.
Apr 21 13:50:06.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:50:06.279: INFO: namespace container-runtime-7229 deletion completed in 6.089679995s
• [SLOW TEST:9.204 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:50:06.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-5773 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 21 13:50:06.350: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 21 13:50:32.488: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.225:8080/dial?request=hostName&protocol=http&host=10.244.2.224&port=8080&tries=1'] Namespace:pod-network-test-5773 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 21 13:50:32.488: INFO: >>> kubeConfig: /root/.kube/config I0421 13:50:32.520429 6 log.go:172] (0xc00062a790) (0xc001ea48c0) Create stream I0421 13:50:32.520458 6 log.go:172] (0xc00062a790) (0xc001ea48c0) Stream added, broadcasting: 1 I0421 13:50:32.522579 6 log.go:172] (0xc00062a790) Reply frame received for 1 I0421 13:50:32.522635 6 log.go:172] (0xc00062a790) (0xc0011de960) Create stream I0421 13:50:32.522652 6 log.go:172] (0xc00062a790) (0xc0011de960) Stream added, broadcasting: 3 I0421 13:50:32.523561 6 log.go:172] (0xc00062a790) Reply frame received for 3 I0421 13:50:32.523584 6 log.go:172] (0xc00062a790) (0xc0011deb40) Create stream I0421 13:50:32.523592 6 log.go:172] (0xc00062a790) (0xc0011deb40) Stream added, broadcasting: 5 I0421 13:50:32.524599 6 log.go:172] (0xc00062a790) Reply frame received for 5 I0421 13:50:32.613837 6 log.go:172] (0xc00062a790) Data frame received for 3 I0421 13:50:32.613886 6 log.go:172] (0xc0011de960) (3) Data frame handling I0421 13:50:32.613920 6 log.go:172] (0xc0011de960) (3) Data frame sent I0421 13:50:32.614356 6 log.go:172] (0xc00062a790) Data frame received for 5 I0421 13:50:32.614384 6 log.go:172] (0xc0011deb40) (5) Data frame handling I0421 13:50:32.614410 6 log.go:172] (0xc00062a790) Data frame received for 3 I0421 13:50:32.614422 6 
log.go:172] (0xc0011de960) (3) Data frame handling I0421 13:50:32.616186 6 log.go:172] (0xc00062a790) Data frame received for 1 I0421 13:50:32.616207 6 log.go:172] (0xc001ea48c0) (1) Data frame handling I0421 13:50:32.616219 6 log.go:172] (0xc001ea48c0) (1) Data frame sent I0421 13:50:32.616521 6 log.go:172] (0xc00062a790) (0xc001ea48c0) Stream removed, broadcasting: 1 I0421 13:50:32.616570 6 log.go:172] (0xc00062a790) Go away received I0421 13:50:32.616637 6 log.go:172] (0xc00062a790) (0xc001ea48c0) Stream removed, broadcasting: 1 I0421 13:50:32.616649 6 log.go:172] (0xc00062a790) (0xc0011de960) Stream removed, broadcasting: 3 I0421 13:50:32.616657 6 log.go:172] (0xc00062a790) (0xc0011deb40) Stream removed, broadcasting: 5 Apr 21 13:50:32.616: INFO: Waiting for endpoints: map[] Apr 21 13:50:32.619: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.225:8080/dial?request=hostName&protocol=http&host=10.244.1.78&port=8080&tries=1'] Namespace:pod-network-test-5773 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 21 13:50:32.619: INFO: >>> kubeConfig: /root/.kube/config I0421 13:50:32.672280 6 log.go:172] (0xc0017149a0) (0xc0011dedc0) Create stream I0421 13:50:32.672314 6 log.go:172] (0xc0017149a0) (0xc0011dedc0) Stream added, broadcasting: 1 I0421 13:50:32.673957 6 log.go:172] (0xc0017149a0) Reply frame received for 1 I0421 13:50:32.673985 6 log.go:172] (0xc0017149a0) (0xc001f78140) Create stream I0421 13:50:32.673993 6 log.go:172] (0xc0017149a0) (0xc001f78140) Stream added, broadcasting: 3 I0421 13:50:32.674995 6 log.go:172] (0xc0017149a0) Reply frame received for 3 I0421 13:50:32.675058 6 log.go:172] (0xc0017149a0) (0xc0011df040) Create stream I0421 13:50:32.675078 6 log.go:172] (0xc0017149a0) (0xc0011df040) Stream added, broadcasting: 5 I0421 13:50:32.676189 6 log.go:172] (0xc0017149a0) Reply frame received for 5 I0421 13:50:32.736284 6 log.go:172] (0xc0017149a0) 
Data frame received for 3 I0421 13:50:32.736316 6 log.go:172] (0xc001f78140) (3) Data frame handling I0421 13:50:32.736347 6 log.go:172] (0xc001f78140) (3) Data frame sent I0421 13:50:32.736708 6 log.go:172] (0xc0017149a0) Data frame received for 5 I0421 13:50:32.736745 6 log.go:172] (0xc0011df040) (5) Data frame handling I0421 13:50:32.736766 6 log.go:172] (0xc0017149a0) Data frame received for 3 I0421 13:50:32.736783 6 log.go:172] (0xc001f78140) (3) Data frame handling I0421 13:50:32.738444 6 log.go:172] (0xc0017149a0) Data frame received for 1 I0421 13:50:32.738468 6 log.go:172] (0xc0011dedc0) (1) Data frame handling I0421 13:50:32.738483 6 log.go:172] (0xc0011dedc0) (1) Data frame sent I0421 13:50:32.738495 6 log.go:172] (0xc0017149a0) (0xc0011dedc0) Stream removed, broadcasting: 1 I0421 13:50:32.738559 6 log.go:172] (0xc0017149a0) (0xc0011dedc0) Stream removed, broadcasting: 1 I0421 13:50:32.738570 6 log.go:172] (0xc0017149a0) (0xc001f78140) Stream removed, broadcasting: 3 I0421 13:50:32.738580 6 log.go:172] (0xc0017149a0) (0xc0011df040) Stream removed, broadcasting: 5 Apr 21 13:50:32.738: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 I0421 13:50:32.738663 6 log.go:172] (0xc0017149a0) Go away received Apr 21 13:50:32.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5773" for this suite. 
Apr 21 13:50:54.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:50:54.836: INFO: namespace pod-network-test-5773 deletion completed in 22.094077104s
• [SLOW TEST:48.556 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:50:54.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:50:54.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1629" for this suite.
Apr 21 13:51:17.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:51:17.110: INFO: namespace pods-1629 deletion completed in 22.168969399s
• [SLOW TEST:22.274 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:51:17.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Apr 21 13:51:17.172: INFO: Waiting up to 5m0s for pod "pod-d7fd4668-b2dc-4b2e-a3ec-434ae240ad4f" in namespace "emptydir-4555" to be "success or failure"
Apr 21 13:51:17.181: INFO: Pod "pod-d7fd4668-b2dc-4b2e-a3ec-434ae240ad4f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.778486ms
Apr 21 13:51:19.375: INFO: Pod "pod-d7fd4668-b2dc-4b2e-a3ec-434ae240ad4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202915588s
Apr 21 13:51:21.379: INFO: Pod "pod-d7fd4668-b2dc-4b2e-a3ec-434ae240ad4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207095641s
Apr 21 13:51:23.384: INFO: Pod "pod-d7fd4668-b2dc-4b2e-a3ec-434ae240ad4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.21146429s
STEP: Saw pod success
Apr 21 13:51:23.384: INFO: Pod "pod-d7fd4668-b2dc-4b2e-a3ec-434ae240ad4f" satisfied condition "success or failure"
Apr 21 13:51:23.387: INFO: Trying to get logs from node iruya-worker2 pod pod-d7fd4668-b2dc-4b2e-a3ec-434ae240ad4f container test-container:
STEP: delete the pod
Apr 21 13:51:23.442: INFO: Waiting for pod pod-d7fd4668-b2dc-4b2e-a3ec-434ae240ad4f to disappear
Apr 21 13:51:23.451: INFO: Pod pod-d7fd4668-b2dc-4b2e-a3ec-434ae240ad4f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:51:23.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4555" for this suite.
Apr 21 13:51:29.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:51:29.543: INFO: namespace emptydir-4555 deletion completed in 6.088337755s
• [SLOW TEST:12.433 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:51:29.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:51:35.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7891" for this suite.
Apr 21 13:51:41.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:51:41.902: INFO: namespace namespaces-7891 deletion completed in 6.10511482s
STEP: Destroying namespace "nsdeletetest-8702" for this suite.
Apr 21 13:51:41.903: INFO: Namespace nsdeletetest-8702 was already deleted
STEP: Destroying namespace "nsdeletetest-4261" for this suite.
Apr 21 13:51:47.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:51:48.023: INFO: namespace nsdeletetest-4261 deletion completed in 6.11916853s
• [SLOW TEST:18.480 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:51:48.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-b6eb73b6-13ec-44fd-978f-564fac1725c9
STEP: Creating a pod to test consume configMaps
Apr 21 13:51:48.130: INFO: Waiting up to 5m0s for pod "pod-configmaps-7f58cbd1-63c5-460e-a7ed-2cdb24784ad7" in namespace "configmap-166" to be "success or failure"
Apr 21 13:51:48.133: INFO: Pod "pod-configmaps-7f58cbd1-63c5-460e-a7ed-2cdb24784ad7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.349187ms
Apr 21 13:51:50.138: INFO: Pod "pod-configmaps-7f58cbd1-63c5-460e-a7ed-2cdb24784ad7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007812987s
Apr 21 13:51:52.142: INFO: Pod "pod-configmaps-7f58cbd1-63c5-460e-a7ed-2cdb24784ad7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012042183s
STEP: Saw pod success
Apr 21 13:51:52.142: INFO: Pod "pod-configmaps-7f58cbd1-63c5-460e-a7ed-2cdb24784ad7" satisfied condition "success or failure"
Apr 21 13:51:52.145: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-7f58cbd1-63c5-460e-a7ed-2cdb24784ad7 container configmap-volume-test:
STEP: delete the pod
Apr 21 13:51:52.177: INFO: Waiting for pod pod-configmaps-7f58cbd1-63c5-460e-a7ed-2cdb24784ad7 to disappear
Apr 21 13:51:52.187: INFO: Pod pod-configmaps-7f58cbd1-63c5-460e-a7ed-2cdb24784ad7 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:51:52.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-166" for this suite.
Apr 21 13:51:58.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:51:58.294: INFO: namespace configmap-166 deletion completed in 6.104019595s
• [SLOW TEST:10.271 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:51:58.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2439
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 21 13:51:58.362: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 21 13:52:22.457: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.82:8080/dial?request=hostName&protocol=udp&host=10.244.1.81&port=8081&tries=1'] Namespace:pod-network-test-2439 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 21 13:52:22.458: INFO: >>> kubeConfig: /root/.kube/config I0421 13:52:22.486121 6 log.go:172] (0xc0009c80b0) (0xc0006545a0) Create stream I0421 13:52:22.486148 6 log.go:172] (0xc0009c80b0) (0xc0006545a0) Stream added, broadcasting: 1 I0421 13:52:22.487808 6 log.go:172] (0xc0009c80b0) Reply frame received for 1 I0421 13:52:22.487855 6 log.go:172] (0xc0009c80b0) (0xc00111a0a0) Create stream I0421 13:52:22.487872 6 log.go:172] (0xc0009c80b0) (0xc00111a0a0) Stream added, broadcasting: 3 I0421 13:52:22.488673 6 log.go:172] (0xc0009c80b0) Reply frame received for 3 I0421 13:52:22.488704 6 log.go:172] (0xc0009c80b0) (0xc000654a00) Create stream I0421 13:52:22.488718 6 log.go:172] (0xc0009c80b0) (0xc000654a00) Stream added, broadcasting: 5 I0421 13:52:22.489645 6 log.go:172] (0xc0009c80b0) Reply frame received for 5 I0421 13:52:22.588270 6 log.go:172] (0xc0009c80b0) Data frame received for 3 I0421 13:52:22.588311 6 log.go:172] (0xc00111a0a0) (3) Data frame handling I0421 13:52:22.588341 6 log.go:172] (0xc00111a0a0) (3) Data frame sent I0421 13:52:22.588904 6 log.go:172] (0xc0009c80b0) Data frame received for 3 I0421 13:52:22.588949 6 log.go:172] (0xc00111a0a0) (3) Data frame handling I0421 13:52:22.588972 6 log.go:172] (0xc0009c80b0) Data frame received for 5 I0421 13:52:22.588990 6 log.go:172] (0xc000654a00) (5) Data frame handling I0421 13:52:22.590874 6 log.go:172] (0xc0009c80b0) Data frame received for 1 I0421 13:52:22.590945 6 log.go:172] (0xc0006545a0) (1) Data frame handling I0421 13:52:22.590986 6 log.go:172] (0xc0006545a0) (1) Data frame sent I0421 13:52:22.591007 6 log.go:172] (0xc0009c80b0) (0xc0006545a0) Stream removed, broadcasting: 1 I0421 13:52:22.591025 6 log.go:172] (0xc0009c80b0) Go away received I0421 13:52:22.591091 6 log.go:172] (0xc0009c80b0) (0xc0006545a0) Stream removed, broadcasting: 1 I0421 13:52:22.591104 6 log.go:172] (0xc0009c80b0) (0xc00111a0a0) Stream removed, broadcasting: 3 I0421 13:52:22.591109 6 log.go:172] (0xc0009c80b0) 
(0xc000654a00) Stream removed, broadcasting: 5 Apr 21 13:52:22.591: INFO: Waiting for endpoints: map[] Apr 21 13:52:22.596: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.82:8080/dial?request=hostName&protocol=udp&host=10.244.2.227&port=8081&tries=1'] Namespace:pod-network-test-2439 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 21 13:52:22.596: INFO: >>> kubeConfig: /root/.kube/config I0421 13:52:22.621533 6 log.go:172] (0xc00062ae70) (0xc0010181e0) Create stream I0421 13:52:22.621562 6 log.go:172] (0xc00062ae70) (0xc0010181e0) Stream added, broadcasting: 1 I0421 13:52:22.623058 6 log.go:172] (0xc00062ae70) Reply frame received for 1 I0421 13:52:22.623105 6 log.go:172] (0xc00062ae70) (0xc00111a1e0) Create stream I0421 13:52:22.623123 6 log.go:172] (0xc00062ae70) (0xc00111a1e0) Stream added, broadcasting: 3 I0421 13:52:22.623881 6 log.go:172] (0xc00062ae70) Reply frame received for 3 I0421 13:52:22.623934 6 log.go:172] (0xc00062ae70) (0xc00111a280) Create stream I0421 13:52:22.623949 6 log.go:172] (0xc00062ae70) (0xc00111a280) Stream added, broadcasting: 5 I0421 13:52:22.624611 6 log.go:172] (0xc00062ae70) Reply frame received for 5 I0421 13:52:22.699695 6 log.go:172] (0xc00062ae70) Data frame received for 3 I0421 13:52:22.699741 6 log.go:172] (0xc00111a1e0) (3) Data frame handling I0421 13:52:22.699791 6 log.go:172] (0xc00111a1e0) (3) Data frame sent I0421 13:52:22.700152 6 log.go:172] (0xc00062ae70) Data frame received for 5 I0421 13:52:22.700184 6 log.go:172] (0xc00111a280) (5) Data frame handling I0421 13:52:22.700230 6 log.go:172] (0xc00062ae70) Data frame received for 3 I0421 13:52:22.700267 6 log.go:172] (0xc00111a1e0) (3) Data frame handling I0421 13:52:22.702240 6 log.go:172] (0xc00062ae70) Data frame received for 1 I0421 13:52:22.702258 6 log.go:172] (0xc0010181e0) (1) Data frame handling I0421 13:52:22.702310 6 log.go:172] (0xc0010181e0) (1) Data 
frame sent I0421 13:52:22.702388 6 log.go:172] (0xc00062ae70) (0xc0010181e0) Stream removed, broadcasting: 1 I0421 13:52:22.702483 6 log.go:172] (0xc00062ae70) (0xc0010181e0) Stream removed, broadcasting: 1 I0421 13:52:22.702494 6 log.go:172] (0xc00062ae70) (0xc00111a1e0) Stream removed, broadcasting: 3 I0421 13:52:22.702632 6 log.go:172] (0xc00062ae70) (0xc00111a280) Stream removed, broadcasting: 5 Apr 21 13:52:22.702: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 I0421 13:52:22.702765 6 log.go:172] (0xc00062ae70) Go away received Apr 21 13:52:22.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2439" for this suite. Apr 21 13:52:44.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:52:44.804: INFO: namespace pod-network-test-2439 deletion completed in 22.093580587s • [SLOW TEST:46.510 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:52:44.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Apr 21 13:52:48.934: INFO: Pod pod-hostip-c272e18a-f7fa-481b-b5fe-8096dc31bb6b has hostIP: 172.17.0.5 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:52:48.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3090" for this suite. Apr 21 13:53:10.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:53:11.028: INFO: namespace pods-3090 deletion completed in 22.090655883s • [SLOW TEST:26.223 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:53:11.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-560 I0421 13:53:11.064994 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-560, replica count: 1 I0421 13:53:12.115745 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0421 13:53:13.115954 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0421 13:53:14.116189 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 21 13:53:14.246: INFO: Created: latency-svc-mh8j2 Apr 21 13:53:14.261: INFO: Got endpoints: latency-svc-mh8j2 [44.644846ms] Apr 21 13:53:14.306: INFO: Created: latency-svc-wvhtv Apr 21 13:53:14.321: INFO: Got endpoints: latency-svc-wvhtv [59.608243ms] Apr 21 13:53:14.342: INFO: Created: latency-svc-shqv5 Apr 21 13:53:14.360: INFO: Got endpoints: latency-svc-shqv5 [98.616696ms] Apr 21 13:53:14.378: INFO: Created: latency-svc-bzpnm Apr 21 13:53:14.442: INFO: Got endpoints: latency-svc-bzpnm [180.936892ms] Apr 21 13:53:14.444: INFO: Created: latency-svc-j9w8z Apr 21 13:53:14.453: INFO: Got endpoints: latency-svc-j9w8z [191.992393ms] Apr 21 13:53:14.475: INFO: Created: latency-svc-s5p4d Apr 21 13:53:14.486: INFO: Got endpoints: latency-svc-s5p4d [224.705727ms] Apr 21 13:53:14.511: INFO: Created: latency-svc-5rwjq Apr 21 13:53:14.540: INFO: Got endpoints: latency-svc-5rwjq [279.238588ms] Apr 21 13:53:14.598: INFO: Created: latency-svc-tg9xj Apr 21 13:53:14.606: INFO: Got endpoints: latency-svc-tg9xj [345.012046ms] Apr 21 13:53:14.631: INFO: Created: latency-svc-b62ph Apr 21 13:53:14.646: INFO: Got endpoints: latency-svc-b62ph [384.907946ms] 
Apr 21 13:53:14.667: INFO: Created: latency-svc-bg2t2 Apr 21 13:53:14.682: INFO: Got endpoints: latency-svc-bg2t2 [420.868137ms] Apr 21 13:53:14.730: INFO: Created: latency-svc-6q59p Apr 21 13:53:14.733: INFO: Got endpoints: latency-svc-6q59p [472.207955ms] Apr 21 13:53:14.762: INFO: Created: latency-svc-lkjn4 Apr 21 13:53:14.786: INFO: Got endpoints: latency-svc-lkjn4 [524.943369ms] Apr 21 13:53:14.828: INFO: Created: latency-svc-6fgzj Apr 21 13:53:14.873: INFO: Got endpoints: latency-svc-6fgzj [612.218149ms] Apr 21 13:53:14.882: INFO: Created: latency-svc-22xwb Apr 21 13:53:14.899: INFO: Got endpoints: latency-svc-22xwb [637.613536ms] Apr 21 13:53:14.919: INFO: Created: latency-svc-44g7d Apr 21 13:53:14.935: INFO: Got endpoints: latency-svc-44g7d [674.021176ms] Apr 21 13:53:14.961: INFO: Created: latency-svc-qh89m Apr 21 13:53:15.017: INFO: Got endpoints: latency-svc-qh89m [755.471618ms] Apr 21 13:53:15.050: INFO: Created: latency-svc-sbwtk Apr 21 13:53:15.070: INFO: Got endpoints: latency-svc-sbwtk [748.739757ms] Apr 21 13:53:15.099: INFO: Created: latency-svc-fz8mk Apr 21 13:53:15.179: INFO: Got endpoints: latency-svc-fz8mk [818.966886ms] Apr 21 13:53:15.206: INFO: Created: latency-svc-mwjdn Apr 21 13:53:15.224: INFO: Got endpoints: latency-svc-mwjdn [781.784709ms] Apr 21 13:53:15.260: INFO: Created: latency-svc-ch6l8 Apr 21 13:53:15.278: INFO: Got endpoints: latency-svc-ch6l8 [824.649332ms] Apr 21 13:53:15.323: INFO: Created: latency-svc-jgv2m Apr 21 13:53:15.338: INFO: Got endpoints: latency-svc-jgv2m [852.496517ms] Apr 21 13:53:15.375: INFO: Created: latency-svc-ndbqw Apr 21 13:53:15.402: INFO: Got endpoints: latency-svc-ndbqw [861.419218ms] Apr 21 13:53:15.423: INFO: Created: latency-svc-4xbzm Apr 21 13:53:15.454: INFO: Got endpoints: latency-svc-4xbzm [847.46743ms] Apr 21 13:53:15.464: INFO: Created: latency-svc-bw4vc Apr 21 13:53:15.479: INFO: Got endpoints: latency-svc-bw4vc [832.763812ms] Apr 21 13:53:15.501: INFO: Created: latency-svc-6n7np Apr 21 
13:53:15.515: INFO: Got endpoints: latency-svc-6n7np [833.497692ms] Apr 21 13:53:15.537: INFO: Created: latency-svc-qm5cr Apr 21 13:53:15.545: INFO: Got endpoints: latency-svc-qm5cr [811.894296ms] Apr 21 13:53:15.593: INFO: Created: latency-svc-v2tmw Apr 21 13:53:15.600: INFO: Got endpoints: latency-svc-v2tmw [813.856142ms] Apr 21 13:53:15.627: INFO: Created: latency-svc-dpd22 Apr 21 13:53:15.642: INFO: Got endpoints: latency-svc-dpd22 [768.7919ms] Apr 21 13:53:15.668: INFO: Created: latency-svc-66z9h Apr 21 13:53:15.684: INFO: Got endpoints: latency-svc-66z9h [785.562019ms] Apr 21 13:53:15.737: INFO: Created: latency-svc-mxttj Apr 21 13:53:15.739: INFO: Got endpoints: latency-svc-mxttj [803.775184ms] Apr 21 13:53:15.783: INFO: Created: latency-svc-gx25f Apr 21 13:53:15.928: INFO: Got endpoints: latency-svc-gx25f [911.411001ms] Apr 21 13:53:15.976: INFO: Created: latency-svc-tb75h Apr 21 13:53:16.059: INFO: Got endpoints: latency-svc-tb75h [989.252683ms] Apr 21 13:53:16.089: INFO: Created: latency-svc-p6sv8 Apr 21 13:53:16.128: INFO: Got endpoints: latency-svc-p6sv8 [949.686502ms] Apr 21 13:53:16.148: INFO: Created: latency-svc-9lgnz Apr 21 13:53:16.220: INFO: Got endpoints: latency-svc-9lgnz [996.539003ms] Apr 21 13:53:16.223: INFO: Created: latency-svc-spkjc Apr 21 13:53:16.248: INFO: Got endpoints: latency-svc-spkjc [970.688495ms] Apr 21 13:53:16.299: INFO: Created: latency-svc-p6vgm Apr 21 13:53:16.314: INFO: Got endpoints: latency-svc-p6vgm [976.048062ms] Apr 21 13:53:16.364: INFO: Created: latency-svc-8wg92 Apr 21 13:53:16.368: INFO: Got endpoints: latency-svc-8wg92 [966.121439ms] Apr 21 13:53:16.389: INFO: Created: latency-svc-5wdvm Apr 21 13:53:16.419: INFO: Got endpoints: latency-svc-5wdvm [965.252252ms] Apr 21 13:53:16.455: INFO: Created: latency-svc-j9qgm Apr 21 13:53:16.489: INFO: Got endpoints: latency-svc-j9qgm [1.010653543s] Apr 21 13:53:16.533: INFO: Created: latency-svc-rhrsw Apr 21 13:53:16.549: INFO: Got endpoints: latency-svc-rhrsw 
[1.033836558s] Apr 21 13:53:16.576: INFO: Created: latency-svc-fq5d4 Apr 21 13:53:16.639: INFO: Got endpoints: latency-svc-fq5d4 [1.093964659s] Apr 21 13:53:16.641: INFO: Created: latency-svc-7s7zc Apr 21 13:53:16.645: INFO: Got endpoints: latency-svc-7s7zc [1.045265065s] Apr 21 13:53:16.671: INFO: Created: latency-svc-rdtn6 Apr 21 13:53:16.688: INFO: Got endpoints: latency-svc-rdtn6 [1.046123259s] Apr 21 13:53:16.712: INFO: Created: latency-svc-qf6xk Apr 21 13:53:16.725: INFO: Got endpoints: latency-svc-qf6xk [1.040776938s] Apr 21 13:53:16.783: INFO: Created: latency-svc-mk6mh Apr 21 13:53:16.790: INFO: Got endpoints: latency-svc-mk6mh [1.051297709s] Apr 21 13:53:16.809: INFO: Created: latency-svc-pph5v Apr 21 13:53:16.839: INFO: Got endpoints: latency-svc-pph5v [910.707453ms] Apr 21 13:53:16.863: INFO: Created: latency-svc-btdbc Apr 21 13:53:16.875: INFO: Got endpoints: latency-svc-btdbc [816.226633ms] Apr 21 13:53:16.927: INFO: Created: latency-svc-kkk94 Apr 21 13:53:16.930: INFO: Got endpoints: latency-svc-kkk94 [801.278127ms] Apr 21 13:53:16.960: INFO: Created: latency-svc-46ng5 Apr 21 13:53:16.978: INFO: Got endpoints: latency-svc-46ng5 [757.837855ms] Apr 21 13:53:17.007: INFO: Created: latency-svc-xjknl Apr 21 13:53:17.020: INFO: Got endpoints: latency-svc-xjknl [771.598926ms] Apr 21 13:53:17.055: INFO: Created: latency-svc-rswbr Apr 21 13:53:17.080: INFO: Got endpoints: latency-svc-rswbr [765.680277ms] Apr 21 13:53:17.109: INFO: Created: latency-svc-zztlr Apr 21 13:53:17.122: INFO: Got endpoints: latency-svc-zztlr [754.317444ms] Apr 21 13:53:17.196: INFO: Created: latency-svc-pzgsw Apr 21 13:53:17.199: INFO: Got endpoints: latency-svc-pzgsw [780.233815ms] Apr 21 13:53:17.253: INFO: Created: latency-svc-fhwf7 Apr 21 13:53:17.267: INFO: Got endpoints: latency-svc-fhwf7 [777.18233ms] Apr 21 13:53:17.289: INFO: Created: latency-svc-gtmk7 Apr 21 13:53:17.322: INFO: Got endpoints: latency-svc-gtmk7 [772.340438ms] Apr 21 13:53:17.338: INFO: Created: 
latency-svc-cpprg Apr 21 13:53:17.367: INFO: Got endpoints: latency-svc-cpprg [727.687035ms] Apr 21 13:53:17.399: INFO: Created: latency-svc-h7nx9 Apr 21 13:53:17.471: INFO: Got endpoints: latency-svc-h7nx9 [825.992627ms] Apr 21 13:53:17.487: INFO: Created: latency-svc-75vhj Apr 21 13:53:17.496: INFO: Got endpoints: latency-svc-75vhj [807.503951ms] Apr 21 13:53:17.517: INFO: Created: latency-svc-mklsw Apr 21 13:53:17.532: INFO: Got endpoints: latency-svc-mklsw [806.444012ms] Apr 21 13:53:17.554: INFO: Created: latency-svc-fx75f Apr 21 13:53:17.706: INFO: Got endpoints: latency-svc-fx75f [915.902186ms] Apr 21 13:53:17.708: INFO: Created: latency-svc-gczfd Apr 21 13:53:17.718: INFO: Got endpoints: latency-svc-gczfd [879.136914ms] Apr 21 13:53:17.740: INFO: Created: latency-svc-jstw4 Apr 21 13:53:17.754: INFO: Got endpoints: latency-svc-jstw4 [879.214605ms] Apr 21 13:53:17.855: INFO: Created: latency-svc-9g96k Apr 21 13:53:17.858: INFO: Got endpoints: latency-svc-9g96k [928.033267ms] Apr 21 13:53:17.895: INFO: Created: latency-svc-kwwzz Apr 21 13:53:17.905: INFO: Got endpoints: latency-svc-kwwzz [150.111158ms] Apr 21 13:53:17.925: INFO: Created: latency-svc-trdjd Apr 21 13:53:17.942: INFO: Got endpoints: latency-svc-trdjd [963.607856ms] Apr 21 13:53:18.011: INFO: Created: latency-svc-5wttj Apr 21 13:53:18.013: INFO: Got endpoints: latency-svc-5wttj [993.353169ms] Apr 21 13:53:18.063: INFO: Created: latency-svc-xg2jl Apr 21 13:53:18.074: INFO: Got endpoints: latency-svc-xg2jl [993.913313ms] Apr 21 13:53:18.106: INFO: Created: latency-svc-d6thm Apr 21 13:53:18.142: INFO: Got endpoints: latency-svc-d6thm [1.019455858s] Apr 21 13:53:18.153: INFO: Created: latency-svc-wvtv9 Apr 21 13:53:18.170: INFO: Got endpoints: latency-svc-wvtv9 [970.789146ms] Apr 21 13:53:18.211: INFO: Created: latency-svc-78znh Apr 21 13:53:18.218: INFO: Got endpoints: latency-svc-78znh [951.216076ms] Apr 21 13:53:18.280: INFO: Created: latency-svc-848gx Apr 21 13:53:18.283: INFO: Got endpoints: 
latency-svc-848gx [961.519203ms] Apr 21 13:53:18.309: INFO: Created: latency-svc-q5s5w Apr 21 13:53:18.334: INFO: Got endpoints: latency-svc-q5s5w [967.243893ms] Apr 21 13:53:18.363: INFO: Created: latency-svc-8q464 Apr 21 13:53:18.418: INFO: Got endpoints: latency-svc-8q464 [946.988683ms] Apr 21 13:53:18.428: INFO: Created: latency-svc-89g8n Apr 21 13:53:18.442: INFO: Got endpoints: latency-svc-89g8n [945.785928ms] Apr 21 13:53:18.466: INFO: Created: latency-svc-jfxpg Apr 21 13:53:18.478: INFO: Got endpoints: latency-svc-jfxpg [946.083904ms] Apr 21 13:53:18.501: INFO: Created: latency-svc-6w66n Apr 21 13:53:18.514: INFO: Got endpoints: latency-svc-6w66n [807.846543ms] Apr 21 13:53:18.556: INFO: Created: latency-svc-f6cpn Apr 21 13:53:18.559: INFO: Got endpoints: latency-svc-f6cpn [841.072911ms] Apr 21 13:53:18.592: INFO: Created: latency-svc-x8lq9 Apr 21 13:53:18.604: INFO: Got endpoints: latency-svc-x8lq9 [746.551158ms] Apr 21 13:53:18.633: INFO: Created: latency-svc-7dgjn Apr 21 13:53:18.647: INFO: Got endpoints: latency-svc-7dgjn [741.866996ms] Apr 21 13:53:18.687: INFO: Created: latency-svc-cv8rl Apr 21 13:53:18.691: INFO: Got endpoints: latency-svc-cv8rl [749.457175ms] Apr 21 13:53:18.717: INFO: Created: latency-svc-vvz9w Apr 21 13:53:18.731: INFO: Got endpoints: latency-svc-vvz9w [717.631083ms] Apr 21 13:53:18.754: INFO: Created: latency-svc-5mkmz Apr 21 13:53:18.767: INFO: Got endpoints: latency-svc-5mkmz [693.253648ms] Apr 21 13:53:18.819: INFO: Created: latency-svc-brzst Apr 21 13:53:18.823: INFO: Got endpoints: latency-svc-brzst [681.381279ms] Apr 21 13:53:18.849: INFO: Created: latency-svc-lv8wv Apr 21 13:53:18.864: INFO: Got endpoints: latency-svc-lv8wv [693.897004ms] Apr 21 13:53:18.885: INFO: Created: latency-svc-fplmv Apr 21 13:53:18.894: INFO: Got endpoints: latency-svc-fplmv [676.27056ms] Apr 21 13:53:18.916: INFO: Created: latency-svc-2m6ch Apr 21 13:53:18.950: INFO: Got endpoints: latency-svc-2m6ch [666.981509ms] Apr 21 13:53:18.982: INFO: 
Created: latency-svc-7wstq Apr 21 13:53:18.997: INFO: Got endpoints: latency-svc-7wstq [663.034352ms] Apr 21 13:53:19.095: INFO: Created: latency-svc-4xsmk Apr 21 13:53:19.099: INFO: Got endpoints: latency-svc-4xsmk [680.729499ms] Apr 21 13:53:19.131: INFO: Created: latency-svc-rz7kg Apr 21 13:53:19.141: INFO: Got endpoints: latency-svc-rz7kg [699.794541ms] Apr 21 13:53:19.161: INFO: Created: latency-svc-kcz4c Apr 21 13:53:19.191: INFO: Got endpoints: latency-svc-kcz4c [713.481967ms] Apr 21 13:53:19.275: INFO: Created: latency-svc-gpgnl Apr 21 13:53:19.279: INFO: Got endpoints: latency-svc-gpgnl [765.518276ms] Apr 21 13:53:19.329: INFO: Created: latency-svc-kwkj4 Apr 21 13:53:19.340: INFO: Got endpoints: latency-svc-kwkj4 [780.504724ms] Apr 21 13:53:19.365: INFO: Created: latency-svc-hl6m5 Apr 21 13:53:19.405: INFO: Got endpoints: latency-svc-hl6m5 [800.816656ms] Apr 21 13:53:19.414: INFO: Created: latency-svc-v7kwp Apr 21 13:53:19.431: INFO: Got endpoints: latency-svc-v7kwp [783.884452ms] Apr 21 13:53:19.454: INFO: Created: latency-svc-7d2dk Apr 21 13:53:19.466: INFO: Got endpoints: latency-svc-7d2dk [775.181016ms] Apr 21 13:53:19.491: INFO: Created: latency-svc-fllgd Apr 21 13:53:19.503: INFO: Got endpoints: latency-svc-fllgd [771.81312ms] Apr 21 13:53:19.562: INFO: Created: latency-svc-jbrkz Apr 21 13:53:19.565: INFO: Got endpoints: latency-svc-jbrkz [797.569246ms] Apr 21 13:53:19.611: INFO: Created: latency-svc-4zl99 Apr 21 13:53:19.624: INFO: Got endpoints: latency-svc-4zl99 [800.44248ms] Apr 21 13:53:19.641: INFO: Created: latency-svc-26gj7 Apr 21 13:53:19.673: INFO: Got endpoints: latency-svc-26gj7 [808.753966ms] Apr 21 13:53:19.705: INFO: Created: latency-svc-rvlxp Apr 21 13:53:19.707: INFO: Got endpoints: latency-svc-rvlxp [812.449555ms] Apr 21 13:53:19.730: INFO: Created: latency-svc-mnbhs Apr 21 13:53:19.744: INFO: Got endpoints: latency-svc-mnbhs [793.620947ms] Apr 21 13:53:19.767: INFO: Created: latency-svc-bz6m7 Apr 21 13:53:19.781: INFO: Got 
endpoints: latency-svc-bz6m7 [783.367942ms] Apr 21 13:53:19.803: INFO: Created: latency-svc-j5bsl Apr 21 13:53:19.842: INFO: Got endpoints: latency-svc-j5bsl [743.38647ms] Apr 21 13:53:19.851: INFO: Created: latency-svc-5stnd Apr 21 13:53:19.859: INFO: Got endpoints: latency-svc-5stnd [717.647309ms] Apr 21 13:53:19.887: INFO: Created: latency-svc-b8x5f Apr 21 13:53:19.895: INFO: Got endpoints: latency-svc-b8x5f [704.045528ms] Apr 21 13:53:19.917: INFO: Created: latency-svc-gd4xk Apr 21 13:53:19.919: INFO: Got endpoints: latency-svc-gd4xk [639.805448ms] Apr 21 13:53:19.981: INFO: Created: latency-svc-7ts8b Apr 21 13:53:19.984: INFO: Got endpoints: latency-svc-7ts8b [644.181319ms] Apr 21 13:53:20.019: INFO: Created: latency-svc-7n929 Apr 21 13:53:20.034: INFO: Got endpoints: latency-svc-7n929 [628.58684ms] Apr 21 13:53:20.067: INFO: Created: latency-svc-7dcnw Apr 21 13:53:20.146: INFO: Got endpoints: latency-svc-7dcnw [715.058834ms] Apr 21 13:53:20.181: INFO: Created: latency-svc-96qxp Apr 21 13:53:20.197: INFO: Got endpoints: latency-svc-96qxp [730.707753ms] Apr 21 13:53:20.217: INFO: Created: latency-svc-kr9lv Apr 21 13:53:20.250: INFO: Got endpoints: latency-svc-kr9lv [747.255267ms] Apr 21 13:53:20.260: INFO: Created: latency-svc-5l8px Apr 21 13:53:20.289: INFO: Got endpoints: latency-svc-5l8px [723.556717ms] Apr 21 13:53:20.325: INFO: Created: latency-svc-cwwgr Apr 21 13:53:20.335: INFO: Got endpoints: latency-svc-cwwgr [711.492459ms] Apr 21 13:53:20.382: INFO: Created: latency-svc-v7nvq Apr 21 13:53:20.385: INFO: Got endpoints: latency-svc-v7nvq [712.098096ms] Apr 21 13:53:20.434: INFO: Created: latency-svc-nt9bg Apr 21 13:53:20.450: INFO: Got endpoints: latency-svc-nt9bg [743.202631ms] Apr 21 13:53:20.469: INFO: Created: latency-svc-lblkn Apr 21 13:53:20.531: INFO: Got endpoints: latency-svc-lblkn [786.974245ms] Apr 21 13:53:20.533: INFO: Created: latency-svc-hdj5x Apr 21 13:53:20.540: INFO: Got endpoints: latency-svc-hdj5x [759.425707ms] Apr 21 13:53:20.577: 
INFO: Created: latency-svc-2tvck Apr 21 13:53:20.589: INFO: Got endpoints: latency-svc-2tvck [746.299773ms] Apr 21 13:53:20.613: INFO: Created: latency-svc-hbp7m Apr 21 13:53:20.693: INFO: Got endpoints: latency-svc-hbp7m [833.712063ms] Apr 21 13:53:20.695: INFO: Created: latency-svc-6sdqc Apr 21 13:53:20.703: INFO: Got endpoints: latency-svc-6sdqc [807.912236ms] Apr 21 13:53:20.738: INFO: Created: latency-svc-zcq5j Apr 21 13:53:20.752: INFO: Got endpoints: latency-svc-zcq5j [832.245855ms] Apr 21 13:53:20.787: INFO: Created: latency-svc-mx5d5 Apr 21 13:53:20.819: INFO: Got endpoints: latency-svc-mx5d5 [834.612569ms] Apr 21 13:53:20.829: INFO: Created: latency-svc-xdvpg Apr 21 13:53:20.842: INFO: Got endpoints: latency-svc-xdvpg [808.040294ms] Apr 21 13:53:20.865: INFO: Created: latency-svc-4whbs Apr 21 13:53:20.878: INFO: Got endpoints: latency-svc-4whbs [732.62211ms] Apr 21 13:53:20.895: INFO: Created: latency-svc-85kxn Apr 21 13:53:20.909: INFO: Got endpoints: latency-svc-85kxn [711.305827ms] Apr 21 13:53:20.951: INFO: Created: latency-svc-bgmp2 Apr 21 13:53:20.978: INFO: Got endpoints: latency-svc-bgmp2 [727.931631ms] Apr 21 13:53:20.979: INFO: Created: latency-svc-nktds Apr 21 13:53:21.045: INFO: Got endpoints: latency-svc-nktds [756.169166ms] Apr 21 13:53:21.107: INFO: Created: latency-svc-sxkcw Apr 21 13:53:21.108: INFO: Got endpoints: latency-svc-sxkcw [773.185725ms] Apr 21 13:53:21.135: INFO: Created: latency-svc-wlkk2 Apr 21 13:53:21.165: INFO: Got endpoints: latency-svc-wlkk2 [779.852055ms] Apr 21 13:53:21.189: INFO: Created: latency-svc-t9f22 Apr 21 13:53:21.268: INFO: Got endpoints: latency-svc-t9f22 [817.699917ms] Apr 21 13:53:21.271: INFO: Created: latency-svc-9ngpk Apr 21 13:53:21.276: INFO: Got endpoints: latency-svc-9ngpk [744.482925ms] Apr 21 13:53:21.310: INFO: Created: latency-svc-dlb4r Apr 21 13:53:21.324: INFO: Got endpoints: latency-svc-dlb4r [783.927749ms] Apr 21 13:53:21.345: INFO: Created: latency-svc-ktscc Apr 21 13:53:21.354: INFO: Got 
endpoints: latency-svc-ktscc [765.493747ms] Apr 21 13:53:21.413: INFO: Created: latency-svc-ct4nd Apr 21 13:53:21.417: INFO: Got endpoints: latency-svc-ct4nd [724.0686ms] Apr 21 13:53:21.441: INFO: Created: latency-svc-2b9r4 Apr 21 13:53:21.457: INFO: Got endpoints: latency-svc-2b9r4 [753.790538ms] Apr 21 13:53:21.478: INFO: Created: latency-svc-dbj5n Apr 21 13:53:21.493: INFO: Got endpoints: latency-svc-dbj5n [741.479629ms] Apr 21 13:53:21.555: INFO: Created: latency-svc-wsxdq Apr 21 13:53:21.559: INFO: Got endpoints: latency-svc-wsxdq [740.465439ms] Apr 21 13:53:21.590: INFO: Created: latency-svc-z5jxh Apr 21 13:53:21.602: INFO: Got endpoints: latency-svc-z5jxh [759.785823ms] Apr 21 13:53:21.626: INFO: Created: latency-svc-6fssq Apr 21 13:53:21.638: INFO: Got endpoints: latency-svc-6fssq [759.646211ms] Apr 21 13:53:21.693: INFO: Created: latency-svc-fxqlf Apr 21 13:53:21.698: INFO: Got endpoints: latency-svc-fxqlf [789.140471ms] Apr 21 13:53:21.718: INFO: Created: latency-svc-t8fll Apr 21 13:53:21.728: INFO: Got endpoints: latency-svc-t8fll [750.093189ms] Apr 21 13:53:21.748: INFO: Created: latency-svc-bmkmq Apr 21 13:53:21.759: INFO: Got endpoints: latency-svc-bmkmq [713.799778ms] Apr 21 13:53:21.776: INFO: Created: latency-svc-ttclt Apr 21 13:53:21.789: INFO: Got endpoints: latency-svc-ttclt [680.789104ms] Apr 21 13:53:21.837: INFO: Created: latency-svc-d7w99 Apr 21 13:53:21.878: INFO: Got endpoints: latency-svc-d7w99 [713.549355ms] Apr 21 13:53:21.879: INFO: Created: latency-svc-rwlc4 Apr 21 13:53:21.921: INFO: Got endpoints: latency-svc-rwlc4 [653.521245ms] Apr 21 13:53:21.993: INFO: Created: latency-svc-tnpjw Apr 21 13:53:22.004: INFO: Got endpoints: latency-svc-tnpjw [728.20586ms] Apr 21 13:53:22.035: INFO: Created: latency-svc-z5vj8 Apr 21 13:53:22.058: INFO: Got endpoints: latency-svc-z5vj8 [733.831627ms] Apr 21 13:53:22.137: INFO: Created: latency-svc-hh5jk Apr 21 13:53:22.155: INFO: Got endpoints: latency-svc-hh5jk [800.942122ms] Apr 21 13:53:22.184: 
INFO: Created: latency-svc-nj7lt Apr 21 13:53:22.199: INFO: Got endpoints: latency-svc-nj7lt [781.411581ms] Apr 21 13:53:22.221: INFO: Created: latency-svc-24gcj Apr 21 13:53:22.235: INFO: Got endpoints: latency-svc-24gcj [777.406725ms] Apr 21 13:53:22.292: INFO: Created: latency-svc-29ggw Apr 21 13:53:22.295: INFO: Got endpoints: latency-svc-29ggw [802.073199ms] Apr 21 13:53:22.323: INFO: Created: latency-svc-9hlj6 Apr 21 13:53:22.340: INFO: Got endpoints: latency-svc-9hlj6 [781.066758ms] Apr 21 13:53:22.365: INFO: Created: latency-svc-msqvn Apr 21 13:53:22.386: INFO: Got endpoints: latency-svc-msqvn [783.759296ms] Apr 21 13:53:22.455: INFO: Created: latency-svc-j6cdh Apr 21 13:53:22.472: INFO: Got endpoints: latency-svc-j6cdh [834.184415ms] Apr 21 13:53:22.903: INFO: Created: latency-svc-c8bdc Apr 21 13:53:22.916: INFO: Got endpoints: latency-svc-c8bdc [1.217917451s] Apr 21 13:53:23.484: INFO: Created: latency-svc-rj5nz Apr 21 13:53:23.503: INFO: Got endpoints: latency-svc-rj5nz [1.77469433s] Apr 21 13:53:23.534: INFO: Created: latency-svc-zr7jv Apr 21 13:53:23.564: INFO: Got endpoints: latency-svc-zr7jv [1.804790399s] Apr 21 13:53:23.639: INFO: Created: latency-svc-667sp Apr 21 13:53:23.651: INFO: Got endpoints: latency-svc-667sp [1.8616032s] Apr 21 13:53:23.678: INFO: Created: latency-svc-ldxpb Apr 21 13:53:23.694: INFO: Got endpoints: latency-svc-ldxpb [1.815238397s] Apr 21 13:53:23.714: INFO: Created: latency-svc-qljwz Apr 21 13:53:23.730: INFO: Got endpoints: latency-svc-qljwz [1.808483669s] Apr 21 13:53:23.777: INFO: Created: latency-svc-d7rft Apr 21 13:53:23.790: INFO: Got endpoints: latency-svc-d7rft [1.786257056s] Apr 21 13:53:23.821: INFO: Created: latency-svc-gwvf9 Apr 21 13:53:23.851: INFO: Got endpoints: latency-svc-gwvf9 [1.79240843s] Apr 21 13:53:23.933: INFO: Created: latency-svc-xtfvp Apr 21 13:53:23.959: INFO: Created: latency-svc-j7cgx Apr 21 13:53:23.959: INFO: Got endpoints: latency-svc-xtfvp [1.803827767s] Apr 21 13:53:23.976: INFO: Got 
endpoints: latency-svc-j7cgx [1.7778084s] Apr 21 13:53:23.994: INFO: Created: latency-svc-26g4w Apr 21 13:53:24.006: INFO: Got endpoints: latency-svc-26g4w [1.771694677s] Apr 21 13:53:24.031: INFO: Created: latency-svc-rnnzw Apr 21 13:53:24.094: INFO: Got endpoints: latency-svc-rnnzw [1.798922091s] Apr 21 13:53:24.096: INFO: Created: latency-svc-v8mlb Apr 21 13:53:24.103: INFO: Got endpoints: latency-svc-v8mlb [1.762368521s] Apr 21 13:53:24.140: INFO: Created: latency-svc-pk8sp Apr 21 13:53:24.151: INFO: Got endpoints: latency-svc-pk8sp [1.765665214s] Apr 21 13:53:24.187: INFO: Created: latency-svc-6pm6g Apr 21 13:53:24.227: INFO: Got endpoints: latency-svc-6pm6g [1.754787361s] Apr 21 13:53:24.241: INFO: Created: latency-svc-drsr4 Apr 21 13:53:24.266: INFO: Got endpoints: latency-svc-drsr4 [1.349766493s] Apr 21 13:53:24.314: INFO: Created: latency-svc-tvxnn Apr 21 13:53:24.376: INFO: Got endpoints: latency-svc-tvxnn [872.486175ms] Apr 21 13:53:24.381: INFO: Created: latency-svc-ch8mp Apr 21 13:53:24.386: INFO: Got endpoints: latency-svc-ch8mp [822.445905ms] Apr 21 13:53:24.415: INFO: Created: latency-svc-jlqzp Apr 21 13:53:24.441: INFO: Got endpoints: latency-svc-jlqzp [790.224646ms] Apr 21 13:53:24.463: INFO: Created: latency-svc-zk8np Apr 21 13:53:24.501: INFO: Got endpoints: latency-svc-zk8np [807.517734ms] Apr 21 13:53:24.517: INFO: Created: latency-svc-mvnk5 Apr 21 13:53:24.525: INFO: Got endpoints: latency-svc-mvnk5 [795.380321ms] Apr 21 13:53:24.547: INFO: Created: latency-svc-5ctvr Apr 21 13:53:24.562: INFO: Got endpoints: latency-svc-5ctvr [771.744346ms] Apr 21 13:53:24.584: INFO: Created: latency-svc-qkv4k Apr 21 13:53:24.598: INFO: Got endpoints: latency-svc-qkv4k [747.631289ms] Apr 21 13:53:24.640: INFO: Created: latency-svc-88sxn Apr 21 13:53:24.642: INFO: Got endpoints: latency-svc-88sxn [682.823337ms] Apr 21 13:53:24.679: INFO: Created: latency-svc-jmr99 Apr 21 13:53:24.688: INFO: Got endpoints: latency-svc-jmr99 [712.011015ms] Apr 21 13:53:24.715: 
INFO: Created: latency-svc-8s4x2 Apr 21 13:53:25.155: INFO: Got endpoints: latency-svc-8s4x2 [1.148032331s] Apr 21 13:53:25.160: INFO: Created: latency-svc-vpd4x Apr 21 13:53:25.622: INFO: Got endpoints: latency-svc-vpd4x [1.528028356s] Apr 21 13:53:25.663: INFO: Created: latency-svc-zgtvr Apr 21 13:53:25.678: INFO: Got endpoints: latency-svc-zgtvr [1.575126507s] Apr 21 13:53:25.699: INFO: Created: latency-svc-2rs4t Apr 21 13:53:25.714: INFO: Got endpoints: latency-svc-2rs4t [1.562097735s] Apr 21 13:53:25.801: INFO: Created: latency-svc-f24wq Apr 21 13:53:25.842: INFO: Got endpoints: latency-svc-f24wq [1.614888788s] Apr 21 13:53:25.844: INFO: Created: latency-svc-z42hh Apr 21 13:53:25.870: INFO: Got endpoints: latency-svc-z42hh [1.604321706s] Apr 21 13:53:25.890: INFO: Created: latency-svc-rlp2d Apr 21 13:53:25.962: INFO: Got endpoints: latency-svc-rlp2d [1.586577084s] Apr 21 13:53:25.965: INFO: Created: latency-svc-wtbj4 Apr 21 13:53:25.972: INFO: Got endpoints: latency-svc-wtbj4 [1.585859943s] Apr 21 13:53:26.019: INFO: Created: latency-svc-x4kq8 Apr 21 13:53:26.033: INFO: Got endpoints: latency-svc-x4kq8 [1.591594044s] Apr 21 13:53:26.058: INFO: Created: latency-svc-rsc64 Apr 21 13:53:26.118: INFO: Got endpoints: latency-svc-rsc64 [1.61690679s] Apr 21 13:53:26.121: INFO: Created: latency-svc-ff2s5 Apr 21 13:53:26.484: INFO: Got endpoints: latency-svc-ff2s5 [1.95840015s] Apr 21 13:53:26.507: INFO: Created: latency-svc-wngqx Apr 21 13:53:26.964: INFO: Got endpoints: latency-svc-wngqx [2.402008258s] Apr 21 13:53:27.006: INFO: Created: latency-svc-kvf2r Apr 21 13:53:27.031: INFO: Got endpoints: latency-svc-kvf2r [2.432184674s] Apr 21 13:53:27.052: INFO: Created: latency-svc-s7496 Apr 21 13:53:27.130: INFO: Got endpoints: latency-svc-s7496 [2.488151315s] Apr 21 13:53:27.132: INFO: Created: latency-svc-m96h5 Apr 21 13:53:27.136: INFO: Got endpoints: latency-svc-m96h5 [2.447838341s] Apr 21 13:53:27.161: INFO: Created: latency-svc-q2bvq Apr 21 13:53:27.173: INFO: Got 
endpoints: latency-svc-q2bvq [2.018516021s] Apr 21 13:53:27.190: INFO: Created: latency-svc-2hgbn Apr 21 13:53:27.210: INFO: Got endpoints: latency-svc-2hgbn [1.587216218s] Apr 21 13:53:27.287: INFO: Created: latency-svc-lrk97 Apr 21 13:53:27.290: INFO: Got endpoints: latency-svc-lrk97 [1.612102854s] Apr 21 13:53:27.311: INFO: Created: latency-svc-nd8x8 Apr 21 13:53:27.335: INFO: Got endpoints: latency-svc-nd8x8 [1.62130741s] Apr 21 13:53:27.373: INFO: Created: latency-svc-cpvkj Apr 21 13:53:27.385: INFO: Got endpoints: latency-svc-cpvkj [1.542726883s] Apr 21 13:53:27.430: INFO: Created: latency-svc-tzs67 Apr 21 13:53:27.454: INFO: Got endpoints: latency-svc-tzs67 [1.583902856s] Apr 21 13:53:27.455: INFO: Created: latency-svc-7rf6j Apr 21 13:53:27.465: INFO: Got endpoints: latency-svc-7rf6j [1.502451621s] Apr 21 13:53:27.465: INFO: Latencies: [59.608243ms 98.616696ms 150.111158ms 180.936892ms 191.992393ms 224.705727ms 279.238588ms 345.012046ms 384.907946ms 420.868137ms 472.207955ms 524.943369ms 612.218149ms 628.58684ms 637.613536ms 639.805448ms 644.181319ms 653.521245ms 663.034352ms 666.981509ms 674.021176ms 676.27056ms 680.729499ms 680.789104ms 681.381279ms 682.823337ms 693.253648ms 693.897004ms 699.794541ms 704.045528ms 711.305827ms 711.492459ms 712.011015ms 712.098096ms 713.481967ms 713.549355ms 713.799778ms 715.058834ms 717.631083ms 717.647309ms 723.556717ms 724.0686ms 727.687035ms 727.931631ms 728.20586ms 730.707753ms 732.62211ms 733.831627ms 740.465439ms 741.479629ms 741.866996ms 743.202631ms 743.38647ms 744.482925ms 746.299773ms 746.551158ms 747.255267ms 747.631289ms 748.739757ms 749.457175ms 750.093189ms 753.790538ms 754.317444ms 755.471618ms 756.169166ms 757.837855ms 759.425707ms 759.646211ms 759.785823ms 765.493747ms 765.518276ms 765.680277ms 768.7919ms 771.598926ms 771.744346ms 771.81312ms 772.340438ms 773.185725ms 775.181016ms 777.18233ms 777.406725ms 779.852055ms 780.233815ms 780.504724ms 781.066758ms 781.411581ms 781.784709ms 783.367942ms 783.759296ms 
783.884452ms 783.927749ms 785.562019ms 786.974245ms 789.140471ms 790.224646ms 793.620947ms 795.380321ms 797.569246ms 800.44248ms 800.816656ms 800.942122ms 801.278127ms 802.073199ms 803.775184ms 806.444012ms 807.503951ms 807.517734ms 807.846543ms 807.912236ms 808.040294ms 808.753966ms 811.894296ms 812.449555ms 813.856142ms 816.226633ms 817.699917ms 818.966886ms 822.445905ms 824.649332ms 825.992627ms 832.245855ms 832.763812ms 833.497692ms 833.712063ms 834.184415ms 834.612569ms 841.072911ms 847.46743ms 852.496517ms 861.419218ms 872.486175ms 879.136914ms 879.214605ms 910.707453ms 911.411001ms 915.902186ms 928.033267ms 945.785928ms 946.083904ms 946.988683ms 949.686502ms 951.216076ms 961.519203ms 963.607856ms 965.252252ms 966.121439ms 967.243893ms 970.688495ms 970.789146ms 976.048062ms 989.252683ms 993.353169ms 993.913313ms 996.539003ms 1.010653543s 1.019455858s 1.033836558s 1.040776938s 1.045265065s 1.046123259s 1.051297709s 1.093964659s 1.148032331s 1.217917451s 1.349766493s 1.502451621s 1.528028356s 1.542726883s 1.562097735s 1.575126507s 1.583902856s 1.585859943s 1.586577084s 1.587216218s 1.591594044s 1.604321706s 1.612102854s 1.614888788s 1.61690679s 1.62130741s 1.754787361s 1.762368521s 1.765665214s 1.771694677s 1.77469433s 1.7778084s 1.786257056s 1.79240843s 1.798922091s 1.803827767s 1.804790399s 1.808483669s 1.815238397s 1.8616032s 1.95840015s 2.018516021s 2.402008258s 2.432184674s 2.447838341s 2.488151315s] Apr 21 13:53:27.465: INFO: 50 %ile: 800.942122ms Apr 21 13:53:27.465: INFO: 90 %ile: 1.754787361s Apr 21 13:53:27.465: INFO: 99 %ile: 2.447838341s Apr 21 13:53:27.465: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:53:27.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-560" for this suite. 
Apr 21 13:53:55.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:53:55.558: INFO: namespace svc-latency-560 deletion completed in 28.089099985s
• [SLOW TEST:44.529 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:53:55.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-7hrw
STEP: Creating a pod to test atomic-volume-subpath
Apr 21 13:53:55.645: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7hrw" in namespace "subpath-8875" to be "success or failure"
Apr 21 13:53:55.669: INFO: Pod "pod-subpath-test-configmap-7hrw": Phase="Pending", Reason="", readiness=false. Elapsed: 24.047166ms
Apr 21 13:53:57.674: INFO: Pod "pod-subpath-test-configmap-7hrw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029071764s
Apr 21 13:53:59.706: INFO: Pod "pod-subpath-test-configmap-7hrw": Phase="Running", Reason="", readiness=true. Elapsed: 4.060642635s
Apr 21 13:54:01.710: INFO: Pod "pod-subpath-test-configmap-7hrw": Phase="Running", Reason="", readiness=true. Elapsed: 6.064951722s
Apr 21 13:54:03.714: INFO: Pod "pod-subpath-test-configmap-7hrw": Phase="Running", Reason="", readiness=true. Elapsed: 8.068625108s
Apr 21 13:54:05.717: INFO: Pod "pod-subpath-test-configmap-7hrw": Phase="Running", Reason="", readiness=true. Elapsed: 10.07215044s
Apr 21 13:54:07.721: INFO: Pod "pod-subpath-test-configmap-7hrw": Phase="Running", Reason="", readiness=true. Elapsed: 12.075945042s
Apr 21 13:54:09.725: INFO: Pod "pod-subpath-test-configmap-7hrw": Phase="Running", Reason="", readiness=true. Elapsed: 14.080494029s
Apr 21 13:54:11.729: INFO: Pod "pod-subpath-test-configmap-7hrw": Phase="Running", Reason="", readiness=true. Elapsed: 16.08400354s
Apr 21 13:54:13.733: INFO: Pod "pod-subpath-test-configmap-7hrw": Phase="Running", Reason="", readiness=true. Elapsed: 18.087871681s
Apr 21 13:54:15.736: INFO: Pod "pod-subpath-test-configmap-7hrw": Phase="Running", Reason="", readiness=true. Elapsed: 20.090949548s
Apr 21 13:54:17.747: INFO: Pod "pod-subpath-test-configmap-7hrw": Phase="Running", Reason="", readiness=true. Elapsed: 22.10182808s
Apr 21 13:54:19.751: INFO: Pod "pod-subpath-test-configmap-7hrw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.10624475s
STEP: Saw pod success
Apr 21 13:54:19.751: INFO: Pod "pod-subpath-test-configmap-7hrw" satisfied condition "success or failure"
Apr 21 13:54:19.754: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-7hrw container test-container-subpath-configmap-7hrw:
STEP: delete the pod
Apr 21 13:54:19.772: INFO: Waiting for pod pod-subpath-test-configmap-7hrw to disappear
Apr 21 13:54:19.776: INFO: Pod pod-subpath-test-configmap-7hrw no longer exists
STEP: Deleting pod pod-subpath-test-configmap-7hrw
Apr 21 13:54:19.777: INFO: Deleting pod "pod-subpath-test-configmap-7hrw" in namespace "subpath-8875"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:54:19.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8875" for this suite.
Apr 21 13:54:25.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:54:25.920: INFO: namespace subpath-8875 deletion completed in 6.130816033s
• [SLOW TEST:30.361 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:54:25.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Apr 21 13:54:26.001: INFO: Waiting up to 5m0s for pod "client-containers-e1e9960e-63f0-4eba-8809-941ba5511486" in namespace "containers-5308" to be "success or failure"
Apr 21 13:54:26.005: INFO: Pod "client-containers-e1e9960e-63f0-4eba-8809-941ba5511486": Phase="Pending", Reason="", readiness=false. Elapsed: 3.734314ms
Apr 21 13:54:28.008: INFO: Pod "client-containers-e1e9960e-63f0-4eba-8809-941ba5511486": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007412069s
Apr 21 13:54:30.013: INFO: Pod "client-containers-e1e9960e-63f0-4eba-8809-941ba5511486": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012159264s
STEP: Saw pod success
Apr 21 13:54:30.013: INFO: Pod "client-containers-e1e9960e-63f0-4eba-8809-941ba5511486" satisfied condition "success or failure"
Apr 21 13:54:30.017: INFO: Trying to get logs from node iruya-worker2 pod client-containers-e1e9960e-63f0-4eba-8809-941ba5511486 container test-container:
STEP: delete the pod
Apr 21 13:54:30.035: INFO: Waiting for pod client-containers-e1e9960e-63f0-4eba-8809-941ba5511486 to disappear
Apr 21 13:54:30.040: INFO: Pod client-containers-e1e9960e-63f0-4eba-8809-941ba5511486 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:54:30.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5308" for this suite.
Apr 21 13:54:36.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:54:36.208: INFO: namespace containers-5308 deletion completed in 6.16476674s
• [SLOW TEST:10.288 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:54:36.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 21 13:54:36.293: INFO: Create a RollingUpdate DaemonSet
Apr 21 13:54:36.296: INFO: Check that daemon pods launch on every node of the cluster
Apr 21 13:54:36.304: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 21 13:54:36.309: INFO: Number of nodes with available pods: 0
Apr 21 13:54:36.309: INFO: Node iruya-worker is running more than one daemon pod
Apr 21 13:54:37.315: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 21 13:54:37.318: INFO: Number of nodes with available pods: 0
Apr 21 13:54:37.318: INFO: Node iruya-worker is running more than one daemon pod
Apr 21 13:54:38.314: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 21 13:54:38.318: INFO: Number of nodes with available pods: 0
Apr 21 13:54:38.318: INFO: Node iruya-worker is running more than one daemon pod
Apr 21 13:54:39.314: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 21 13:54:39.316: INFO: Number of nodes with available pods: 0
Apr 21 13:54:39.317: INFO: Node iruya-worker is running more than one daemon pod
Apr 21 13:54:40.313: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 21 13:54:40.316: INFO: Number of nodes with available pods: 2
Apr 21 13:54:40.316: INFO: Number of running nodes: 2, number of available pods: 2
Apr 21 13:54:40.316: INFO: Update the DaemonSet to trigger a rollout
Apr 21 13:54:40.322: INFO: Updating DaemonSet daemon-set
Apr 21 13:54:45.340: INFO: Roll back the DaemonSet before rollout is complete
Apr 21 13:54:45.347: INFO: Updating DaemonSet daemon-set
Apr 21 13:54:45.347: INFO: Make sure DaemonSet rollback is complete
Apr 21 13:54:45.352: INFO: Wrong image for pod: daemon-set-kjqkq. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr 21 13:54:45.352: INFO: Pod daemon-set-kjqkq is not available
Apr 21 13:54:45.372: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 21 13:54:46.377: INFO: Wrong image for pod: daemon-set-kjqkq. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr 21 13:54:46.377: INFO: Pod daemon-set-kjqkq is not available
Apr 21 13:54:46.382: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 21 13:54:47.376: INFO: Wrong image for pod: daemon-set-kjqkq. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr 21 13:54:47.376: INFO: Pod daemon-set-kjqkq is not available
Apr 21 13:54:47.380: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 21 13:54:48.377: INFO: Wrong image for pod: daemon-set-kjqkq. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr 21 13:54:48.377: INFO: Pod daemon-set-kjqkq is not available
Apr 21 13:54:48.381: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 21 13:54:49.376: INFO: Wrong image for pod: daemon-set-kjqkq. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr 21 13:54:49.377: INFO: Pod daemon-set-kjqkq is not available
Apr 21 13:54:49.380: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 21 13:54:50.377: INFO: Wrong image for pod: daemon-set-kjqkq. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr 21 13:54:50.377: INFO: Pod daemon-set-kjqkq is not available
Apr 21 13:54:50.381: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 21 13:54:51.376: INFO: Wrong image for pod: daemon-set-kjqkq. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr 21 13:54:51.376: INFO: Pod daemon-set-kjqkq is not available
Apr 21 13:54:51.381: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 21 13:54:52.376: INFO: Pod daemon-set-km645 is not available
Apr 21 13:54:52.379: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-840, will wait for the garbage collector to delete the pods
Apr 21 13:54:52.444: INFO: Deleting DaemonSet.extensions daemon-set took: 6.847797ms
Apr 21 13:54:52.744: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.22687ms
Apr 21 13:55:01.970: INFO: Number of nodes with available pods: 0
Apr 21 13:55:01.970: INFO: Number of running nodes: 0, number of available pods: 0
Apr 21 13:55:01.972: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-840/daemonsets","resourceVersion":"6648716"},"items":null}
Apr 21 13:55:01.975: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-840/pods","resourceVersion":"6648716"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:55:01.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-840" for this suite.
Apr 21 13:55:08.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:55:08.082: INFO: namespace daemonsets-840 deletion completed in 6.096135217s
• [SLOW TEST:31.874 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:55:08.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Apr 21 13:55:12.215: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-81d2900d-623d-42a7-9ece-b1a71db5e4c9,GenerateName:,Namespace:events-7924,SelfLink:/api/v1/namespaces/events-7924/pods/send-events-81d2900d-623d-42a7-9ece-b1a71db5e4c9,UID:d029e2a6-3452-4cc8-aead-d24a334c2b28,ResourceVersion:6648772,Generation:0,CreationTimestamp:2020-04-21 13:55:08 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 138920181,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wh48f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wh48f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-wh48f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00239b5d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00239b5f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:55:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:55:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:55:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 13:55:08 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.87,StartTime:2020-04-21 13:55:08 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-04-21 13:55:10 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://1462f267a4d8019091add4fffb016a878d23dd95421e5317e7f1e56835017d5d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
Apr 21 13:55:14.219: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Apr 21 13:55:16.223: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:55:16.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7924" for this suite.
Apr 21 13:55:54.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:55:54.356: INFO: namespace events-7924 deletion completed in 38.122601019s
• [SLOW TEST:46.273 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:55:54.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-2xgt
STEP: Creating a pod to test atomic-volume-subpath
Apr 21 13:55:54.446: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-2xgt" in namespace "subpath-1581" to be "success or failure"
Apr 21 13:55:54.467: INFO: Pod "pod-subpath-test-secret-2xgt": Phase="Pending", Reason="", readiness=false. Elapsed: 21.289224ms
Apr 21 13:55:56.471: INFO: Pod "pod-subpath-test-secret-2xgt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025795788s
Apr 21 13:55:58.476: INFO: Pod "pod-subpath-test-secret-2xgt": Phase="Running", Reason="", readiness=true. Elapsed: 4.030302106s
Apr 21 13:56:00.480: INFO: Pod "pod-subpath-test-secret-2xgt": Phase="Running", Reason="", readiness=true. Elapsed: 6.034669313s
Apr 21 13:56:02.485: INFO: Pod "pod-subpath-test-secret-2xgt": Phase="Running", Reason="", readiness=true. Elapsed: 8.039174355s
Apr 21 13:56:04.489: INFO: Pod "pod-subpath-test-secret-2xgt": Phase="Running", Reason="", readiness=true. Elapsed: 10.043562528s
Apr 21 13:56:06.493: INFO: Pod "pod-subpath-test-secret-2xgt": Phase="Running", Reason="", readiness=true. Elapsed: 12.046870973s
Apr 21 13:56:08.496: INFO: Pod "pod-subpath-test-secret-2xgt": Phase="Running", Reason="", readiness=true. Elapsed: 14.05063619s
Apr 21 13:56:10.501: INFO: Pod "pod-subpath-test-secret-2xgt": Phase="Running", Reason="", readiness=true. Elapsed: 16.055435362s
Apr 21 13:56:12.506: INFO: Pod "pod-subpath-test-secret-2xgt": Phase="Running", Reason="", readiness=true. Elapsed: 18.059925737s
Apr 21 13:56:14.509: INFO: Pod "pod-subpath-test-secret-2xgt": Phase="Running", Reason="", readiness=true. Elapsed: 20.063588902s
Apr 21 13:56:16.514: INFO: Pod "pod-subpath-test-secret-2xgt": Phase="Running", Reason="", readiness=true. Elapsed: 22.068096675s
Apr 21 13:56:18.523: INFO: Pod "pod-subpath-test-secret-2xgt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.076979188s
STEP: Saw pod success
Apr 21 13:56:18.523: INFO: Pod "pod-subpath-test-secret-2xgt" satisfied condition "success or failure"
Apr 21 13:56:18.525: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-2xgt container test-container-subpath-secret-2xgt:
STEP: delete the pod
Apr 21 13:56:18.546: INFO: Waiting for pod pod-subpath-test-secret-2xgt to disappear
Apr 21 13:56:18.550: INFO: Pod pod-subpath-test-secret-2xgt no longer exists
STEP: Deleting pod pod-subpath-test-secret-2xgt
Apr 21 13:56:18.550: INFO: Deleting pod "pod-subpath-test-secret-2xgt" in namespace "subpath-1581"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:56:18.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1581" for this suite.
Apr 21 13:56:24.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:56:24.667: INFO: namespace subpath-1581 deletion completed in 6.111009682s
• [SLOW TEST:30.310 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:56:24.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:57:24.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8677" for this suite.
Apr 21 13:57:46.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:57:46.871: INFO: namespace container-probe-8677 deletion completed in 22.08708576s
• [SLOW TEST:82.204 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:57:46.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Apr 21 13:57:46.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5767'
Apr 21 13:57:49.383: INFO: stderr: ""
Apr 21 13:57:49.383: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Apr 21 13:57:50.388: INFO: Selector matched 1 pods for map[app:redis]
Apr 21 13:57:50.388: INFO: Found 0 / 1
Apr 21 13:57:51.387: INFO: Selector matched 1 pods for map[app:redis]
Apr 21 13:57:51.388: INFO: Found 0 / 1
Apr 21 13:57:52.388: INFO: Selector matched 1 pods for map[app:redis]
Apr 21 13:57:52.388: INFO: Found 0 / 1
Apr 21 13:57:53.403: INFO: Selector matched 1 pods for map[app:redis]
Apr 21 13:57:53.403: INFO: Found 1 / 1
Apr 21 13:57:53.403: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Apr 21 13:57:53.406: INFO: Selector matched 1 pods for map[app:redis]
Apr 21 13:57:53.406: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Apr 21 13:57:53.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-v6hjq --namespace=kubectl-5767 -p {"metadata":{"annotations":{"x":"y"}}}'
Apr 21 13:57:53.507: INFO: stderr: ""
Apr 21 13:57:53.507: INFO: stdout: "pod/redis-master-v6hjq patched\n"
STEP: checking annotations
Apr 21 13:57:53.523: INFO: Selector matched 1 pods for map[app:redis]
Apr 21 13:57:53.523: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:57:53.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5767" for this suite.
Apr 21 13:58:15.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:58:15.619: INFO: namespace kubectl-5767 deletion completed in 22.093172383s
• [SLOW TEST:28.748 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl patch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:58:15.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 21 13:58:15.704: INFO: Waiting up to 5m0s for pod "downward-api-7f19daf8-d71c-4cf4-ab2a-34ff6b70caf2" in namespace "downward-api-8552" to be "success or failure"
Apr 21 13:58:15.719: INFO: Pod "downward-api-7f19daf8-d71c-4cf4-ab2a-34ff6b70caf2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.867353ms
Apr 21 13:58:17.723: INFO: Pod "downward-api-7f19daf8-d71c-4cf4-ab2a-34ff6b70caf2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018859689s
Apr 21 13:58:19.728: INFO: Pod "downward-api-7f19daf8-d71c-4cf4-ab2a-34ff6b70caf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023384707s
STEP: Saw pod success
Apr 21 13:58:19.728: INFO: Pod "downward-api-7f19daf8-d71c-4cf4-ab2a-34ff6b70caf2" satisfied condition "success or failure"
Apr 21 13:58:19.731: INFO: Trying to get logs from node iruya-worker2 pod downward-api-7f19daf8-d71c-4cf4-ab2a-34ff6b70caf2 container dapi-container:
STEP: delete the pod
Apr 21 13:58:19.758: INFO: Waiting for pod downward-api-7f19daf8-d71c-4cf4-ab2a-34ff6b70caf2 to disappear
Apr 21 13:58:19.762: INFO: Pod downward-api-7f19daf8-d71c-4cf4-ab2a-34ff6b70caf2 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 13:58:19.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8552" for this suite.
Apr 21 13:58:25.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 13:58:25.882: INFO: namespace downward-api-8552 deletion completed in 6.115412476s
• [SLOW TEST:10.262 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 13:58:25.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Apr 21 13:58:25.980: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1951,SelfLink:/api/v1/namespaces/watch-1951/configmaps/e2e-watch-test-label-changed,UID:4a293069-b3d5-4128-8719-47e4206a2917,ResourceVersion:6649287,Generation:0,CreationTimestamp:2020-04-21 13:58:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 21 13:58:25.980: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1951,SelfLink:/api/v1/namespaces/watch-1951/configmaps/e2e-watch-test-label-changed,UID:4a293069-b3d5-4128-8719-47e4206a2917,ResourceVersion:6649288,Generation:0,CreationTimestamp:2020-04-21 13:58:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Apr 21 13:58:25.980: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1951,SelfLink:/api/v1/namespaces/watch-1951/configmaps/e2e-watch-test-label-changed,UID:4a293069-b3d5-4128-8719-47e4206a2917,ResourceVersion:6649289,Generation:0,CreationTimestamp:2020-04-21 13:58:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Apr 21 13:58:36.004: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1951,SelfLink:/api/v1/namespaces/watch-1951/configmaps/e2e-watch-test-label-changed,UID:4a293069-b3d5-4128-8719-47e4206a2917,ResourceVersion:6649311,Generation:0,CreationTimestamp:2020-04-21 13:58:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 21 13:58:36.004: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1951,SelfLink:/api/v1/namespaces/watch-1951/configmaps/e2e-watch-test-label-changed,UID:4a293069-b3d5-4128-8719-47e4206a2917,ResourceVersion:6649312,Generation:0,CreationTimestamp:2020-04-21 13:58:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Apr 21 13:58:36.005: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1951,SelfLink:/api/v1/namespaces/watch-1951/configmaps/e2e-watch-test-label-changed,UID:4a293069-b3d5-4128-8719-47e4206a2917,ResourceVersion:6649313,Generation:0,CreationTimestamp:2020-04-21 13:58:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:58:36.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1951" for this suite. 
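The watch events logged above (ADDED, MODIFIED, DELETED when the label stops matching the selector, then ADDED and MODIFIED again once it is restored, and a final DELETED) follow directly from how a label-selector-scoped watch reports objects entering and leaving its view. A minimal illustrative sketch of that semantics — an in-memory model, not apiserver code — reproduces the same six-event sequence:

```python
# Illustrative model (not apiserver code): a watch scoped to a label
# selector emits DELETED when an object stops matching the selector,
# and ADDED when it matches again -- mirroring the log above.

class LabelWatch:
    def __init__(self, selector):
        self.selector = selector   # required label key/value pairs
        self.matching = {}         # objects currently visible to the watch
        self.events = []

    def observe(self, name, labels, data, deleted=False):
        matches = not deleted and all(
            labels.get(k) == v for k, v in self.selector.items())
        was_matching = name in self.matching
        if matches and not was_matching:
            self.events.append(("ADDED", dict(data)))
            self.matching[name] = dict(data)
        elif matches and was_matching:
            self.events.append(("MODIFIED", dict(data)))
            self.matching[name] = dict(data)
        elif not matches and was_matching:
            self.events.append(("DELETED", dict(data)))
            del self.matching[name]
        # neither matching before nor after: the watch sees nothing

# Replaying the test's steps yields the event types the log shows.
w = LabelWatch({"watch-this-configmap": "label-changed-and-restored"})
on = {"watch-this-configmap": "label-changed-and-restored"}
off = {"watch-this-configmap": "temporarily-off"}   # hypothetical value
w.observe("cm", on, {"mutation": "0"})             # create        -> ADDED
w.observe("cm", on, {"mutation": "1"})             # first modify  -> MODIFIED
w.observe("cm", off, {"mutation": "1"})            # label changed -> DELETED
w.observe("cm", off, {"mutation": "2"})            # second modify -> no event
w.observe("cm", on, {"mutation": "2"})             # label restored-> ADDED
w.observe("cm", on, {"mutation": "3"})             # third modify  -> MODIFIED
w.observe("cm", on, {"mutation": "3"}, deleted=True)  # delete     -> DELETED
print([t for t, _ in w.events])
# -> ['ADDED', 'MODIFIED', 'DELETED', 'ADDED', 'MODIFIED', 'DELETED']
```

The intermediate modification (mutation 2 while the label is off) produces no event, which is exactly what the test asserts with "Expecting not to observe a notification".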
Apr 21 13:58:42.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:58:42.099: INFO: namespace watch-1951 deletion completed in 6.089460235s • [SLOW TEST:16.216 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:58:42.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 21 13:58:42.171: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:58:49.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-685" for this suite. 
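The init-container test above verifies strictly ordered startup: init containers run one at a time, each must exit successfully before the next begins, and only then do the app containers start. A simplified sketch of that ordering (not kubelet code; container names and exit codes are hypothetical):

```python
# Illustrative sketch (not kubelet code): init containers run in order,
# each must exit 0 before the next starts.  With restartPolicy: Never,
# a failing init container fails the whole pod and the app never runs.

def run_pod(init_containers, app_containers, restart_policy="Never"):
    started = []
    for name, exit_code in init_containers:
        started.append(name)
        if exit_code != 0:
            if restart_policy == "Never":
                return started, "Failed"
            # restartPolicy Always/OnFailure would retry the init here
    started.extend(name for name, _ in app_containers)
    return started, "Succeeded"

order, phase = run_pod([("init1", 0), ("init2", 0)], [("run1", 0)])
print(order, phase)   # init1 and init2 complete before run1 starts
```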
Apr 21 13:58:55.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:58:55.425: INFO: namespace init-container-685 deletion completed in 6.085810113s • [SLOW TEST:13.326 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:58:55.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 21 13:58:55.500: INFO: Waiting up to 5m0s for pod "pod-db4601fe-37e1-43a9-8f6b-8a490fe58eb5" in namespace "emptydir-7547" to be "success or failure" Apr 21 13:58:55.505: INFO: Pod "pod-db4601fe-37e1-43a9-8f6b-8a490fe58eb5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.016452ms Apr 21 13:58:57.509: INFO: Pod "pod-db4601fe-37e1-43a9-8f6b-8a490fe58eb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008692147s Apr 21 13:58:59.512: INFO: Pod "pod-db4601fe-37e1-43a9-8f6b-8a490fe58eb5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011933126s STEP: Saw pod success Apr 21 13:58:59.512: INFO: Pod "pod-db4601fe-37e1-43a9-8f6b-8a490fe58eb5" satisfied condition "success or failure" Apr 21 13:58:59.514: INFO: Trying to get logs from node iruya-worker pod pod-db4601fe-37e1-43a9-8f6b-8a490fe58eb5 container test-container: STEP: delete the pod Apr 21 13:58:59.562: INFO: Waiting for pod pod-db4601fe-37e1-43a9-8f6b-8a490fe58eb5 to disappear Apr 21 13:58:59.619: INFO: Pod pod-db4601fe-37e1-43a9-8f6b-8a490fe58eb5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:58:59.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7547" for this suite. Apr 21 13:59:05.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:59:05.839: INFO: namespace emptydir-7547 deletion completed in 6.216138629s • [SLOW TEST:10.414 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:59:05.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward 
API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 21 13:59:05.934: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d1adb69d-9960-4919-bad2-1b70699c3200" in namespace "downward-api-4752" to be "success or failure" Apr 21 13:59:05.937: INFO: Pod "downwardapi-volume-d1adb69d-9960-4919-bad2-1b70699c3200": Phase="Pending", Reason="", readiness=false. Elapsed: 3.861028ms Apr 21 13:59:07.942: INFO: Pod "downwardapi-volume-d1adb69d-9960-4919-bad2-1b70699c3200": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00805163s Apr 21 13:59:09.946: INFO: Pod "downwardapi-volume-d1adb69d-9960-4919-bad2-1b70699c3200": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012012471s STEP: Saw pod success Apr 21 13:59:09.946: INFO: Pod "downwardapi-volume-d1adb69d-9960-4919-bad2-1b70699c3200" satisfied condition "success or failure" Apr 21 13:59:09.949: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-d1adb69d-9960-4919-bad2-1b70699c3200 container client-container: STEP: delete the pod Apr 21 13:59:09.975: INFO: Waiting for pod downwardapi-volume-d1adb69d-9960-4919-bad2-1b70699c3200 to disappear Apr 21 13:59:10.006: INFO: Pod downwardapi-volume-d1adb69d-9960-4919-bad2-1b70699c3200 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:59:10.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4752" for this suite. 
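The downward API volume test above exposes the container's CPU limit as a file. The value written is the resource quantity divided by the `resourceFieldRef` divisor, rounded up to an integer. A small sketch of that arithmetic (the quantities are hypothetical examples, not taken from the test pod):

```python
# Illustrative sketch: the downward API writes a resource limit divided
# by the resourceFieldRef divisor, rounded up to a whole number.
import math

def downward_resource(quantity_millis, divisor_millis):
    """Both arguments in milli-units: cpu limit 500m, divisor 1m -> "500"."""
    return str(math.ceil(quantity_millis / divisor_millis))

print(downward_resource(500, 1))      # limit 500m, divisor 1m  -> "500"
print(downward_resource(500, 1000))   # limit 500m, divisor "1" -> rounds up to "1"
```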
Apr 21 13:59:16.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:59:16.109: INFO: namespace downward-api-4752 deletion completed in 6.098938199s • [SLOW TEST:10.270 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:59:16.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-de4a8659-a848-4595-991c-8bfb0f4d312c STEP: Creating a pod to test consume secrets Apr 21 13:59:16.233: INFO: Waiting up to 5m0s for pod "pod-secrets-6488fd2c-71be-489d-9824-cefb17278ba1" in namespace "secrets-5704" to be "success or failure" Apr 21 13:59:16.251: INFO: Pod "pod-secrets-6488fd2c-71be-489d-9824-cefb17278ba1": Phase="Pending", Reason="", readiness=false. Elapsed: 18.243476ms Apr 21 13:59:18.255: INFO: Pod "pod-secrets-6488fd2c-71be-489d-9824-cefb17278ba1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.022059128s Apr 21 13:59:20.260: INFO: Pod "pod-secrets-6488fd2c-71be-489d-9824-cefb17278ba1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026721145s STEP: Saw pod success Apr 21 13:59:20.260: INFO: Pod "pod-secrets-6488fd2c-71be-489d-9824-cefb17278ba1" satisfied condition "success or failure" Apr 21 13:59:20.263: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-6488fd2c-71be-489d-9824-cefb17278ba1 container secret-volume-test: STEP: delete the pod Apr 21 13:59:20.285: INFO: Waiting for pod pod-secrets-6488fd2c-71be-489d-9824-cefb17278ba1 to disappear Apr 21 13:59:20.290: INFO: Pod pod-secrets-6488fd2c-71be-489d-9824-cefb17278ba1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 13:59:20.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5704" for this suite. Apr 21 13:59:26.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 13:59:26.381: INFO: namespace secrets-5704 deletion completed in 6.087118403s • [SLOW TEST:10.271 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 13:59:26.381: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-a60a1846-ad4e-47ac-867f-8a9c2f261196 in namespace container-probe-8088 Apr 21 13:59:30.486: INFO: Started pod busybox-a60a1846-ad4e-47ac-867f-8a9c2f261196 in namespace container-probe-8088 STEP: checking the pod's current state and verifying that restartCount is present Apr 21 13:59:30.490: INFO: Initial restart count of pod busybox-a60a1846-ad4e-47ac-867f-8a9c2f261196 is 0 Apr 21 14:00:16.599: INFO: Restart count of pod container-probe-8088/busybox-a60a1846-ad4e-47ac-867f-8a9c2f261196 is now 1 (46.108962285s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:00:16.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8088" for this suite. 
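The restart observed above (restart count 0 → 1 after ~46s) is driven by the exec liveness probe: the busybox pod creates `/tmp/health`, deletes it after a delay, and the kubelet restarts the container once `cat /tmp/health` has failed `failureThreshold` consecutive times. A simplified model of that loop (not kubelet code; the probe sequence below is a hypothetical example):

```python
# Illustrative sketch (not kubelet code): a container is restarted once
# its liveness probe fails failureThreshold consecutive times.

def liveness_restarts(probe_results, failure_threshold=3):
    restarts = 0
    consecutive_failures = 0
    for ok in probe_results:
        if ok:
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures >= failure_threshold:
                restarts += 1
                consecutive_failures = 0   # restarted container starts fresh
    return restarts

# /tmp/health exists at first ("cat" succeeds), then is removed:
print(liveness_restarts([True, True, True, False, False, False]))  # -> 1
```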
Apr 21 14:00:22.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:00:22.763: INFO: namespace container-probe-8088 deletion completed in 6.123087927s • [SLOW TEST:56.382 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:00:22.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-e15e5013-edb5-42c3-8656-dbc47948a7b5 STEP: Creating a pod to test consume configMaps Apr 21 14:00:22.818: INFO: Waiting up to 5m0s for pod "pod-configmaps-49b27e20-2122-4697-ae2c-d1d368a35c73" in namespace "configmap-6743" to be "success or failure" Apr 21 14:00:22.830: INFO: Pod "pod-configmaps-49b27e20-2122-4697-ae2c-d1d368a35c73": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.069668ms Apr 21 14:00:24.834: INFO: Pod "pod-configmaps-49b27e20-2122-4697-ae2c-d1d368a35c73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015437776s Apr 21 14:00:26.838: INFO: Pod "pod-configmaps-49b27e20-2122-4697-ae2c-d1d368a35c73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019987429s STEP: Saw pod success Apr 21 14:00:26.838: INFO: Pod "pod-configmaps-49b27e20-2122-4697-ae2c-d1d368a35c73" satisfied condition "success or failure" Apr 21 14:00:26.842: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-49b27e20-2122-4697-ae2c-d1d368a35c73 container configmap-volume-test: STEP: delete the pod Apr 21 14:00:26.898: INFO: Waiting for pod pod-configmaps-49b27e20-2122-4697-ae2c-d1d368a35c73 to disappear Apr 21 14:00:26.914: INFO: Pod pod-configmaps-49b27e20-2122-4697-ae2c-d1d368a35c73 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:00:26.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6743" for this suite. 
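The "mappings and Item mode set" variant above exercises the `items` field of a configMap volume: only the listed keys are projected, each to its chosen `path`, and an item-level `mode` overrides the volume's `defaultMode` for that file. A sketch of the projection (key, path, and mode values below are hypothetical):

```python
# Illustrative sketch: a configMap volume with "items" projects only the
# selected keys, at their mapped paths, with per-item file modes.

def project_configmap(data, items, default_mode=0o644):
    files = {}
    for item in items:
        files[item["path"]] = (data[item["key"]], item.get("mode", default_mode))
    return files

cm = {"data-1": "value-1", "data-2": "value-2"}
files = project_configmap(cm, [{"key": "data-1", "path": "path/to/data-2", "mode": 0o400}])
print(files)   # only the mapped key appears, at its mapped path, mode 0400
```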
Apr 21 14:00:32.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:00:32.993: INFO: namespace configmap-6743 deletion completed in 6.076084543s • [SLOW TEST:10.230 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:00:32.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 21 14:00:33.083: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9faa3c2e-e818-41cf-ae7a-e69ae6e1d6dd" in namespace "downward-api-2087" to be "success or failure" Apr 21 14:00:33.099: INFO: Pod "downwardapi-volume-9faa3c2e-e818-41cf-ae7a-e69ae6e1d6dd": Phase="Pending", Reason="", 
readiness=false. Elapsed: 15.939936ms Apr 21 14:00:35.104: INFO: Pod "downwardapi-volume-9faa3c2e-e818-41cf-ae7a-e69ae6e1d6dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020245454s Apr 21 14:00:37.108: INFO: Pod "downwardapi-volume-9faa3c2e-e818-41cf-ae7a-e69ae6e1d6dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024670926s STEP: Saw pod success Apr 21 14:00:37.108: INFO: Pod "downwardapi-volume-9faa3c2e-e818-41cf-ae7a-e69ae6e1d6dd" satisfied condition "success or failure" Apr 21 14:00:37.112: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-9faa3c2e-e818-41cf-ae7a-e69ae6e1d6dd container client-container: STEP: delete the pod Apr 21 14:00:37.143: INFO: Waiting for pod downwardapi-volume-9faa3c2e-e818-41cf-ae7a-e69ae6e1d6dd to disappear Apr 21 14:00:37.147: INFO: Pod downwardapi-volume-9faa3c2e-e818-41cf-ae7a-e69ae6e1d6dd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:00:37.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2087" for this suite. 
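The test above covers the fallback the downward API applies when a container declares no memory limit: the reported value defaults to the node's allocatable memory rather than being empty or zero. A one-function sketch of that rule (the byte quantities are hypothetical):

```python
# Illustrative sketch: with no container memory limit set, the downward
# API reports the node's allocatable memory instead.

def effective_memory_limit(container_limit, node_allocatable):
    return container_limit if container_limit is not None else node_allocatable

print(effective_memory_limit(None, 8 * 1024**3))           # no limit -> node allocatable
print(effective_memory_limit(256 * 1024**2, 8 * 1024**3))  # explicit limit wins
```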
Apr 21 14:00:43.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:00:43.238: INFO: namespace downward-api-2087 deletion completed in 6.085840472s • [SLOW TEST:10.244 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:00:43.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 21 14:00:47.843: INFO: Successfully updated pod "pod-update-816fe87c-38f0-4a65-83a5-90a7baf31475" STEP: verifying the updated pod is in kubernetes Apr 21 14:00:47.852: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:00:47.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6299" for this suite. 
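The pod-update test above succeeds because it changes only mutable fields: after creation, most of a pod's spec is immutable, and updates are limited to metadata (labels, annotations) plus a handful of spec fields such as container images. A deliberately simplified validation sketch (the field list below is an illustrative subset, not the apiserver's actual rule set):

```python
# Illustrative, simplified sketch (not apiserver validation): only a few
# pod fields may change on update; anything else is rejected.

MUTABLE_POD_FIELDS = {
    "metadata.labels",
    "metadata.annotations",
    "spec.containers.image",
    "spec.activeDeadlineSeconds",
}

def validate_pod_update(changed_fields):
    rejected = sorted(f for f in changed_fields if f not in MUTABLE_POD_FIELDS)
    return ("OK", []) if not rejected else ("Invalid", rejected)

print(validate_pod_update({"metadata.labels"}))   # the kind of update the test makes
print(validate_pod_update({"spec.nodeName"}))     # immutable -> rejected
```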
Apr 21 14:01:09.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:01:09.981: INFO: namespace pods-6299 deletion completed in 22.125624528s • [SLOW TEST:26.744 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:01:09.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-67094a10-713f-40ac-a01e-3d1fb497eaa4 in namespace container-probe-9248 Apr 21 14:01:14.068: INFO: Started pod liveness-67094a10-713f-40ac-a01e-3d1fb497eaa4 in namespace container-probe-9248 STEP: checking the pod's current state and verifying that restartCount is present Apr 21 14:01:14.076: INFO: Initial restart count of pod liveness-67094a10-713f-40ac-a01e-3d1fb497eaa4 is 0 Apr 21 14:01:30.114: INFO: Restart count of pod container-probe-9248/liveness-67094a10-713f-40ac-a01e-3d1fb497eaa4 is now 1 (16.038226736s elapsed) STEP: 
deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:01:30.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9248" for this suite. Apr 21 14:01:36.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:01:36.267: INFO: namespace container-probe-9248 deletion completed in 6.135668816s • [SLOW TEST:26.285 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:01:36.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-04217e87-4663-4d2b-8861-8c71d2be436b STEP: Creating a pod to test consume secrets Apr 21 14:01:36.372: INFO: Waiting up to 5m0s for pod "pod-secrets-37a8e441-b14d-46c1-9cdd-8559d807344d" in namespace "secrets-2673" to be "success or failure" Apr 21 14:01:36.378: INFO: Pod 
"pod-secrets-37a8e441-b14d-46c1-9cdd-8559d807344d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.357499ms Apr 21 14:01:38.381: INFO: Pod "pod-secrets-37a8e441-b14d-46c1-9cdd-8559d807344d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008488356s Apr 21 14:01:40.385: INFO: Pod "pod-secrets-37a8e441-b14d-46c1-9cdd-8559d807344d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012635858s STEP: Saw pod success Apr 21 14:01:40.385: INFO: Pod "pod-secrets-37a8e441-b14d-46c1-9cdd-8559d807344d" satisfied condition "success or failure" Apr 21 14:01:40.388: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-37a8e441-b14d-46c1-9cdd-8559d807344d container secret-volume-test: STEP: delete the pod Apr 21 14:01:40.432: INFO: Waiting for pod pod-secrets-37a8e441-b14d-46c1-9cdd-8559d807344d to disappear Apr 21 14:01:40.437: INFO: Pod pod-secrets-37a8e441-b14d-46c1-9cdd-8559d807344d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:01:40.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2673" for this suite. 
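The secret-volume mapping test above works like the configMap `items` projection, with one difference: secret `data` values are base64-encoded in the API object, and the volume plugin writes the decoded bytes to the mapped path. A sketch of that decode-and-project step (key and path names are hypothetical):

```python
# Illustrative sketch: secret "data" values are base64-encoded in the
# API object; the volume writes the decoded bytes at each mapped path.
import base64

def project_secret(data_b64, items):
    return {i["path"]: base64.b64decode(data_b64[i["key"]]) for i in items}

secret = {"data-1": base64.b64encode(b"value-1").decode()}
files = project_secret(secret, [{"key": "data-1", "path": "new-path-data-1"}])
print(files)   # decoded bytes appear at the mapped path
```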
Apr 21 14:01:46.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:01:46.531: INFO: namespace secrets-2673 deletion completed in 6.089454214s • [SLOW TEST:10.264 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:01:46.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 21 14:01:46.640: INFO: Waiting up to 5m0s for pod "pod-e099b402-c280-4972-86ea-cc79537fadba" in namespace "emptydir-6442" to be "success or failure" Apr 21 14:01:46.643: INFO: Pod "pod-e099b402-c280-4972-86ea-cc79537fadba": Phase="Pending", Reason="", readiness=false. Elapsed: 3.025046ms Apr 21 14:01:48.650: INFO: Pod "pod-e099b402-c280-4972-86ea-cc79537fadba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009492772s Apr 21 14:01:50.654: INFO: Pod "pod-e099b402-c280-4972-86ea-cc79537fadba": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013881798s STEP: Saw pod success Apr 21 14:01:50.654: INFO: Pod "pod-e099b402-c280-4972-86ea-cc79537fadba" satisfied condition "success or failure" Apr 21 14:01:50.657: INFO: Trying to get logs from node iruya-worker2 pod pod-e099b402-c280-4972-86ea-cc79537fadba container test-container: STEP: delete the pod Apr 21 14:01:50.714: INFO: Waiting for pod pod-e099b402-c280-4972-86ea-cc79537fadba to disappear Apr 21 14:01:50.725: INFO: Pod pod-e099b402-c280-4972-86ea-cc79537fadba no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:01:50.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6442" for this suite. Apr 21 14:01:56.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:01:56.811: INFO: namespace emptydir-6442 deletion completed in 6.083322585s • [SLOW TEST:10.279 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:01:56.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] 
should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-90288c4b-56b6-413f-8887-f4556f66e8c0 STEP: Creating a pod to test consume configMaps Apr 21 14:01:56.905: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9b8afc8a-c6e4-4b27-a3a7-dfdf93a62ec1" in namespace "projected-958" to be "success or failure" Apr 21 14:01:56.952: INFO: Pod "pod-projected-configmaps-9b8afc8a-c6e4-4b27-a3a7-dfdf93a62ec1": Phase="Pending", Reason="", readiness=false. Elapsed: 46.339168ms Apr 21 14:01:58.956: INFO: Pod "pod-projected-configmaps-9b8afc8a-c6e4-4b27-a3a7-dfdf93a62ec1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050973395s Apr 21 14:02:00.960: INFO: Pod "pod-projected-configmaps-9b8afc8a-c6e4-4b27-a3a7-dfdf93a62ec1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054596396s STEP: Saw pod success Apr 21 14:02:00.960: INFO: Pod "pod-projected-configmaps-9b8afc8a-c6e4-4b27-a3a7-dfdf93a62ec1" satisfied condition "success or failure" Apr 21 14:02:00.963: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-9b8afc8a-c6e4-4b27-a3a7-dfdf93a62ec1 container projected-configmap-volume-test: STEP: delete the pod Apr 21 14:02:00.995: INFO: Waiting for pod pod-projected-configmaps-9b8afc8a-c6e4-4b27-a3a7-dfdf93a62ec1 to disappear Apr 21 14:02:01.005: INFO: Pod pod-projected-configmaps-9b8afc8a-c6e4-4b27-a3a7-dfdf93a62ec1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:02:01.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-958" for this suite. 
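The "consumable from pods in volume with mappings" test above verifies that a projected ConfigMap volume can rename keys on the way to the filesystem via its `items` list. A rough sketch of that key-to-path projection, with hypothetical key and path names (this is not the kubelet's implementation):

```python
def project_configmap(data, items):
    """Map ConfigMap keys to relative file paths the way a volume's
    'items' list does: each item selects a key and the path it is
    written to inside the mounted volume."""
    files = {}
    for item in items:
        files[item["path"]] = data[item["key"]]
    return files

# Hypothetical ConfigMap data and mapping, mirroring the shape of the
# volume-with-mappings test: key "data-1" surfaces at "path/to/data-2".
cm = {"data-1": "value-1"}
files = project_configmap(cm, [{"key": "data-1", "path": "path/to/data-2"}])
```

The test then reads the mapped file from the pod's container logs to confirm the content arrived under the remapped path.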
Apr 21 14:02:07.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:02:07.095: INFO: namespace projected-958 deletion completed in 6.086827493s • [SLOW TEST:10.284 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:02:07.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Apr 21 14:02:07.144: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:02:07.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5688" for this suite. 
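The proxy test above passes `-p 0`, which delegates port selection to the operating system: binding a listener to port 0 makes the kernel assign a free ephemeral port, and the test parses the chosen port from the proxy's startup output before curling /api/. A minimal sketch of that OS behavior:

```python
import socket

# Binding to port 0 asks the kernel for any free port -- the mechanism
# 'kubectl proxy -p 0' relies on. getsockname() reveals what was chosen.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
port = s.getsockname()[1]
s.close()
```

Requesting port 0 is the standard way for a test to avoid collisions with ports already in use on the node.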
Apr 21 14:02:13.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 14:02:13.318: INFO: namespace kubectl-5688 deletion completed in 6.08612081s
• [SLOW TEST:6.222 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 14:02:13.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-0ccdd159-5790-44e4-9ca5-218cd12d15d2
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 14:02:13.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5601" for this suite. 
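The empty-key test above expects the API server to reject the ConfigMap at creation time: keys must be non-empty and restricted to alphanumerics, '-', '_' and '.'. A simplified sketch of that validation rule (the real validation also enforces a maximum key length and other checks; the regex here is an approximation):

```python
import re

# Approximate ConfigMap key rule: one or more of [-._a-zA-Z0-9].
# An empty string cannot match, so the empty-key ConfigMap is rejected.
KEY_RE = re.compile(r"^[-._a-zA-Z0-9]+$")

def validate_configmap_keys(data):
    bad = [k for k in data if not KEY_RE.match(k)]
    if bad:
        raise ValueError(f"invalid ConfigMap keys: {bad!r}")

try:
    validate_configmap_keys({"": "value"})   # the test's empty key
    rejected = False
except ValueError:
    rejected = True
```

Because the create call fails immediately, the test body logs no pod activity; it goes straight to [AfterEach] cleanup.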
Apr 21 14:02:19.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:02:19.510: INFO: namespace configmap-5601 deletion completed in 6.109753961s • [SLOW TEST:6.191 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:02:19.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3585 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Apr 21 14:02:19.618: INFO: Found 0 stateful pods, waiting for 3 Apr 21 14:02:29.623: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 21 
14:02:29.623: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 21 14:02:29.623: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 21 14:02:29.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3585 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 21 14:02:29.901: INFO: stderr: "I0421 14:02:29.770195 1925 log.go:172] (0xc000116e70) (0xc00026c6e0) Create stream\nI0421 14:02:29.770247 1925 log.go:172] (0xc000116e70) (0xc00026c6e0) Stream added, broadcasting: 1\nI0421 14:02:29.772738 1925 log.go:172] (0xc000116e70) Reply frame received for 1\nI0421 14:02:29.772779 1925 log.go:172] (0xc000116e70) (0xc00026c780) Create stream\nI0421 14:02:29.772789 1925 log.go:172] (0xc000116e70) (0xc00026c780) Stream added, broadcasting: 3\nI0421 14:02:29.773839 1925 log.go:172] (0xc000116e70) Reply frame received for 3\nI0421 14:02:29.773908 1925 log.go:172] (0xc000116e70) (0xc000a6c000) Create stream\nI0421 14:02:29.773944 1925 log.go:172] (0xc000116e70) (0xc000a6c000) Stream added, broadcasting: 5\nI0421 14:02:29.774725 1925 log.go:172] (0xc000116e70) Reply frame received for 5\nI0421 14:02:29.863147 1925 log.go:172] (0xc000116e70) Data frame received for 5\nI0421 14:02:29.863189 1925 log.go:172] (0xc000a6c000) (5) Data frame handling\nI0421 14:02:29.863210 1925 log.go:172] (0xc000a6c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0421 14:02:29.892772 1925 log.go:172] (0xc000116e70) Data frame received for 3\nI0421 14:02:29.892868 1925 log.go:172] (0xc00026c780) (3) Data frame handling\nI0421 14:02:29.892885 1925 log.go:172] (0xc00026c780) (3) Data frame sent\nI0421 14:02:29.892897 1925 log.go:172] (0xc000116e70) Data frame received for 3\nI0421 14:02:29.892909 1925 log.go:172] (0xc00026c780) (3) Data frame handling\nI0421 14:02:29.892943 1925 log.go:172] (0xc000116e70) Data frame 
received for 5\nI0421 14:02:29.892973 1925 log.go:172] (0xc000a6c000) (5) Data frame handling\nI0421 14:02:29.894867 1925 log.go:172] (0xc000116e70) Data frame received for 1\nI0421 14:02:29.894907 1925 log.go:172] (0xc00026c6e0) (1) Data frame handling\nI0421 14:02:29.894960 1925 log.go:172] (0xc00026c6e0) (1) Data frame sent\nI0421 14:02:29.895138 1925 log.go:172] (0xc000116e70) (0xc00026c6e0) Stream removed, broadcasting: 1\nI0421 14:02:29.895236 1925 log.go:172] (0xc000116e70) Go away received\nI0421 14:02:29.895721 1925 log.go:172] (0xc000116e70) (0xc00026c6e0) Stream removed, broadcasting: 1\nI0421 14:02:29.895743 1925 log.go:172] (0xc000116e70) (0xc00026c780) Stream removed, broadcasting: 3\nI0421 14:02:29.895755 1925 log.go:172] (0xc000116e70) (0xc000a6c000) Stream removed, broadcasting: 5\n" Apr 21 14:02:29.901: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 21 14:02:29.901: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Apr 21 14:02:39.935: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 21 14:02:49.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3585 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 21 14:02:50.216: INFO: stderr: "I0421 14:02:50.114864 1947 log.go:172] (0xc00079e420) (0xc000966640) Create stream\nI0421 14:02:50.114949 1947 log.go:172] (0xc00079e420) (0xc000966640) Stream added, broadcasting: 1\nI0421 14:02:50.117486 1947 log.go:172] (0xc00079e420) Reply frame received for 1\nI0421 14:02:50.117536 1947 log.go:172] (0xc00079e420) (0xc0006621e0) Create stream\nI0421 14:02:50.117553 1947 log.go:172] (0xc00079e420) (0xc0006621e0) Stream added, 
broadcasting: 3\nI0421 14:02:50.118805 1947 log.go:172] (0xc00079e420) Reply frame received for 3\nI0421 14:02:50.118862 1947 log.go:172] (0xc00079e420) (0xc0008f2000) Create stream\nI0421 14:02:50.118880 1947 log.go:172] (0xc00079e420) (0xc0008f2000) Stream added, broadcasting: 5\nI0421 14:02:50.119942 1947 log.go:172] (0xc00079e420) Reply frame received for 5\nI0421 14:02:50.209750 1947 log.go:172] (0xc00079e420) Data frame received for 3\nI0421 14:02:50.209806 1947 log.go:172] (0xc0006621e0) (3) Data frame handling\nI0421 14:02:50.209830 1947 log.go:172] (0xc0006621e0) (3) Data frame sent\nI0421 14:02:50.209850 1947 log.go:172] (0xc00079e420) Data frame received for 3\nI0421 14:02:50.209863 1947 log.go:172] (0xc0006621e0) (3) Data frame handling\nI0421 14:02:50.209906 1947 log.go:172] (0xc00079e420) Data frame received for 5\nI0421 14:02:50.209930 1947 log.go:172] (0xc0008f2000) (5) Data frame handling\nI0421 14:02:50.209947 1947 log.go:172] (0xc0008f2000) (5) Data frame sent\nI0421 14:02:50.209954 1947 log.go:172] (0xc00079e420) Data frame received for 5\nI0421 14:02:50.209959 1947 log.go:172] (0xc0008f2000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0421 14:02:50.211601 1947 log.go:172] (0xc00079e420) Data frame received for 1\nI0421 14:02:50.211617 1947 log.go:172] (0xc000966640) (1) Data frame handling\nI0421 14:02:50.211638 1947 log.go:172] (0xc000966640) (1) Data frame sent\nI0421 14:02:50.211658 1947 log.go:172] (0xc00079e420) (0xc000966640) Stream removed, broadcasting: 1\nI0421 14:02:50.211918 1947 log.go:172] (0xc00079e420) (0xc000966640) Stream removed, broadcasting: 1\nI0421 14:02:50.211941 1947 log.go:172] (0xc00079e420) (0xc0006621e0) Stream removed, broadcasting: 3\nI0421 14:02:50.211955 1947 log.go:172] (0xc00079e420) (0xc0008f2000) Stream removed, broadcasting: 5\n" Apr 21 14:02:50.217: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 21 14:02:50.217: INFO: stdout of mv -v 
/tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 21 14:03:10.238: INFO: Waiting for StatefulSet statefulset-3585/ss2 to complete update Apr 21 14:03:10.239: INFO: Waiting for Pod statefulset-3585/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Apr 21 14:03:20.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3585 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 21 14:03:20.489: INFO: stderr: "I0421 14:03:20.387306 1969 log.go:172] (0xc0007aa630) (0xc0005ec960) Create stream\nI0421 14:03:20.387360 1969 log.go:172] (0xc0007aa630) (0xc0005ec960) Stream added, broadcasting: 1\nI0421 14:03:20.391019 1969 log.go:172] (0xc0007aa630) Reply frame received for 1\nI0421 14:03:20.391136 1969 log.go:172] (0xc0007aa630) (0xc000aa0000) Create stream\nI0421 14:03:20.391173 1969 log.go:172] (0xc0007aa630) (0xc000aa0000) Stream added, broadcasting: 3\nI0421 14:03:20.393093 1969 log.go:172] (0xc0007aa630) Reply frame received for 3\nI0421 14:03:20.393296 1969 log.go:172] (0xc0007aa630) (0xc000aa00a0) Create stream\nI0421 14:03:20.393326 1969 log.go:172] (0xc0007aa630) (0xc000aa00a0) Stream added, broadcasting: 5\nI0421 14:03:20.394335 1969 log.go:172] (0xc0007aa630) Reply frame received for 5\nI0421 14:03:20.453625 1969 log.go:172] (0xc0007aa630) Data frame received for 5\nI0421 14:03:20.453657 1969 log.go:172] (0xc000aa00a0) (5) Data frame handling\nI0421 14:03:20.453677 1969 log.go:172] (0xc000aa00a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0421 14:03:20.479907 1969 log.go:172] (0xc0007aa630) Data frame received for 3\nI0421 14:03:20.479940 1969 log.go:172] (0xc000aa0000) (3) Data frame handling\nI0421 14:03:20.479953 1969 log.go:172] (0xc000aa0000) (3) Data frame sent\nI0421 14:03:20.479997 1969 log.go:172] (0xc0007aa630) Data frame 
received for 5\nI0421 14:03:20.480039 1969 log.go:172] (0xc000aa00a0) (5) Data frame handling\nI0421 14:03:20.480096 1969 log.go:172] (0xc0007aa630) Data frame received for 3\nI0421 14:03:20.480131 1969 log.go:172] (0xc000aa0000) (3) Data frame handling\nI0421 14:03:20.483138 1969 log.go:172] (0xc0007aa630) Data frame received for 1\nI0421 14:03:20.483172 1969 log.go:172] (0xc0005ec960) (1) Data frame handling\nI0421 14:03:20.483198 1969 log.go:172] (0xc0005ec960) (1) Data frame sent\nI0421 14:03:20.483231 1969 log.go:172] (0xc0007aa630) (0xc0005ec960) Stream removed, broadcasting: 1\nI0421 14:03:20.483347 1969 log.go:172] (0xc0007aa630) Go away received\nI0421 14:03:20.483631 1969 log.go:172] (0xc0007aa630) (0xc0005ec960) Stream removed, broadcasting: 1\nI0421 14:03:20.483666 1969 log.go:172] (0xc0007aa630) (0xc000aa0000) Stream removed, broadcasting: 3\nI0421 14:03:20.483691 1969 log.go:172] (0xc0007aa630) (0xc000aa00a0) Stream removed, broadcasting: 5\n" Apr 21 14:03:20.489: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 21 14:03:20.489: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 21 14:03:30.550: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 21 14:03:40.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3585 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 21 14:03:40.828: INFO: stderr: "I0421 14:03:40.726024 1989 log.go:172] (0xc0006a8bb0) (0xc000694c80) Create stream\nI0421 14:03:40.726096 1989 log.go:172] (0xc0006a8bb0) (0xc000694c80) Stream added, broadcasting: 1\nI0421 14:03:40.729512 1989 log.go:172] (0xc0006a8bb0) Reply frame received for 1\nI0421 14:03:40.729570 1989 log.go:172] (0xc0006a8bb0) (0xc000a38000) Create stream\nI0421 14:03:40.729593 1989 log.go:172] (0xc0006a8bb0) (0xc000a38000) Stream added, 
broadcasting: 3\nI0421 14:03:40.730612 1989 log.go:172] (0xc0006a8bb0) Reply frame received for 3\nI0421 14:03:40.730644 1989 log.go:172] (0xc0006a8bb0) (0xc0008dc000) Create stream\nI0421 14:03:40.730653 1989 log.go:172] (0xc0006a8bb0) (0xc0008dc000) Stream added, broadcasting: 5\nI0421 14:03:40.731636 1989 log.go:172] (0xc0006a8bb0) Reply frame received for 5\nI0421 14:03:40.820095 1989 log.go:172] (0xc0006a8bb0) Data frame received for 3\nI0421 14:03:40.820140 1989 log.go:172] (0xc000a38000) (3) Data frame handling\nI0421 14:03:40.820160 1989 log.go:172] (0xc000a38000) (3) Data frame sent\nI0421 14:03:40.820175 1989 log.go:172] (0xc0006a8bb0) Data frame received for 3\nI0421 14:03:40.820186 1989 log.go:172] (0xc000a38000) (3) Data frame handling\nI0421 14:03:40.820289 1989 log.go:172] (0xc0006a8bb0) Data frame received for 5\nI0421 14:03:40.820906 1989 log.go:172] (0xc0008dc000) (5) Data frame handling\nI0421 14:03:40.820990 1989 log.go:172] (0xc0008dc000) (5) Data frame sent\nI0421 14:03:40.821069 1989 log.go:172] (0xc0006a8bb0) Data frame received for 5\nI0421 14:03:40.821313 1989 log.go:172] (0xc0008dc000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0421 14:03:40.822535 1989 log.go:172] (0xc0006a8bb0) Data frame received for 1\nI0421 14:03:40.822561 1989 log.go:172] (0xc000694c80) (1) Data frame handling\nI0421 14:03:40.822575 1989 log.go:172] (0xc000694c80) (1) Data frame sent\nI0421 14:03:40.822987 1989 log.go:172] (0xc0006a8bb0) (0xc000694c80) Stream removed, broadcasting: 1\nI0421 14:03:40.823009 1989 log.go:172] (0xc0006a8bb0) Go away received\nI0421 14:03:40.823387 1989 log.go:172] (0xc0006a8bb0) (0xc000694c80) Stream removed, broadcasting: 1\nI0421 14:03:40.823409 1989 log.go:172] (0xc0006a8bb0) (0xc000a38000) Stream removed, broadcasting: 3\nI0421 14:03:40.823421 1989 log.go:172] (0xc0006a8bb0) (0xc0008dc000) Stream removed, broadcasting: 5\n" Apr 21 14:03:40.828: INFO: stdout: "'/tmp/index.html' -> 
'/usr/share/nginx/html/index.html'\n" Apr 21 14:03:40.828: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 21 14:04:00.848: INFO: Deleting all statefulset in ns statefulset-3585 Apr 21 14:04:00.851: INFO: Scaling statefulset ss2 to 0 Apr 21 14:04:20.868: INFO: Waiting for statefulset status.replicas updated to 0 Apr 21 14:04:20.871: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:04:20.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3585" for this suite. Apr 21 14:04:26.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:04:27.017: INFO: namespace statefulset-3585 deletion completed in 6.104068317s • [SLOW TEST:127.507 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Apr 21 14:04:27.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Apr 21 14:04:27.127: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:04:27.130: INFO: Number of nodes with available pods: 0 Apr 21 14:04:27.130: INFO: Node iruya-worker is running more than one daemon pod Apr 21 14:04:28.135: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:04:28.138: INFO: Number of nodes with available pods: 0 Apr 21 14:04:28.138: INFO: Node iruya-worker is running more than one daemon pod Apr 21 14:04:29.135: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:04:29.137: INFO: Number of nodes with available pods: 0 Apr 21 14:04:29.137: INFO: Node iruya-worker is running more than one daemon pod Apr 21 14:04:30.153: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:04:30.157: INFO: Number of nodes with available pods: 0 Apr 21 14:04:30.157: INFO: Node iruya-worker is running more than one daemon pod Apr 21 14:04:31.135: 
INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:04:31.139: INFO: Number of nodes with available pods: 2 Apr 21 14:04:31.139: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Apr 21 14:04:31.158: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:04:31.160: INFO: Number of nodes with available pods: 1 Apr 21 14:04:31.160: INFO: Node iruya-worker2 is running more than one daemon pod Apr 21 14:04:32.166: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:04:32.169: INFO: Number of nodes with available pods: 1 Apr 21 14:04:32.169: INFO: Node iruya-worker2 is running more than one daemon pod Apr 21 14:04:33.165: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:04:33.168: INFO: Number of nodes with available pods: 1 Apr 21 14:04:33.168: INFO: Node iruya-worker2 is running more than one daemon pod Apr 21 14:04:34.167: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:04:34.170: INFO: Number of nodes with available pods: 1 Apr 21 14:04:34.170: INFO: Node iruya-worker2 is running more than one daemon pod Apr 21 14:04:35.166: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:04:35.169: INFO: Number of nodes with 
available pods: 1 Apr 21 14:04:35.169: INFO: Node iruya-worker2 is running more than one daemon pod Apr 21 14:04:36.165: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:04:36.168: INFO: Number of nodes with available pods: 1 Apr 21 14:04:36.168: INFO: Node iruya-worker2 is running more than one daemon pod Apr 21 14:04:37.166: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:04:37.170: INFO: Number of nodes with available pods: 1 Apr 21 14:04:37.170: INFO: Node iruya-worker2 is running more than one daemon pod Apr 21 14:04:38.166: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:04:38.169: INFO: Number of nodes with available pods: 2 Apr 21 14:04:38.169: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1088, will wait for the garbage collector to delete the pods Apr 21 14:04:38.231: INFO: Deleting DaemonSet.extensions daemon-set took: 5.928154ms Apr 21 14:04:38.331: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.245377ms Apr 21 14:04:51.935: INFO: Number of nodes with available pods: 0 Apr 21 14:04:51.935: INFO: Number of running nodes: 0, number of available pods: 0 Apr 21 14:04:51.938: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1088/daemonsets","resourceVersion":"6650792"},"items":null} Apr 21 14:04:51.941: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1088/pods","resourceVersion":"6650792"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:04:51.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1088" for this suite. Apr 21 14:04:57.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:04:58.056: INFO: namespace daemonsets-1088 deletion completed in 6.102184527s • [SLOW TEST:31.038 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:04:58.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:04:58.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4579" for this suite. Apr 21 14:05:04.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:05:04.438: INFO: namespace kubelet-test-4579 deletion completed in 6.117303976s • [SLOW TEST:6.382 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:05:04.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 21 14:05:04.501: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. Apr 21 14:05:04.988: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 21 14:05:07.159: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723074705, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723074705, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723074705, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723074704, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 21 14:05:09.162: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723074705, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723074705, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723074705, loc:(*time.Location)(0x7ead8c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723074704, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 21 14:05:11.902: INFO: Waited 728.007428ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:05:12.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-8061" for this suite. Apr 21 14:05:18.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:05:18.557: INFO: namespace aggregator-8061 deletion completed in 6.224036957s • [SLOW TEST:14.119 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:05:18.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in 
namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Apr 21 14:05:24.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-fc347e7a-22e1-43e2-936e-6904a6cf7a59 -c busybox-main-container --namespace=emptydir-8818 -- cat /usr/share/volumeshare/shareddata.txt' Apr 21 14:05:24.944: INFO: stderr: "I0421 14:05:24.867714 2009 log.go:172] (0xc0007f0420) (0xc0006128c0) Create stream\nI0421 14:05:24.867807 2009 log.go:172] (0xc0007f0420) (0xc0006128c0) Stream added, broadcasting: 1\nI0421 14:05:24.870522 2009 log.go:172] (0xc0007f0420) Reply frame received for 1\nI0421 14:05:24.870562 2009 log.go:172] (0xc0007f0420) (0xc000612960) Create stream\nI0421 14:05:24.870574 2009 log.go:172] (0xc0007f0420) (0xc000612960) Stream added, broadcasting: 3\nI0421 14:05:24.871519 2009 log.go:172] (0xc0007f0420) Reply frame received for 3\nI0421 14:05:24.871563 2009 log.go:172] (0xc0007f0420) (0xc000392280) Create stream\nI0421 14:05:24.871587 2009 log.go:172] (0xc0007f0420) (0xc000392280) Stream added, broadcasting: 5\nI0421 14:05:24.872472 2009 log.go:172] (0xc0007f0420) Reply frame received for 5\nI0421 14:05:24.937923 2009 log.go:172] (0xc0007f0420) Data frame received for 3\nI0421 14:05:24.937983 2009 log.go:172] (0xc000612960) (3) Data frame handling\nI0421 14:05:24.938002 2009 log.go:172] (0xc000612960) (3) Data frame sent\nI0421 14:05:24.938014 2009 log.go:172] (0xc0007f0420) Data frame received for 3\nI0421 14:05:24.938024 2009 log.go:172] (0xc000612960) (3) Data frame handling\nI0421 14:05:24.938084 2009 log.go:172] (0xc0007f0420) Data frame received for 5\nI0421 14:05:24.938108 2009 log.go:172] (0xc000392280) (5) Data frame handling\nI0421 14:05:24.939734 2009 log.go:172] (0xc0007f0420) 
Data frame received for 1\nI0421 14:05:24.939777 2009 log.go:172] (0xc0006128c0) (1) Data frame handling\nI0421 14:05:24.939823 2009 log.go:172] (0xc0006128c0) (1) Data frame sent\nI0421 14:05:24.939858 2009 log.go:172] (0xc0007f0420) (0xc0006128c0) Stream removed, broadcasting: 1\nI0421 14:05:24.939885 2009 log.go:172] (0xc0007f0420) Go away received\nI0421 14:05:24.940369 2009 log.go:172] (0xc0007f0420) (0xc0006128c0) Stream removed, broadcasting: 1\nI0421 14:05:24.940392 2009 log.go:172] (0xc0007f0420) (0xc000612960) Stream removed, broadcasting: 3\nI0421 14:05:24.940403 2009 log.go:172] (0xc0007f0420) (0xc000392280) Stream removed, broadcasting: 5\n" Apr 21 14:05:24.944: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:05:24.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8818" for this suite. 
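The emptyDir test above has a "busy-box sub-container" write shareddata.txt into a shared volume and then reads it back from the main container with `kubectl exec ... cat`. As an editor's sketch (not part of the suite), the filesystem behavior being asserted can be mirrored on the host, using a temp directory to stand in for the emptyDir mount:

```python
import tempfile
import pathlib

# Sketch of the shared-volume property exercised above: two containers in one
# pod mount the same emptyDir, so a file written by one container is readable
# by the other at the same path. The temp directory below stands in for the
# emptyDir mount; the message matches the test's expected stdout.
shared = pathlib.Path(tempfile.mkdtemp())

# "sub-container" writes into the shared volume
(shared / "shareddata.txt").write_text("Hello from the busy-box sub-container\n")

# "main container" reads it back, as `kubectl exec ... cat` does in the log
content = (shared / "shareddata.txt").read_text()
print(content, end="")
```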
Apr 21 14:05:30.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 14:05:31.034: INFO: namespace emptydir-8818 deletion completed in 6.085356139s

• [SLOW TEST:12.476 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 14:05:31.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Apr 21 14:05:31.074: INFO: Waiting up to 5m0s for pod "var-expansion-f3195c26-62e3-41f4-b47f-e2f0d3e0c7e1" in namespace "var-expansion-2822" to be "success or failure"
Apr 21 14:05:31.092: INFO: Pod "var-expansion-f3195c26-62e3-41f4-b47f-e2f0d3e0c7e1": Phase="Pending", Reason="", readiness=false. Elapsed: 17.951029ms
Apr 21 14:05:33.096: INFO: Pod "var-expansion-f3195c26-62e3-41f4-b47f-e2f0d3e0c7e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022127543s
Apr 21 14:05:35.100: INFO: Pod "var-expansion-f3195c26-62e3-41f4-b47f-e2f0d3e0c7e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026371772s
STEP: Saw pod success
Apr 21 14:05:35.100: INFO: Pod "var-expansion-f3195c26-62e3-41f4-b47f-e2f0d3e0c7e1" satisfied condition "success or failure"
Apr 21 14:05:35.104: INFO: Trying to get logs from node iruya-worker pod var-expansion-f3195c26-62e3-41f4-b47f-e2f0d3e0c7e1 container dapi-container:
STEP: delete the pod
Apr 21 14:05:35.166: INFO: Waiting for pod var-expansion-f3195c26-62e3-41f4-b47f-e2f0d3e0c7e1 to disappear
Apr 21 14:05:35.171: INFO: Pod var-expansion-f3195c26-62e3-41f4-b47f-e2f0d3e0c7e1 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 14:05:35.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2822" for this suite.
Apr 21 14:05:41.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 14:05:41.287: INFO: namespace var-expansion-2822 deletion completed in 6.113600331s

• [SLOW TEST:10.253 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 14:05:41.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 21 14:05:41.382: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f04063f-8398-40e6-a687-e48bcbaad1f9" in namespace "downward-api-7757" to be "success or failure"
Apr 21 14:05:41.386: INFO: Pod "downwardapi-volume-8f04063f-8398-40e6-a687-e48bcbaad1f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17798ms
Apr 21 14:05:43.392: INFO: Pod "downwardapi-volume-8f04063f-8398-40e6-a687-e48bcbaad1f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009360521s
Apr 21 14:05:45.396: INFO: Pod "downwardapi-volume-8f04063f-8398-40e6-a687-e48bcbaad1f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013380101s
STEP: Saw pod success
Apr 21 14:05:45.396: INFO: Pod "downwardapi-volume-8f04063f-8398-40e6-a687-e48bcbaad1f9" satisfied condition "success or failure"
Apr 21 14:05:45.399: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-8f04063f-8398-40e6-a687-e48bcbaad1f9 container client-container:
STEP: delete the pod
Apr 21 14:05:45.424: INFO: Waiting for pod downwardapi-volume-8f04063f-8398-40e6-a687-e48bcbaad1f9 to disappear
Apr 21 14:05:45.428: INFO: Pod downwardapi-volume-8f04063f-8398-40e6-a687-e48bcbaad1f9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 14:05:45.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7757" for this suite.
Apr 21 14:05:51.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 14:05:51.522: INFO: namespace downward-api-7757 deletion completed in 6.090300679s

• [SLOW TEST:10.234 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 14:05:51.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-642606b0-af5c-44b2-8cf6-3c9e9fdeaf30
STEP: Creating a pod to test consume configMaps
Apr 21 14:05:51.623: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-52a6c74a-134a-4feb-b0e8-8a8f20cf2877" in namespace "projected-8992" to be "success or failure"
Apr 21 14:05:51.655: INFO: Pod "pod-projected-configmaps-52a6c74a-134a-4feb-b0e8-8a8f20cf2877": Phase="Pending", Reason="", readiness=false. Elapsed: 32.357683ms
Apr 21 14:05:53.659: INFO: Pod "pod-projected-configmaps-52a6c74a-134a-4feb-b0e8-8a8f20cf2877": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036289188s
Apr 21 14:05:55.663: INFO: Pod "pod-projected-configmaps-52a6c74a-134a-4feb-b0e8-8a8f20cf2877": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040489708s
STEP: Saw pod success
Apr 21 14:05:55.664: INFO: Pod "pod-projected-configmaps-52a6c74a-134a-4feb-b0e8-8a8f20cf2877" satisfied condition "success or failure"
Apr 21 14:05:55.667: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-52a6c74a-134a-4feb-b0e8-8a8f20cf2877 container projected-configmap-volume-test:
STEP: delete the pod
Apr 21 14:05:55.729: INFO: Waiting for pod pod-projected-configmaps-52a6c74a-134a-4feb-b0e8-8a8f20cf2877 to disappear
Apr 21 14:05:55.749: INFO: Pod pod-projected-configmaps-52a6c74a-134a-4feb-b0e8-8a8f20cf2877 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 14:05:55.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8992" for this suite.
Apr 21 14:06:01.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 14:06:01.841: INFO: namespace projected-8992 deletion completed in 6.088371056s

• [SLOW TEST:10.319 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Downward API volume
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 14:06:01.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 21 14:06:01.913: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50708f20-2141-4567-a576-c9f52c5f0e69" in namespace "downward-api-3551" to be "success or failure"
Apr 21 14:06:01.923: INFO: Pod "downwardapi-volume-50708f20-2141-4567-a576-c9f52c5f0e69": Phase="Pending", Reason="", readiness=false. Elapsed: 9.904504ms
Apr 21 14:06:03.927: INFO: Pod "downwardapi-volume-50708f20-2141-4567-a576-c9f52c5f0e69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014056118s
Apr 21 14:06:05.931: INFO: Pod "downwardapi-volume-50708f20-2141-4567-a576-c9f52c5f0e69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01817551s
STEP: Saw pod success
Apr 21 14:06:05.931: INFO: Pod "downwardapi-volume-50708f20-2141-4567-a576-c9f52c5f0e69" satisfied condition "success or failure"
Apr 21 14:06:05.934: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-50708f20-2141-4567-a576-c9f52c5f0e69 container client-container:
STEP: delete the pod
Apr 21 14:06:05.962: INFO: Waiting for pod downwardapi-volume-50708f20-2141-4567-a576-c9f52c5f0e69 to disappear
Apr 21 14:06:05.965: INFO: Pod downwardapi-volume-50708f20-2141-4567-a576-c9f52c5f0e69 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 14:06:05.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3551" for this suite.
Apr 21 14:06:11.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 14:06:12.072: INFO: namespace downward-api-3551 deletion completed in 6.10375122s

• [SLOW TEST:10.230 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 14:06:12.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 21 14:06:12.137: INFO: Waiting up to 5m0s for pod "pod-d7dcf9c4-fe66-440f-a64f-1b5f28741d0c" in namespace "emptydir-1536" to be "success or failure"
Apr 21 14:06:12.141: INFO: Pod "pod-d7dcf9c4-fe66-440f-a64f-1b5f28741d0c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.917308ms
Apr 21 14:06:14.146: INFO: Pod "pod-d7dcf9c4-fe66-440f-a64f-1b5f28741d0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008097622s
Apr 21 14:06:16.152: INFO: Pod "pod-d7dcf9c4-fe66-440f-a64f-1b5f28741d0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014923784s
STEP: Saw pod success
Apr 21 14:06:16.152: INFO: Pod "pod-d7dcf9c4-fe66-440f-a64f-1b5f28741d0c" satisfied condition "success or failure"
Apr 21 14:06:16.155: INFO: Trying to get logs from node iruya-worker pod pod-d7dcf9c4-fe66-440f-a64f-1b5f28741d0c container test-container:
STEP: delete the pod
Apr 21 14:06:16.193: INFO: Waiting for pod pod-d7dcf9c4-fe66-440f-a64f-1b5f28741d0c to disappear
Apr 21 14:06:16.207: INFO: Pod pod-d7dcf9c4-fe66-440f-a64f-1b5f28741d0c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 14:06:16.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1536" for this suite.
Apr 21 14:06:22.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 14:06:22.367: INFO: namespace emptydir-1536 deletion completed in 6.15619825s

• [SLOW TEST:10.295 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 14:06:22.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 21 14:06:22.941: INFO: Waiting up to 5m0s for pod "pod-e926144e-99a6-4cea-b695-34d6bd433613" in namespace "emptydir-3873" to be "success or failure"
Apr 21 14:06:22.955: INFO: Pod "pod-e926144e-99a6-4cea-b695-34d6bd433613": Phase="Pending", Reason="", readiness=false. Elapsed: 14.155611ms
Apr 21 14:06:24.959: INFO: Pod "pod-e926144e-99a6-4cea-b695-34d6bd433613": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018559773s
Apr 21 14:06:26.964: INFO: Pod "pod-e926144e-99a6-4cea-b695-34d6bd433613": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02316823s
STEP: Saw pod success
Apr 21 14:06:26.964: INFO: Pod "pod-e926144e-99a6-4cea-b695-34d6bd433613" satisfied condition "success or failure"
Apr 21 14:06:26.967: INFO: Trying to get logs from node iruya-worker2 pod pod-e926144e-99a6-4cea-b695-34d6bd433613 container test-container:
STEP: delete the pod
Apr 21 14:06:26.988: INFO: Waiting for pod pod-e926144e-99a6-4cea-b695-34d6bd433613 to disappear
Apr 21 14:06:27.014: INFO: Pod pod-e926144e-99a6-4cea-b695-34d6bd433613 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 14:06:27.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3873" for this suite.
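The two emptyDir permission tests above ("0777 on tmpfs" and "0777 on node default medium") write a file into a volume created with mode 0777 and verify a non-root user can access it. As a host-level sketch (editor's illustration, not the suite's Go code), the mode semantics being checked reduce to:

```python
import os
import stat
import tempfile

# Sketch of the permission property asserted by the emptyDir 0777 tests:
# a directory created with mode 0777 is readable/writable/executable by
# everyone. A temp directory stands in for the emptyDir mount; chmod is
# not masked by umask, so the full 0777 mode takes effect.
d = tempfile.mkdtemp()
os.chmod(d, 0o777)
mode = stat.S_IMODE(os.stat(d).st_mode)
print(oct(mode))  # prints 0o777
```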
Apr 21 14:06:33.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:06:33.108: INFO: namespace emptydir-3873 deletion completed in 6.090540022s • [SLOW TEST:10.740 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:06:33.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-dbbbcf8d-3002-487a-a0be-b22537454666 STEP: Creating a pod to test consume secrets Apr 21 14:06:33.193: INFO: Waiting up to 5m0s for pod "pod-secrets-1d233104-557d-4521-9ea6-5be6ed18174e" in namespace "secrets-8528" to be "success or failure" Apr 21 14:06:33.205: INFO: Pod "pod-secrets-1d233104-557d-4521-9ea6-5be6ed18174e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.997339ms Apr 21 14:06:35.211: INFO: Pod "pod-secrets-1d233104-557d-4521-9ea6-5be6ed18174e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.017497381s Apr 21 14:06:37.215: INFO: Pod "pod-secrets-1d233104-557d-4521-9ea6-5be6ed18174e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021241789s STEP: Saw pod success Apr 21 14:06:37.215: INFO: Pod "pod-secrets-1d233104-557d-4521-9ea6-5be6ed18174e" satisfied condition "success or failure" Apr 21 14:06:37.217: INFO: Trying to get logs from node iruya-worker pod pod-secrets-1d233104-557d-4521-9ea6-5be6ed18174e container secret-volume-test: STEP: delete the pod Apr 21 14:06:37.250: INFO: Waiting for pod pod-secrets-1d233104-557d-4521-9ea6-5be6ed18174e to disappear Apr 21 14:06:37.265: INFO: Pod pod-secrets-1d233104-557d-4521-9ea6-5be6ed18174e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:06:37.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8528" for this suite. Apr 21 14:06:43.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:06:43.351: INFO: namespace secrets-8528 deletion completed in 6.082619593s • [SLOW TEST:10.243 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 
14:06:43.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Apr 21 14:06:43.391: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 21 14:06:43.408: INFO: Waiting for terminating namespaces to be deleted... Apr 21 14:06:43.411: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Apr 21 14:06:43.415: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 21 14:06:43.415: INFO: Container kube-proxy ready: true, restart count 0 Apr 21 14:06:43.415: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 21 14:06:43.415: INFO: Container kindnet-cni ready: true, restart count 0 Apr 21 14:06:43.415: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Apr 21 14:06:43.419: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Apr 21 14:06:43.419: INFO: Container coredns ready: true, restart count 0 Apr 21 14:06:43.419: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Apr 21 14:06:43.419: INFO: Container coredns ready: true, restart count 0 Apr 21 14:06:43.419: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Apr 21 14:06:43.419: INFO: Container kube-proxy ready: true, restart count 0 Apr 21 14:06:43.419: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Apr 21 14:06:43.419: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that 
NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1607dad55366dafc], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:06:44.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3068" for this suite. Apr 21 14:06:50.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:06:50.533: INFO: namespace sched-pred-3068 deletion completed in 6.091601282s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.182 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:06:50.534: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-abdddc2f-a134-478c-9473-3069994dd42a STEP: Creating a pod to test consume secrets Apr 21 14:06:50.596: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3fba6f6a-67e8-4033-a3b9-dcbafcc249c0" in namespace "projected-5010" to be "success or failure" Apr 21 14:06:50.614: INFO: Pod "pod-projected-secrets-3fba6f6a-67e8-4033-a3b9-dcbafcc249c0": Phase="Pending", Reason="", readiness=false. Elapsed: 17.093629ms Apr 21 14:06:52.618: INFO: Pod "pod-projected-secrets-3fba6f6a-67e8-4033-a3b9-dcbafcc249c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021577053s Apr 21 14:06:54.622: INFO: Pod "pod-projected-secrets-3fba6f6a-67e8-4033-a3b9-dcbafcc249c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025961229s STEP: Saw pod success Apr 21 14:06:54.622: INFO: Pod "pod-projected-secrets-3fba6f6a-67e8-4033-a3b9-dcbafcc249c0" satisfied condition "success or failure" Apr 21 14:06:54.625: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-3fba6f6a-67e8-4033-a3b9-dcbafcc249c0 container projected-secret-volume-test: STEP: delete the pod Apr 21 14:06:54.687: INFO: Waiting for pod pod-projected-secrets-3fba6f6a-67e8-4033-a3b9-dcbafcc249c0 to disappear Apr 21 14:06:54.722: INFO: Pod pod-projected-secrets-3fba6f6a-67e8-4033-a3b9-dcbafcc249c0 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:06:54.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5010" for this suite. 
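The pod exercised in this test consumes a secret through a projected volume with `defaultMode` set. A minimal manifest of that shape can be sketched as follows; all names, the image, and the mode value here are illustrative, not the generated ones from the log:

```shell
# Hypothetical reconstruction of the test pod's shape; names are illustrative.
cat <<'EOF' > /tmp/projected-secret-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    # Print the mounted file's mode and contents, then exit (pod Succeeds).
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400
      sources:
      - secret:
          name: projected-secret-test-example
EOF
grep -q 'defaultMode' /tmp/projected-secret-pod.yaml && echo manifest-ok
```

The test then follows the same "success or failure" pattern seen in the log: wait for the pod to reach `Succeeded`, read its container logs, and delete it.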
Apr 21 14:07:00.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:07:00.824: INFO: namespace projected-5010 deletion completed in 6.098314397s • [SLOW TEST:10.291 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:07:00.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-dd1b2f78-3bde-4ed5-a533-6ee15871bbe3 in namespace container-probe-4179 Apr 21 14:07:04.913: INFO: Started pod busybox-dd1b2f78-3bde-4ed5-a533-6ee15871bbe3 in namespace container-probe-4179 STEP: checking the pod's current state and verifying that restartCount is present Apr 21 14:07:04.916: INFO: Initial restart count of pod busybox-dd1b2f78-3bde-4ed5-a533-6ee15871bbe3 is 0 STEP: deleting 
the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:11:05.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4179" for this suite. Apr 21 14:11:11.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:11:11.817: INFO: namespace container-probe-4179 deletion completed in 6.144664675s • [SLOW TEST:250.992 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:11:11.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Apr 21 14:11:11.847: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-648' Apr 21 14:11:14.464: INFO: stderr: "" Apr 21 14:11:14.464: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 21 14:11:14.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-648' Apr 21 14:11:14.565: INFO: stderr: "" Apr 21 14:11:14.565: INFO: stdout: "update-demo-nautilus-6pwzr update-demo-nautilus-k7xj5 " Apr 21 14:11:14.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6pwzr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-648' Apr 21 14:11:14.660: INFO: stderr: "" Apr 21 14:11:14.660: INFO: stdout: "" Apr 21 14:11:14.660: INFO: update-demo-nautilus-6pwzr is created but not running Apr 21 14:11:19.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-648' Apr 21 14:11:19.768: INFO: stderr: "" Apr 21 14:11:19.768: INFO: stdout: "update-demo-nautilus-6pwzr update-demo-nautilus-k7xj5 " Apr 21 14:11:19.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6pwzr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-648' Apr 21 14:11:19.861: INFO: stderr: "" Apr 21 14:11:19.861: INFO: stdout: "true" Apr 21 14:11:19.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6pwzr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-648' Apr 21 14:11:19.944: INFO: stderr: "" Apr 21 14:11:19.944: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 21 14:11:19.944: INFO: validating pod update-demo-nautilus-6pwzr Apr 21 14:11:19.947: INFO: got data: { "image": "nautilus.jpg" } Apr 21 14:11:19.947: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 21 14:11:19.947: INFO: update-demo-nautilus-6pwzr is verified up and running Apr 21 14:11:19.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7xj5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-648' Apr 21 14:11:20.043: INFO: stderr: "" Apr 21 14:11:20.043: INFO: stdout: "true" Apr 21 14:11:20.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7xj5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-648' Apr 21 14:11:20.141: INFO: stderr: "" Apr 21 14:11:20.141: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 21 14:11:20.141: INFO: validating pod update-demo-nautilus-k7xj5 Apr 21 14:11:20.145: INFO: got data: { "image": "nautilus.jpg" } Apr 21 14:11:20.145: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 21 14:11:20.145: INFO: update-demo-nautilus-k7xj5 is verified up and running STEP: using delete to clean up resources Apr 21 14:11:20.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-648' Apr 21 14:11:20.258: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 21 14:11:20.258: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 21 14:11:20.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-648' Apr 21 14:11:20.351: INFO: stderr: "No resources found.\n" Apr 21 14:11:20.351: INFO: stdout: "" Apr 21 14:11:20.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-648 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 21 14:11:20.443: INFO: stderr: "" Apr 21 14:11:20.443: INFO: stdout: "update-demo-nautilus-6pwzr\nupdate-demo-nautilus-k7xj5\n" Apr 21 14:11:20.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-648' Apr 21 14:11:21.052: INFO: stderr: "No resources found.\n" Apr 21 14:11:21.052: INFO: stdout: "" Apr 21 14:11:21.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-648 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 21 14:11:21.142: INFO: stderr: "" Apr 21 14:11:21.142: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:11:21.143: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-648" for this suite. Apr 21 14:11:43.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:11:43.324: INFO: namespace kubectl-648 deletion completed in 22.178081839s • [SLOW TEST:31.507 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:11:43.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 21 14:11:43.387: INFO: Waiting up to 5m0s for pod "pod-218b0e15-f340-49d3-9fbf-5081a3ee18e5" in namespace "emptydir-8867" to be "success or failure" Apr 21 14:11:43.390: INFO: Pod "pod-218b0e15-f340-49d3-9fbf-5081a3ee18e5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.869132ms Apr 21 14:11:45.395: INFO: Pod "pod-218b0e15-f340-49d3-9fbf-5081a3ee18e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00787716s Apr 21 14:11:47.399: INFO: Pod "pod-218b0e15-f340-49d3-9fbf-5081a3ee18e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012083817s STEP: Saw pod success Apr 21 14:11:47.399: INFO: Pod "pod-218b0e15-f340-49d3-9fbf-5081a3ee18e5" satisfied condition "success or failure" Apr 21 14:11:47.401: INFO: Trying to get logs from node iruya-worker2 pod pod-218b0e15-f340-49d3-9fbf-5081a3ee18e5 container test-container: STEP: delete the pod Apr 21 14:11:47.431: INFO: Waiting for pod pod-218b0e15-f340-49d3-9fbf-5081a3ee18e5 to disappear Apr 21 14:11:47.439: INFO: Pod pod-218b0e15-f340-49d3-9fbf-5081a3ee18e5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:11:47.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8867" for this suite. 
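The `(root,0644,default)` case above exercises an emptyDir volume on the default medium with a 0644 file written as root. A sketch of a pod of that kind (names and image are assumptions, not the generated ones from the log):

```shell
# Illustrative only; the real test uses the e2e mounttest image with its own flags.
cat <<'EOF' > /tmp/emptydir-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0644-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Write a file into the emptyDir with mode 0644 and print the mode back.
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # default medium: backed by node disk, not tmpfs
EOF
grep -q 'emptyDir' /tmp/emptydir-pod.yaml && echo manifest-ok
```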
Apr 21 14:11:53.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:11:53.529: INFO: namespace emptydir-8867 deletion completed in 6.086941291s • [SLOW TEST:10.205 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:11:53.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 21 14:11:53.574: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50485a80-b476-4d0c-b372-45b0b25d8a2e" in namespace "downward-api-2774" to be "success or failure" Apr 21 14:11:53.593: INFO: Pod "downwardapi-volume-50485a80-b476-4d0c-b372-45b0b25d8a2e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.659267ms Apr 21 14:11:55.598: INFO: Pod "downwardapi-volume-50485a80-b476-4d0c-b372-45b0b25d8a2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023449036s Apr 21 14:11:57.602: INFO: Pod "downwardapi-volume-50485a80-b476-4d0c-b372-45b0b25d8a2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027846335s STEP: Saw pod success Apr 21 14:11:57.602: INFO: Pod "downwardapi-volume-50485a80-b476-4d0c-b372-45b0b25d8a2e" satisfied condition "success or failure" Apr 21 14:11:57.605: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-50485a80-b476-4d0c-b372-45b0b25d8a2e container client-container: STEP: delete the pod Apr 21 14:11:57.642: INFO: Waiting for pod downwardapi-volume-50485a80-b476-4d0c-b372-45b0b25d8a2e to disappear Apr 21 14:11:57.655: INFO: Pod downwardapi-volume-50485a80-b476-4d0c-b372-45b0b25d8a2e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:11:57.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2774" for this suite. 
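The downward API volume in this test projects the container's cpu limit via `resourceFieldRef`; when no limit is set, the kubelet substitutes node allocatable cpu, which is what the test asserts. A sketch of such a pod written out as a manifest (pod and volume names are illustrative):

```shell
# Illustrative manifest only; names are not the generated ones from the log.
cat <<'EOF' > /tmp/downwardapi-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # No cpu limit is set, so limits.cpu falls back to node allocatable cpu.
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF
grep -c 'resourceFieldRef' /tmp/downwardapi-pod.yaml
```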
Apr 21 14:12:03.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:12:03.811: INFO: namespace downward-api-2774 deletion completed in 6.152481046s • [SLOW TEST:10.281 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:12:03.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Apr 21 14:12:07.890: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 21 14:12:12.984: INFO: no pod exists with 
the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:12:12.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5986" for this suite. Apr 21 14:12:19.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:12:19.090: INFO: namespace pods-5986 deletion completed in 6.099105152s • [SLOW TEST:15.277 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:12:19.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp 
+noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-388.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-388.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 21 14:12:25.194: INFO: DNS probes using dns-388/dns-test-cae16c10-46a1-4543-930c-24f67542e209 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:12:25.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-388" for this suite. 
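The probe scripts above derive each pod's DNS A-record name from its IP: dots become dashes, and the namespace (`dns-388` in this run) plus `pod.cluster.local` is appended. That step can be reproduced standalone; the IP below is an assumed example, not taken from the log:

```shell
# Pod A-record derivation used by the probe's awk step (example IP assumed).
ip="10.244.1.7"
podARec=$(echo "$ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-388.pod.cluster.local"}')
echo "$podARec"   # 10-244-1-7.dns-388.pod.cluster.local
```

The probe then runs `dig +notcp` (UDP) and `dig +tcp` against that name and writes an `OK` marker file per successful lookup, which is what "looking for the results for each expected name" collects.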
Apr 21 14:12:31.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:12:31.396: INFO: namespace dns-388 deletion completed in 6.164378328s • [SLOW TEST:12.306 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:12:31.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 21 14:12:31.489: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b00ee286-6ebe-410b-b757-0ac479b54114" in namespace "projected-4566" to be "success or failure" Apr 21 14:12:31.494: INFO: Pod "downwardapi-volume-b00ee286-6ebe-410b-b757-0ac479b54114": Phase="Pending", Reason="", readiness=false. Elapsed: 4.569595ms Apr 21 14:12:33.499: INFO: Pod "downwardapi-volume-b00ee286-6ebe-410b-b757-0ac479b54114": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009413513s Apr 21 14:12:35.638: INFO: Pod "downwardapi-volume-b00ee286-6ebe-410b-b757-0ac479b54114": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.148284261s STEP: Saw pod success Apr 21 14:12:35.638: INFO: Pod "downwardapi-volume-b00ee286-6ebe-410b-b757-0ac479b54114" satisfied condition "success or failure" Apr 21 14:12:35.650: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-b00ee286-6ebe-410b-b757-0ac479b54114 container client-container: STEP: delete the pod Apr 21 14:12:35.686: INFO: Waiting for pod downwardapi-volume-b00ee286-6ebe-410b-b757-0ac479b54114 to disappear Apr 21 14:12:35.698: INFO: Pod downwardapi-volume-b00ee286-6ebe-410b-b757-0ac479b54114 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:12:35.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4566" for this suite. 
Apr 21 14:12:41.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:12:41.826: INFO: namespace projected-4566 deletion completed in 6.124877773s • [SLOW TEST:10.430 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:12:41.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Apr 21 14:12:41.879: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2610" to be "success or failure" Apr 21 14:12:41.895: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 16.073369ms Apr 21 14:12:43.899: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020523753s Apr 21 14:12:45.904: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025401193s STEP: Saw pod success Apr 21 14:12:45.904: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Apr 21 14:12:45.908: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Apr 21 14:12:45.939: INFO: Waiting for pod pod-host-path-test to disappear Apr 21 14:12:45.942: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:12:45.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-2610" for this suite. Apr 21 14:12:51.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:12:52.054: INFO: namespace hostpath-2610 deletion completed in 6.108636153s • [SLOW TEST:10.228 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:12:52.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 21 14:12:52.118: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 21 14:13:01.181: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:13:01.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4779" for this suite. 
Apr 21 14:13:07.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:13:07.285: INFO: namespace pods-4779 deletion completed in 6.095995155s • [SLOW TEST:15.231 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:13:07.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Apr 21 14:13:07.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Apr 21 14:13:07.503: INFO: stderr: "" Apr 21 14:13:07.503: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:13:07.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4801" for this suite. 
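The api-versions conformance check above only asserts that the core `v1` group/version appears in the newline-joined `kubectl api-versions` stdout. The equivalent membership test, using an abbreviated version of the list logged above:

```python
# Abbreviated from the stdout captured in the log above; the conformance
# check only cares that the core "v1" group/version is present.
stdout = (
    "admissionregistration.k8s.io/v1beta1\n"
    "apps/v1\n"
    "batch/v1\n"
    "networking.k8s.io/v1\n"
    "v1\n"
)
versions = stdout.strip().split("\n")
print("v1" in versions)  # True
```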
Apr 21 14:13:13.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:13:13.602: INFO: namespace kubectl-4801 deletion completed in 6.094830785s • [SLOW TEST:6.316 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:13:13.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4180.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4180.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4180.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4180.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4180.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4180.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 21 14:13:19.737: INFO: DNS probes using dns-4180/dns-test-d034387f-5157-4cb1-a964-3f004f82c4cc succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:13:19.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4180" for this suite. 
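The awk pipeline in the probe scripts above turns the pod's IP into its DNS A-record name by joining the octets with dashes and appending `<namespace>.pod.cluster.local`. A Python mirror of that transformation (the IP `10.244.1.5` is a hypothetical example, not taken from the log):

```python
def pod_a_record(ip, namespace):
    # Mirrors the awk one-liner in the probe script: each octet of the pod
    # IP is joined with '-' and suffixed with "<ns>.pod.cluster.local".
    return "%s.%s.pod.cluster.local" % (ip.replace(".", "-"), namespace)

# Hypothetical pod IP for illustration.
print(pod_a_record("10.244.1.5", "dns-4180"))
# 10-244-1-5.dns-4180.pod.cluster.local
```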
Apr 21 14:13:25.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:13:25.945: INFO: namespace dns-4180 deletion completed in 6.14719782s • [SLOW TEST:12.343 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:13:25.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-xrdm STEP: Creating a pod to test atomic-volume-subpath Apr 21 14:13:26.018: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-xrdm" in namespace "subpath-1786" to be "success or failure" Apr 21 14:13:26.039: INFO: Pod "pod-subpath-test-configmap-xrdm": Phase="Pending", Reason="", readiness=false. Elapsed: 21.160867ms Apr 21 14:13:28.043: INFO: Pod "pod-subpath-test-configmap-xrdm": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.025578612s Apr 21 14:13:30.048: INFO: Pod "pod-subpath-test-configmap-xrdm": Phase="Running", Reason="", readiness=true. Elapsed: 4.029954547s Apr 21 14:13:32.052: INFO: Pod "pod-subpath-test-configmap-xrdm": Phase="Running", Reason="", readiness=true. Elapsed: 6.034090939s Apr 21 14:13:34.056: INFO: Pod "pod-subpath-test-configmap-xrdm": Phase="Running", Reason="", readiness=true. Elapsed: 8.038596848s Apr 21 14:13:36.061: INFO: Pod "pod-subpath-test-configmap-xrdm": Phase="Running", Reason="", readiness=true. Elapsed: 10.043115112s Apr 21 14:13:38.065: INFO: Pod "pod-subpath-test-configmap-xrdm": Phase="Running", Reason="", readiness=true. Elapsed: 12.047486819s Apr 21 14:13:40.069: INFO: Pod "pod-subpath-test-configmap-xrdm": Phase="Running", Reason="", readiness=true. Elapsed: 14.051438721s Apr 21 14:13:42.073: INFO: Pod "pod-subpath-test-configmap-xrdm": Phase="Running", Reason="", readiness=true. Elapsed: 16.055511054s Apr 21 14:13:44.079: INFO: Pod "pod-subpath-test-configmap-xrdm": Phase="Running", Reason="", readiness=true. Elapsed: 18.061260303s Apr 21 14:13:46.083: INFO: Pod "pod-subpath-test-configmap-xrdm": Phase="Running", Reason="", readiness=true. Elapsed: 20.065585107s Apr 21 14:13:48.093: INFO: Pod "pod-subpath-test-configmap-xrdm": Phase="Running", Reason="", readiness=true. Elapsed: 22.075368572s Apr 21 14:13:50.097: INFO: Pod "pod-subpath-test-configmap-xrdm": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.079627703s STEP: Saw pod success Apr 21 14:13:50.097: INFO: Pod "pod-subpath-test-configmap-xrdm" satisfied condition "success or failure" Apr 21 14:13:50.100: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-xrdm container test-container-subpath-configmap-xrdm: STEP: delete the pod Apr 21 14:13:50.119: INFO: Waiting for pod pod-subpath-test-configmap-xrdm to disappear Apr 21 14:13:50.123: INFO: Pod pod-subpath-test-configmap-xrdm no longer exists STEP: Deleting pod pod-subpath-test-configmap-xrdm Apr 21 14:13:50.123: INFO: Deleting pod "pod-subpath-test-configmap-xrdm" in namespace "subpath-1786" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:13:50.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1786" for this suite. Apr 21 14:13:56.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:13:56.231: INFO: namespace subpath-1786 deletion completed in 6.101924609s • [SLOW TEST:30.285 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 
STEP: Creating a kubernetes client Apr 21 14:13:56.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 21 14:13:56.294: INFO: Waiting up to 5m0s for pod "pod-5ac30c26-31e7-45b8-8b9c-c412fc110914" in namespace "emptydir-165" to be "success or failure" Apr 21 14:13:56.298: INFO: Pod "pod-5ac30c26-31e7-45b8-8b9c-c412fc110914": Phase="Pending", Reason="", readiness=false. Elapsed: 4.257824ms Apr 21 14:13:58.302: INFO: Pod "pod-5ac30c26-31e7-45b8-8b9c-c412fc110914": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007817535s Apr 21 14:14:00.306: INFO: Pod "pod-5ac30c26-31e7-45b8-8b9c-c412fc110914": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01227365s STEP: Saw pod success Apr 21 14:14:00.306: INFO: Pod "pod-5ac30c26-31e7-45b8-8b9c-c412fc110914" satisfied condition "success or failure" Apr 21 14:14:00.310: INFO: Trying to get logs from node iruya-worker pod pod-5ac30c26-31e7-45b8-8b9c-c412fc110914 container test-container: STEP: delete the pod Apr 21 14:14:00.365: INFO: Waiting for pod pod-5ac30c26-31e7-45b8-8b9c-c412fc110914 to disappear Apr 21 14:14:00.369: INFO: Pod pod-5ac30c26-31e7-45b8-8b9c-c412fc110914 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:14:00.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-165" for this suite. 
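The "Waiting up to 5m0s for pod ... to be 'success or failure'" lines above come from the framework polling the pod phase until it reaches a terminal state. A minimal sketch of that wait loop, assuming a caller-supplied `get_phase` callable (the real implementation lives in the Go e2e framework, not here):

```python
import time

def wait_for_success_or_failure(get_phase, timeout=300.0, interval=0.01):
    """Poll get_phase() until a terminal pod phase is seen or timeout expires.

    Minimal sketch of the e2e framework's "success or failure" wait; the
    actual Go implementation differs in detail.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase")

# Simulated phase sequence matching the Pending -> Running -> Succeeded
# transitions logged above.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
print(wait_for_success_or_failure(lambda: next(phases)))  # Succeeded
```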
Apr 21 14:14:06.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:14:06.463: INFO: namespace emptydir-165 deletion completed in 6.089395083s • [SLOW TEST:10.231 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:14:06.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Apr 21 14:14:06.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6215' Apr 21 14:14:06.774: INFO: stderr: "" Apr 21 14:14:06.774: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 21 14:14:06.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6215' Apr 21 14:14:06.877: INFO: stderr: "" Apr 21 14:14:06.877: INFO: stdout: "update-demo-nautilus-cf69j update-demo-nautilus-jsw86 " Apr 21 14:14:06.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cf69j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6215' Apr 21 14:14:06.973: INFO: stderr: "" Apr 21 14:14:06.973: INFO: stdout: "" Apr 21 14:14:06.973: INFO: update-demo-nautilus-cf69j is created but not running Apr 21 14:14:11.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6215' Apr 21 14:14:12.079: INFO: stderr: "" Apr 21 14:14:12.079: INFO: stdout: "update-demo-nautilus-cf69j update-demo-nautilus-jsw86 " Apr 21 14:14:12.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cf69j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6215' Apr 21 14:14:12.182: INFO: stderr: "" Apr 21 14:14:12.182: INFO: stdout: "true" Apr 21 14:14:12.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cf69j -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6215' Apr 21 14:14:12.275: INFO: stderr: "" Apr 21 14:14:12.275: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 21 14:14:12.275: INFO: validating pod update-demo-nautilus-cf69j Apr 21 14:14:12.279: INFO: got data: { "image": "nautilus.jpg" } Apr 21 14:14:12.279: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 21 14:14:12.279: INFO: update-demo-nautilus-cf69j is verified up and running Apr 21 14:14:12.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jsw86 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6215' Apr 21 14:14:12.387: INFO: stderr: "" Apr 21 14:14:12.387: INFO: stdout: "true" Apr 21 14:14:12.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jsw86 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6215' Apr 21 14:14:12.489: INFO: stderr: "" Apr 21 14:14:12.489: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 21 14:14:12.489: INFO: validating pod update-demo-nautilus-jsw86 Apr 21 14:14:12.493: INFO: got data: { "image": "nautilus.jpg" } Apr 21 14:14:12.493: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 21 14:14:12.493: INFO: update-demo-nautilus-jsw86 is verified up and running STEP: scaling down the replication controller Apr 21 14:14:12.496: INFO: scanned /root for discovery docs: Apr 21 14:14:12.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6215' Apr 21 14:14:13.657: INFO: stderr: "" Apr 21 14:14:13.657: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 21 14:14:13.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6215' Apr 21 14:14:13.747: INFO: stderr: "" Apr 21 14:14:13.747: INFO: stdout: "update-demo-nautilus-cf69j update-demo-nautilus-jsw86 " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 21 14:14:18.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6215' Apr 21 14:14:18.844: INFO: stderr: "" Apr 21 14:14:18.844: INFO: stdout: "update-demo-nautilus-jsw86 " Apr 21 14:14:18.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jsw86 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6215' Apr 21 14:14:18.935: INFO: stderr: "" Apr 21 14:14:18.935: INFO: stdout: "true" Apr 21 14:14:18.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jsw86 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6215' Apr 21 14:14:19.034: INFO: stderr: "" Apr 21 14:14:19.034: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 21 14:14:19.034: INFO: validating pod update-demo-nautilus-jsw86 Apr 21 14:14:19.037: INFO: got data: { "image": "nautilus.jpg" } Apr 21 14:14:19.037: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 21 14:14:19.037: INFO: update-demo-nautilus-jsw86 is verified up and running STEP: scaling up the replication controller Apr 21 14:14:19.039: INFO: scanned /root for discovery docs: Apr 21 14:14:19.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6215' Apr 21 14:14:20.172: INFO: stderr: "" Apr 21 14:14:20.172: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 21 14:14:20.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6215' Apr 21 14:14:20.280: INFO: stderr: "" Apr 21 14:14:20.280: INFO: stdout: "update-demo-nautilus-jsw86 update-demo-nautilus-mwdln " Apr 21 14:14:20.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jsw86 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6215' Apr 21 14:14:20.380: INFO: stderr: "" Apr 21 14:14:20.380: INFO: stdout: "true" Apr 21 14:14:20.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jsw86 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6215' Apr 21 14:14:20.478: INFO: stderr: "" Apr 21 14:14:20.478: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 21 14:14:20.478: INFO: validating pod update-demo-nautilus-jsw86 Apr 21 14:14:20.481: INFO: got data: { "image": "nautilus.jpg" } Apr 21 14:14:20.482: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 21 14:14:20.482: INFO: update-demo-nautilus-jsw86 is verified up and running Apr 21 14:14:20.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mwdln -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6215' Apr 21 14:14:20.585: INFO: stderr: "" Apr 21 14:14:20.585: INFO: stdout: "" Apr 21 14:14:20.585: INFO: update-demo-nautilus-mwdln is created but not running Apr 21 14:14:25.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6215' Apr 21 14:14:25.693: INFO: stderr: "" Apr 21 14:14:25.693: INFO: stdout: "update-demo-nautilus-jsw86 update-demo-nautilus-mwdln " Apr 21 14:14:25.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jsw86 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6215' Apr 21 14:14:25.790: INFO: stderr: "" Apr 21 14:14:25.790: INFO: stdout: "true" Apr 21 14:14:25.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jsw86 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6215' Apr 21 14:14:25.885: INFO: stderr: "" Apr 21 14:14:25.885: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 21 14:14:25.885: INFO: validating pod update-demo-nautilus-jsw86 Apr 21 14:14:25.889: INFO: got data: { "image": "nautilus.jpg" } Apr 21 14:14:25.889: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 21 14:14:25.889: INFO: update-demo-nautilus-jsw86 is verified up and running Apr 21 14:14:25.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mwdln -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6215' Apr 21 14:14:25.981: INFO: stderr: "" Apr 21 14:14:25.981: INFO: stdout: "true" Apr 21 14:14:25.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mwdln -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6215' Apr 21 14:14:26.072: INFO: stderr: "" Apr 21 14:14:26.073: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 21 14:14:26.073: INFO: validating pod update-demo-nautilus-mwdln Apr 21 14:14:26.088: INFO: got data: { "image": "nautilus.jpg" } Apr 21 14:14:26.088: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 21 14:14:26.088: INFO: update-demo-nautilus-mwdln is verified up and running STEP: using delete to clean up resources Apr 21 14:14:26.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6215' Apr 21 14:14:26.203: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 21 14:14:26.203: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 21 14:14:26.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6215' Apr 21 14:14:26.301: INFO: stderr: "No resources found.\n" Apr 21 14:14:26.301: INFO: stdout: "" Apr 21 14:14:26.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6215 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 21 14:14:26.390: INFO: stderr: "" Apr 21 14:14:26.390: INFO: stdout: "update-demo-nautilus-jsw86\nupdate-demo-nautilus-mwdln\n" Apr 21 14:14:26.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6215' Apr 21 14:14:26.991: INFO: stderr: "No resources found.\n" Apr 21 14:14:26.991: INFO: stdout: "" Apr 21 14:14:26.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6215 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 21 14:14:27.093: INFO: stderr: "" Apr 21 14:14:27.093: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:14:27.093: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6215" for this suite. Apr 21 14:14:33.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:14:33.315: INFO: namespace kubectl-6215 deletion completed in 6.218738158s • [SLOW TEST:26.852 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:14:33.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 21 14:14:33.399: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 5.478966ms)
Apr 21 14:14:33.403: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.74423ms)
Apr 21 14:14:33.413: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 10.288452ms)
Apr 21 14:14:33.416: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.810814ms)
Apr 21 14:14:33.419: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.214909ms)
Apr 21 14:14:33.423: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.79566ms)
Apr 21 14:14:33.427: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.814378ms)
Apr 21 14:14:33.431: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.715963ms)
Apr 21 14:14:33.434: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.535332ms)
Apr 21 14:14:33.437: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.163372ms)
Apr 21 14:14:33.440: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.694672ms)
Apr 21 14:14:33.443: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.831175ms)
Apr 21 14:14:33.446: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.94019ms)
Apr 21 14:14:33.449: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.970745ms)
Apr 21 14:14:33.452: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.851527ms)
Apr 21 14:14:33.455: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.246702ms)
Apr 21 14:14:33.459: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.788311ms)
Apr 21 14:14:33.463: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.584103ms)
Apr 21 14:14:33.466: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.460383ms)
Apr 21 14:14:33.469: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/
(200; 2.653384ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:14:33.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4645" for this suite. Apr 21 14:14:39.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:14:39.587: INFO: namespace proxy-4645 deletion completed in 6.115451112s • [SLOW TEST:6.272 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:14:39.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 21 14:14:39.695: INFO: Waiting up to 5m0s for pod "downward-api-14508a52-51f4-4f6b-9739-87ba73fda90c" in namespace "downward-api-7559" to be "success or failure" Apr 
21 14:14:39.700: INFO: Pod "downward-api-14508a52-51f4-4f6b-9739-87ba73fda90c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.649084ms Apr 21 14:14:41.704: INFO: Pod "downward-api-14508a52-51f4-4f6b-9739-87ba73fda90c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009215439s Apr 21 14:14:43.709: INFO: Pod "downward-api-14508a52-51f4-4f6b-9739-87ba73fda90c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014002092s STEP: Saw pod success Apr 21 14:14:43.709: INFO: Pod "downward-api-14508a52-51f4-4f6b-9739-87ba73fda90c" satisfied condition "success or failure" Apr 21 14:14:43.712: INFO: Trying to get logs from node iruya-worker pod downward-api-14508a52-51f4-4f6b-9739-87ba73fda90c container dapi-container: STEP: delete the pod Apr 21 14:14:43.744: INFO: Waiting for pod downward-api-14508a52-51f4-4f6b-9739-87ba73fda90c to disappear Apr 21 14:14:43.770: INFO: Pod downward-api-14508a52-51f4-4f6b-9739-87ba73fda90c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:14:43.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7559" for this suite. 
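The Downward API test above asserts that pod metadata shows up as environment variables inside the container. That wiring comes from `valueFrom.fieldRef` entries in the pod spec. A minimal sketch of the env-var list the test exercises, built as plain data so it runs anywhere (the variable names are illustrative, not the exact spec the e2e framework generates; the `fieldPath` values are the standard downward-API paths):

```python
# Sketch of downward-API env wiring: each env var points at a pod field
# via fieldRef. metadata.name, metadata.namespace and status.podIP are
# the real Kubernetes field paths this conformance test checks.
def downward_api_env():
    fields = {
        "POD_NAME": "metadata.name",
        "POD_NAMESPACE": "metadata.namespace",
        "POD_IP": "status.podIP",
    }
    return [
        {"name": var, "valueFrom": {"fieldRef": {"fieldPath": path}}}
        for var, path in fields.items()
    ]

env = downward_api_env()
print(env[0]["valueFrom"]["fieldRef"]["fieldPath"])  # metadata.name
```

The test then reads the container's log output and checks each variable was populated before tearing the pod down, which is the "success or failure" condition seen in the log.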
Apr 21 14:14:49.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:14:49.866: INFO: namespace downward-api-7559 deletion completed in 6.092020762s • [SLOW TEST:10.278 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:14:49.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 21 14:14:49.932: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4521,SelfLink:/api/v1/namespaces/watch-4521/configmaps/e2e-watch-test-configmap-a,UID:e34f791c-2daf-4475-9531-131fef1f85ad,ResourceVersion:6652789,Generation:0,CreationTimestamp:2020-04-21 14:14:49 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 21 14:14:49.932: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4521,SelfLink:/api/v1/namespaces/watch-4521/configmaps/e2e-watch-test-configmap-a,UID:e34f791c-2daf-4475-9531-131fef1f85ad,ResourceVersion:6652789,Generation:0,CreationTimestamp:2020-04-21 14:14:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 21 14:14:59.941: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4521,SelfLink:/api/v1/namespaces/watch-4521/configmaps/e2e-watch-test-configmap-a,UID:e34f791c-2daf-4475-9531-131fef1f85ad,ResourceVersion:6652809,Generation:0,CreationTimestamp:2020-04-21 14:14:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 21 14:14:59.941: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4521,SelfLink:/api/v1/namespaces/watch-4521/configmaps/e2e-watch-test-configmap-a,UID:e34f791c-2daf-4475-9531-131fef1f85ad,ResourceVersion:6652809,Generation:0,CreationTimestamp:2020-04-21 14:14:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 21 14:15:09.949: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4521,SelfLink:/api/v1/namespaces/watch-4521/configmaps/e2e-watch-test-configmap-a,UID:e34f791c-2daf-4475-9531-131fef1f85ad,ResourceVersion:6652829,Generation:0,CreationTimestamp:2020-04-21 14:14:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 21 14:15:09.949: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4521,SelfLink:/api/v1/namespaces/watch-4521/configmaps/e2e-watch-test-configmap-a,UID:e34f791c-2daf-4475-9531-131fef1f85ad,ResourceVersion:6652829,Generation:0,CreationTimestamp:2020-04-21 14:14:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 21 14:15:19.957: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4521,SelfLink:/api/v1/namespaces/watch-4521/configmaps/e2e-watch-test-configmap-a,UID:e34f791c-2daf-4475-9531-131fef1f85ad,ResourceVersion:6652849,Generation:0,CreationTimestamp:2020-04-21 14:14:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 21 14:15:19.957: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4521,SelfLink:/api/v1/namespaces/watch-4521/configmaps/e2e-watch-test-configmap-a,UID:e34f791c-2daf-4475-9531-131fef1f85ad,ResourceVersion:6652849,Generation:0,CreationTimestamp:2020-04-21 14:14:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 21 14:15:29.964: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4521,SelfLink:/api/v1/namespaces/watch-4521/configmaps/e2e-watch-test-configmap-b,UID:47855367-d6a1-4fea-b7a7-3c5405824ff3,ResourceVersion:6652871,Generation:0,CreationTimestamp:2020-04-21 14:15:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 21 14:15:29.965: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4521,SelfLink:/api/v1/namespaces/watch-4521/configmaps/e2e-watch-test-configmap-b,UID:47855367-d6a1-4fea-b7a7-3c5405824ff3,ResourceVersion:6652871,Generation:0,CreationTimestamp:2020-04-21 14:15:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 21 14:15:39.971: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4521,SelfLink:/api/v1/namespaces/watch-4521/configmaps/e2e-watch-test-configmap-b,UID:47855367-d6a1-4fea-b7a7-3c5405824ff3,ResourceVersion:6652892,Generation:0,CreationTimestamp:2020-04-21 14:15:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 21 14:15:39.972: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4521,SelfLink:/api/v1/namespaces/watch-4521/configmaps/e2e-watch-test-configmap-b,UID:47855367-d6a1-4fea-b7a7-3c5405824ff3,ResourceVersion:6652892,Generation:0,CreationTimestamp:2020-04-21 14:15:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:15:49.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4521" for this suite. 
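The Watchers test above sets up three watches with label selectors (label A, label B, and A-or-B) and verifies each watcher observes exactly the events for configmaps matching its selector — which is why every ADDED/MODIFIED/DELETED ConfigMap dump appears twice in the log: the dedicated watcher and the A-or-B watcher both receive it. A sketch of that selector fan-out logic (function and variable names are mine, not from the test source):

```python
# Which watcher sees an event for a given configmap? A watcher receives
# the event iff the configmap's "watch-this-configmap" label value is in
# the watcher's selector set.
def matches(selector_values, labels):
    return labels.get("watch-this-configmap") in selector_values

watch_a = {"multiple-watchers-A"}
watch_b = {"multiple-watchers-B"}
watch_ab = watch_a | watch_b  # the "A or B" watch

cm_a = {"watch-this-configmap": "multiple-watchers-A"}

# Events on configmap A reach watcher A and the A-or-B watcher only,
# so each notification is logged exactly twice above.
observers = [matches(s, cm_a) for s in (watch_a, watch_b, watch_ab)]
print(observers)  # [True, False, True]
```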
Apr 21 14:15:56.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:15:56.110: INFO: namespace watch-4521 deletion completed in 6.132980961s • [SLOW TEST:66.244 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:15:56.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6239.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6239.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6239.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6239.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6239.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local;check="$$(dig +tcp 
+noall +answer +search _http._tcp.dns-test-service.dns-6239.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6239.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6239.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6239.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6239.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6239.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 194.103.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.103.194_udp@PTR;check="$$(dig +tcp +noall +answer +search 194.103.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.103.194_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6239.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6239.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6239.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6239.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6239.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6239.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6239.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6239.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6239.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6239.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6239.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 194.103.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.103.194_udp@PTR;check="$$(dig +tcp +noall +answer +search 194.103.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.103.194_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 21 14:16:03.038: INFO: Unable to read wheezy_udp@dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:03.042: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:03.045: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:03.048: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:03.070: INFO: Unable to read jessie_udp@dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:03.073: INFO: Unable to read jessie_tcp@dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:03.076: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local from pod 
dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:03.079: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:03.097: INFO: Lookups using dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610 failed for: [wheezy_udp@dns-test-service.dns-6239.svc.cluster.local wheezy_tcp@dns-test-service.dns-6239.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local jessie_udp@dns-test-service.dns-6239.svc.cluster.local jessie_tcp@dns-test-service.dns-6239.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local] Apr 21 14:16:08.102: INFO: Unable to read wheezy_udp@dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:08.106: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:08.109: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:08.112: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local from pod 
dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:08.130: INFO: Unable to read jessie_udp@dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:08.133: INFO: Unable to read jessie_tcp@dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:08.136: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:08.139: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:08.157: INFO: Lookups using dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610 failed for: [wheezy_udp@dns-test-service.dns-6239.svc.cluster.local wheezy_tcp@dns-test-service.dns-6239.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local jessie_udp@dns-test-service.dns-6239.svc.cluster.local jessie_tcp@dns-test-service.dns-6239.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local] Apr 21 14:16:13.120: INFO: Unable to read wheezy_udp@dns-test-service.dns-6239.svc.cluster.local from pod 
dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:13.123: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:13.126: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:13.129: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:13.148: INFO: Unable to read jessie_udp@dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:13.151: INFO: Unable to read jessie_tcp@dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:13.154: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:13.157: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not 
find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:13.172: INFO: Lookups using dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610 failed for: [wheezy_udp@dns-test-service.dns-6239.svc.cluster.local wheezy_tcp@dns-test-service.dns-6239.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local jessie_udp@dns-test-service.dns-6239.svc.cluster.local jessie_tcp@dns-test-service.dns-6239.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local] Apr 21 14:16:18.103: INFO: Unable to read wheezy_udp@dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:18.107: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:18.110: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:18.113: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:18.132: INFO: Unable to read jessie_udp@dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods 
dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:18.134: INFO: Unable to read jessie_tcp@dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:18.137: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:18.140: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:18.159: INFO: Lookups using dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610 failed for: [wheezy_udp@dns-test-service.dns-6239.svc.cluster.local wheezy_tcp@dns-test-service.dns-6239.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local jessie_udp@dns-test-service.dns-6239.svc.cluster.local jessie_tcp@dns-test-service.dns-6239.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local] Apr 21 14:16:23.103: INFO: Unable to read wheezy_udp@dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:23.110: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods 
dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:23.113: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:23.137: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:23.160: INFO: Unable to read jessie_udp@dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:23.163: INFO: Unable to read jessie_tcp@dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:23.166: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:23.169: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:23.189: INFO: Lookups using dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610 failed for: [wheezy_udp@dns-test-service.dns-6239.svc.cluster.local wheezy_tcp@dns-test-service.dns-6239.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local 
wheezy_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local jessie_udp@dns-test-service.dns-6239.svc.cluster.local jessie_tcp@dns-test-service.dns-6239.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local] Apr 21 14:16:28.102: INFO: Unable to read wheezy_udp@dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:28.106: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:28.110: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:28.113: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:28.132: INFO: Unable to read jessie_udp@dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:28.135: INFO: Unable to read jessie_tcp@dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:28.137: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:28.140: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local from pod dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610: the server could not find the requested resource (get pods dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610) Apr 21 14:16:28.156: INFO: Lookups using dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610 failed for: [wheezy_udp@dns-test-service.dns-6239.svc.cluster.local wheezy_tcp@dns-test-service.dns-6239.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local jessie_udp@dns-test-service.dns-6239.svc.cluster.local jessie_tcp@dns-test-service.dns-6239.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6239.svc.cluster.local] Apr 21 14:16:33.162: INFO: DNS probes using dns-6239/dns-test-fbc88ed6-e98b-4a0f-8826-293de0eda610 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:16:33.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6239" for this suite. 
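[Editor's note] The SRV-style names probed above (e.g. `_http._tcp.dns-test-service.dns-6239.svc.cluster.local`) are only published once a Service exposes a *named* port. A minimal sketch of such a Service, reusing the service name and namespace from this run; the selector label and port number are illustrative assumptions, not taken from the log:

```yaml
# Hypothetical reconstruction of the service probed by the DNS test.
# Name and namespace come from the log; selector and port are assumed.
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service
  namespace: dns-6239
spec:
  selector:
    dns-test: "true"      # assumed pod label
  ports:
  - name: http            # a named port is what creates _http._tcp SRV records
    port: 80
    protocol: TCP
```

The early "could not find the requested resource" probes followed by "DNS probes ... succeeded" are the expected eventual-consistency pattern: the test polls until records propagate.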
Apr 21 14:16:39.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:16:39.782: INFO: namespace dns-6239 deletion completed in 6.107253542s • [SLOW TEST:43.671 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:16:39.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 21 14:16:39.914: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-8841,SelfLink:/api/v1/namespaces/watch-8841/configmaps/e2e-watch-test-resource-version,UID:71d7cd43-847c-4fa0-928e-06e3e738a384,ResourceVersion:6653078,Generation:0,CreationTimestamp:2020-04-21 14:16:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 21 14:16:39.914: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-8841,SelfLink:/api/v1/namespaces/watch-8841/configmaps/e2e-watch-test-resource-version,UID:71d7cd43-847c-4fa0-928e-06e3e738a384,ResourceVersion:6653079,Generation:0,CreationTimestamp:2020-04-21 14:16:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:16:39.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8841" for this suite. 
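[Editor's note] The MODIFIED and DELETED events above both carry ResourceVersions (6653078, 6653079) later than the watch's start version, which is exactly what "start watching from a specific resource version" asserts. The object in those event dumps can be reconstructed as a manifest; only fields visible in the log are included:

```yaml
# ConfigMap reconstructed from the watch event dump above;
# only fields shown in the log are reproduced here.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-resource-version
  namespace: watch-8841
  labels:
    watch-this-configmap: from-resource-version
data:
  mutation: "2"
```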
Apr 21 14:16:45.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:16:46.045: INFO: namespace watch-8841 deletion completed in 6.126164203s • [SLOW TEST:6.263 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:16:46.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-419fa1e9-2143-4682-ba80-5950ae902693 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-419fa1e9-2143-4682-ba80-5950ae902693 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:16:52.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7097" for this suite. 
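[Editor's note] The projected-configMap case above creates a pod mounting the configMap through a `projected` volume, edits the configMap, and waits for the kubelet to sync the change into the mounted file. A sketch of such a pod, using the configMap name from the log; the pod name, image, and mount path are illustrative assumptions:

```yaml
# Sketch of a pod consuming a configMap via a projected volume.
# The configMap name matches the log; everything else is assumed.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmap   # assumed name
  namespace: projected-7097
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox                # assumed image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-upd-419fa1e9-2143-4682-ba80-5950ae902693
```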
Apr 21 14:17:14.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:17:14.310: INFO: namespace projected-7097 deletion completed in 22.125593806s • [SLOW TEST:28.265 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:17:14.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 21 14:17:14.437: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Apr 21 14:17:14.452: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:17:14.458: INFO: Number of nodes with available pods: 0 Apr 21 14:17:14.458: INFO: Node iruya-worker is running more than one daemon pod Apr 21 14:17:15.463: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:17:15.467: INFO: Number of nodes with available pods: 0 Apr 21 14:17:15.467: INFO: Node iruya-worker is running more than one daemon pod Apr 21 14:17:16.463: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:17:16.466: INFO: Number of nodes with available pods: 0 Apr 21 14:17:16.466: INFO: Node iruya-worker is running more than one daemon pod Apr 21 14:17:17.468: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:17:17.470: INFO: Number of nodes with available pods: 0 Apr 21 14:17:17.470: INFO: Node iruya-worker is running more than one daemon pod Apr 21 14:17:18.464: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:17:18.467: INFO: Number of nodes with available pods: 2 Apr 21 14:17:18.467: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 21 14:17:18.500: INFO: Wrong image for pod: daemon-set-crz8z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 21 14:17:18.500: INFO: Wrong image for pod: daemon-set-vr8cz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 21 14:17:18.506: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:17:19.511: INFO: Wrong image for pod: daemon-set-crz8z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 21 14:17:19.511: INFO: Wrong image for pod: daemon-set-vr8cz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 21 14:17:19.515: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:17:20.511: INFO: Wrong image for pod: daemon-set-crz8z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 21 14:17:20.511: INFO: Wrong image for pod: daemon-set-vr8cz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 21 14:17:20.516: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:17:21.511: INFO: Wrong image for pod: daemon-set-crz8z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 21 14:17:21.511: INFO: Pod daemon-set-crz8z is not available Apr 21 14:17:21.511: INFO: Wrong image for pod: daemon-set-vr8cz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 21 14:17:21.516: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:17:22.511: INFO: Pod daemon-set-ln2vj is not available Apr 21 14:17:22.511: INFO: Wrong image for pod: daemon-set-vr8cz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 21 14:17:22.515: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:17:23.511: INFO: Pod daemon-set-ln2vj is not available Apr 21 14:17:23.511: INFO: Wrong image for pod: daemon-set-vr8cz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 21 14:17:23.515: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:17:24.511: INFO: Pod daemon-set-ln2vj is not available Apr 21 14:17:24.511: INFO: Wrong image for pod: daemon-set-vr8cz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 21 14:17:24.515: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:17:25.511: INFO: Wrong image for pod: daemon-set-vr8cz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 21 14:17:25.515: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:17:26.511: INFO: Wrong image for pod: daemon-set-vr8cz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 21 14:17:26.515: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:17:27.511: INFO: Wrong image for pod: daemon-set-vr8cz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 21 14:17:27.511: INFO: Pod daemon-set-vr8cz is not available Apr 21 14:17:27.516: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:17:28.511: INFO: Wrong image for pod: daemon-set-vr8cz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 21 14:17:28.511: INFO: Pod daemon-set-vr8cz is not available Apr 21 14:17:28.515: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:17:29.511: INFO: Wrong image for pod: daemon-set-vr8cz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 21 14:17:29.511: INFO: Pod daemon-set-vr8cz is not available Apr 21 14:17:29.516: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:17:30.511: INFO: Wrong image for pod: daemon-set-vr8cz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 21 14:17:30.511: INFO: Pod daemon-set-vr8cz is not available Apr 21 14:17:30.515: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:17:31.511: INFO: Wrong image for pod: daemon-set-vr8cz. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 21 14:17:31.511: INFO: Pod daemon-set-vr8cz is not available Apr 21 14:17:31.514: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:17:32.510: INFO: Pod daemon-set-k2vv5 is not available Apr 21 14:17:32.514: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Apr 21 14:17:32.516: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:17:32.519: INFO: Number of nodes with available pods: 1 Apr 21 14:17:32.519: INFO: Node iruya-worker2 is running more than one daemon pod Apr 21 14:17:33.612: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:17:33.615: INFO: Number of nodes with available pods: 1 Apr 21 14:17:33.615: INFO: Node iruya-worker2 is running more than one daemon pod Apr 21 14:17:34.524: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:17:34.528: INFO: Number of nodes with available pods: 1 Apr 21 14:17:34.528: INFO: Node iruya-worker2 is running more than one daemon pod Apr 21 14:17:35.523: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:17:35.526: INFO: Number of nodes with available pods: 2 Apr 21 14:17:35.526: 
INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4551, will wait for the garbage collector to delete the pods Apr 21 14:17:35.601: INFO: Deleting DaemonSet.extensions daemon-set took: 7.116079ms Apr 21 14:17:35.902: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.339712ms Apr 21 14:17:41.906: INFO: Number of nodes with available pods: 0 Apr 21 14:17:41.906: INFO: Number of running nodes: 0, number of available pods: 0 Apr 21 14:17:41.908: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4551/daemonsets","resourceVersion":"6653313"},"items":null} Apr 21 14:17:41.910: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4551/pods","resourceVersion":"6653313"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:17:41.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4551" for this suite. 
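[Editor's note] The update loop above rolls the DaemonSet pods from `docker.io/library/nginx:1.14-alpine` to `gcr.io/kubernetes-e2e-test-images/redis:1.0` one pod at a time, and repeatedly skips `iruya-control-plane` because the pod template carries no toleration for the master `NoSchedule` taint. A sketch of the DaemonSet after the image update; the images, name, and namespace come from the log, while the selector labels are assumptions:

```yaml
# Sketch of the DaemonSet under test after the image update.
# Images/name/namespace come from the log; labels are assumed.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-4551
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set   # assumed label
  updateStrategy:
    type: RollingUpdate            # the strategy this test exercises
  template:
    metadata:
      labels:
        daemonset-name: daemon-set # assumed label
    spec:
      # No toleration for node-role.kubernetes.io/master:NoSchedule,
      # so the control-plane node is skipped, as the log notes.
      containers:
      - name: app
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```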
Apr 21 14:17:47.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:17:48.013: INFO: namespace daemonsets-4551 deletion completed in 6.091099203s • [SLOW TEST:33.703 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:17:48.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:17:52.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1341" for this suite. 
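[Editor's note] The "EmptyDir wrapper volumes should not conflict" case above mounts a secret volume and a configMap volume in the same pod and verifies their kubelet-managed wrapper directories do not collide; the log only shows the cleanup steps, so in the sketch below every name is an assumption:

```yaml
# Minimal sketch of the wrapper-volume scenario; the log shows only
# cleanup, so all names here are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-and-configmap   # assumed name
  namespace: emptydir-wrapper-1341
spec:
  containers:
  - name: test-container
    image: busybox                 # assumed image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapper-test-secret      # assumed name
  - name: configmap-volume
    configMap:
      name: wrapper-test-configmap         # assumed name
```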
Apr 21 14:17:58.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:17:58.373: INFO: namespace emptydir-wrapper-1341 deletion completed in 6.122011597s • [SLOW TEST:10.360 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:17:58.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 21 14:17:58.445: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 21 14:18:03.450: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 21 14:18:03.450: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 21 14:18:05.455: INFO: Creating deployment "test-rollover-deployment" Apr 21 14:18:05.504: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 21 14:18:07.510: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 21 14:18:07.516: INFO: Ensure that both replica sets have 1 created 
replica Apr 21 14:18:07.520: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 21 14:18:07.526: INFO: Updating deployment test-rollover-deployment Apr 21 14:18:07.526: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 21 14:18:09.618: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 21 14:18:09.658: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 21 14:18:09.664: INFO: all replica sets need to contain the pod-template-hash label Apr 21 14:18:09.664: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723075485, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723075485, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723075487, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723075485, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 21 14:18:11.672: INFO: all replica sets need to contain the pod-template-hash label Apr 21 14:18:11.672: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723075485, 
loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723075485, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723075491, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723075485, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 21 14:18:13.672: INFO: all replica sets need to contain the pod-template-hash label Apr 21 14:18:13.672: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723075485, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723075485, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723075491, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723075485, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 21 14:18:15.671: INFO: all replica sets need to contain the pod-template-hash label Apr 21 14:18:15.671: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723075485, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723075485, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723075491, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723075485, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 21 14:18:17.671: INFO: all replica sets need to contain the pod-template-hash label Apr 21 14:18:17.671: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723075485, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723075485, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723075491, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723075485, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 21 14:18:19.673: INFO: all replica sets need to contain the pod-template-hash label Apr 21 14:18:19.673: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723075485, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723075485, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723075491, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723075485, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 21 14:18:21.671: INFO: Apr 21 14:18:21.671: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 21 14:18:21.895: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-3745,SelfLink:/apis/apps/v1/namespaces/deployment-3745/deployments/test-rollover-deployment,UID:b89a59b9-aa9b-4655-aba9-533d5590fde2,ResourceVersion:6653525,Generation:2,CreationTimestamp:2020-04-21 14:18:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-21 14:18:05 +0000 UTC 2020-04-21 
14:18:05 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-21 14:18:21 +0000 UTC 2020-04-21 14:18:05 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 21 14:18:21.898: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-3745,SelfLink:/apis/apps/v1/namespaces/deployment-3745/replicasets/test-rollover-deployment-854595fc44,UID:01581066-bd22-4cf9-8321-023bb95fb6dc,ResourceVersion:6653514,Generation:2,CreationTimestamp:2020-04-21 14:18:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b89a59b9-aa9b-4655-aba9-533d5590fde2 0xc002c6aa97 0xc002c6aa98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 21 14:18:21.898: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 21 14:18:21.898: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-3745,SelfLink:/apis/apps/v1/namespaces/deployment-3745/replicasets/test-rollover-controller,UID:c32e81ee-c051-4171-ba4e-6922a9fa3c4a,ResourceVersion:6653524,Generation:2,CreationTimestamp:2020-04-21 14:17:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 
b89a59b9-aa9b-4655-aba9-533d5590fde2 0xc002c6a9c7 0xc002c6a9c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 21 14:18:21.898: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-3745,SelfLink:/apis/apps/v1/namespaces/deployment-3745/replicasets/test-rollover-deployment-9b8b997cf,UID:f4fb7e91-be1e-406a-b750-e01f1b5abfc2,ResourceVersion:6653473,Generation:2,CreationTimestamp:2020-04-21 14:18:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b89a59b9-aa9b-4655-aba9-533d5590fde2 0xc002c6ab70 0xc002c6ab71}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 21 14:18:21.901: INFO: Pod "test-rollover-deployment-854595fc44-cdcnk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-cdcnk,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-3745,SelfLink:/api/v1/namespaces/deployment-3745/pods/test-rollover-deployment-854595fc44-cdcnk,UID:23d9c4e1-042c-41b1-9cb9-b123f329a37f,ResourceVersion:6653492,Generation:0,CreationTimestamp:2020-04-21 14:18:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 01581066-bd22-4cf9-8321-023bb95fb6dc 0xc000055857 0xc000055858}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dm64j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dm64j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-dm64j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0000559a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000055da0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:18:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:18:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:18:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:18:07 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.121,StartTime:2020-04-21 14:18:07 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-21 14:18:10 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://58dc98dfde3b86dc4d0efdba8ac26439c2ae5234faf62a27907f5c013bd771e7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:18:21.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3745" for this suite. Apr 21 14:18:27.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:18:28.032: INFO: namespace deployment-3745 deletion completed in 6.128199414s • [SLOW TEST:29.659 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:18:28.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Apr 21 14:18:28.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Apr 21 14:18:28.245: INFO: stderr: "" Apr 21 14:18:28.245: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:18:28.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8777" for this suite. Apr 21 14:18:34.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:18:34.359: INFO: namespace kubectl-8777 deletion completed in 6.110204275s • [SLOW TEST:6.327 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:18:34.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 21 14:18:37.524: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:18:37.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9784" for this suite. 
Apr 21 14:18:43.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:18:43.659: INFO: namespace container-runtime-9784 deletion completed in 6.083729507s • [SLOW TEST:9.299 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:18:43.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 21 14:18:43.718: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bec5ebea-4bb8-42df-bbc0-c20a8317f8ac" in 
namespace "projected-8687" to be "success or failure" Apr 21 14:18:43.722: INFO: Pod "downwardapi-volume-bec5ebea-4bb8-42df-bbc0-c20a8317f8ac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.41455ms Apr 21 14:18:45.726: INFO: Pod "downwardapi-volume-bec5ebea-4bb8-42df-bbc0-c20a8317f8ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007631212s Apr 21 14:18:47.730: INFO: Pod "downwardapi-volume-bec5ebea-4bb8-42df-bbc0-c20a8317f8ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012083456s STEP: Saw pod success Apr 21 14:18:47.730: INFO: Pod "downwardapi-volume-bec5ebea-4bb8-42df-bbc0-c20a8317f8ac" satisfied condition "success or failure" Apr 21 14:18:47.734: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-bec5ebea-4bb8-42df-bbc0-c20a8317f8ac container client-container: STEP: delete the pod Apr 21 14:18:47.755: INFO: Waiting for pod downwardapi-volume-bec5ebea-4bb8-42df-bbc0-c20a8317f8ac to disappear Apr 21 14:18:47.771: INFO: Pod downwardapi-volume-bec5ebea-4bb8-42df-bbc0-c20a8317f8ac no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:18:47.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8687" for this suite. 
Apr 21 14:18:53.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:18:53.900: INFO: namespace projected-8687 deletion completed in 6.125418658s • [SLOW TEST:10.241 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:18:53.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:18:58.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2933" for this suite. 
Apr 21 14:19:38.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:19:38.141: INFO: namespace kubelet-test-2933 deletion completed in 40.108539969s • [SLOW TEST:44.242 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:19:38.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W0421 14:19:39.249503 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 21 14:19:39.249: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:19:39.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8215" for this suite. 
Apr 21 14:19:45.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:19:45.447: INFO: namespace gc-8215 deletion completed in 6.194539201s • [SLOW TEST:7.305 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:19:45.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-e8e82ddc-9c9f-4e93-a5d1-5816cc94a7af STEP: Creating a pod to test consume secrets Apr 21 14:19:45.619: INFO: Waiting up to 5m0s for pod "pod-secrets-4e4e3984-5bc3-44f3-a693-2ac40105187c" in namespace "secrets-9178" to be "success or failure" Apr 21 14:19:45.623: INFO: Pod "pod-secrets-4e4e3984-5bc3-44f3-a693-2ac40105187c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.993739ms Apr 21 14:19:47.627: INFO: Pod "pod-secrets-4e4e3984-5bc3-44f3-a693-2ac40105187c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007848419s Apr 21 14:19:49.631: INFO: Pod "pod-secrets-4e4e3984-5bc3-44f3-a693-2ac40105187c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012241806s STEP: Saw pod success Apr 21 14:19:49.631: INFO: Pod "pod-secrets-4e4e3984-5bc3-44f3-a693-2ac40105187c" satisfied condition "success or failure" Apr 21 14:19:49.635: INFO: Trying to get logs from node iruya-worker pod pod-secrets-4e4e3984-5bc3-44f3-a693-2ac40105187c container secret-volume-test: STEP: delete the pod Apr 21 14:19:49.667: INFO: Waiting for pod pod-secrets-4e4e3984-5bc3-44f3-a693-2ac40105187c to disappear Apr 21 14:19:49.677: INFO: Pod pod-secrets-4e4e3984-5bc3-44f3-a693-2ac40105187c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:19:49.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9178" for this suite. Apr 21 14:19:55.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:19:55.797: INFO: namespace secrets-9178 deletion completed in 6.115943942s STEP: Destroying namespace "secret-namespace-7129" for this suite. 
Apr 21 14:20:01.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:20:01.915: INFO: namespace secret-namespace-7129 deletion completed in 6.118018546s • [SLOW TEST:16.468 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:20:01.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 21 14:20:06.513: INFO: Successfully updated pod "labelsupdate4c02a539-028d-44d9-bcce-1490716a4b83" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:20:08.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3571" for this suite. 
Apr 21 14:20:30.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:20:30.628: INFO: namespace projected-3571 deletion completed in 22.094623108s • [SLOW TEST:28.712 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:20:30.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 21 14:20:30.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-7718' Apr 21 14:20:30.815: INFO: stderr: "" Apr 21 14:20:30.815: INFO: stdout: 
"pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Apr 21 14:20:35.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-7718 -o json' Apr 21 14:20:35.969: INFO: stderr: "" Apr 21 14:20:35.969: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-21T14:20:30Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-7718\",\n \"resourceVersion\": \"6654005\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7718/pods/e2e-test-nginx-pod\",\n \"uid\": \"6a7437d8-4b29-42ad-88b1-9652ca7f7013\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-jnhzn\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-jnhzn\",\n \"secret\": {\n \"defaultMode\": 420,\n 
\"secretName\": \"default-token-jnhzn\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-21T14:20:30Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-21T14:20:33Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-21T14:20:33Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-21T14:20:30Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://a64b41a81b76ca78801afbd16aac8f8e133f799669070efb9b1ad32ff68d2d38\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-21T14:20:33Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.6\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.14\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-21T14:20:30Z\"\n }\n}\n" STEP: replace the image in the pod Apr 21 14:20:35.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7718' Apr 21 14:20:36.231: INFO: stderr: "" Apr 21 14:20:36.231: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Apr 21 14:20:36.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-7718' Apr 21 14:20:42.182: INFO: 
stderr: ""
Apr 21 14:20:42.182: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 14:20:42.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7718" for this suite.
Apr 21 14:20:48.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 14:20:48.269: INFO: namespace kubectl-7718 deletion completed in 6.083058695s
• [SLOW TEST:17.641 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 14:20:48.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-6b36324b-4891-425c-b737-d319ea2a7404
STEP: Creating a pod to test consume secrets
Apr 21 14:20:48.330: INFO: Waiting up to 5m0s for pod
"pod-secrets-8447c994-93e1-4431-9a54-df6bc91d1968" in namespace "secrets-6076" to be "success or failure"
Apr 21 14:20:48.340: INFO: Pod "pod-secrets-8447c994-93e1-4431-9a54-df6bc91d1968": Phase="Pending", Reason="", readiness=false. Elapsed: 9.815525ms
Apr 21 14:20:50.345: INFO: Pod "pod-secrets-8447c994-93e1-4431-9a54-df6bc91d1968": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01447809s
Apr 21 14:20:52.348: INFO: Pod "pod-secrets-8447c994-93e1-4431-9a54-df6bc91d1968": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017555775s
STEP: Saw pod success
Apr 21 14:20:52.348: INFO: Pod "pod-secrets-8447c994-93e1-4431-9a54-df6bc91d1968" satisfied condition "success or failure"
Apr 21 14:20:52.351: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-8447c994-93e1-4431-9a54-df6bc91d1968 container secret-env-test:
STEP: delete the pod
Apr 21 14:20:52.365: INFO: Waiting for pod pod-secrets-8447c994-93e1-4431-9a54-df6bc91d1968 to disappear
Apr 21 14:20:52.370: INFO: Pod pod-secrets-8447c994-93e1-4431-9a54-df6bc91d1968 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 14:20:52.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6076" for this suite.
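[Editor's note] The secrets-in-env-vars test above creates a Secret and a pod whose container reads the secret value through an environment variable, then expects the pod to reach "Succeeded". A minimal sketch of that setup (names and key are illustrative, not the UUID-suffixed ones the framework generates):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-env-demo          # hypothetical; the test uses secret-test-<uuid>
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo         # hypothetical; the test uses pod-secrets-<uuid>
spec:
  restartPolicy: Never           # pod should run to completion ("success or failure" check)
  containers:
  - name: secret-env-test
    image: docker.io/library/busybox:1.29
    # print the injected variable so the test can read it back from the container log
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: data-1
```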
Apr 21 14:20:58.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 14:20:58.469: INFO: namespace secrets-6076 deletion completed in 6.094934517s
• [SLOW TEST:10.200 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 14:20:58.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Apr 21 14:20:58.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7045 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Apr 21 14:21:01.813: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version.
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0421 14:21:01.738483 2996 log.go:172] (0xc0008ce420) (0xc00083c460) Create stream\nI0421 14:21:01.738534 2996 log.go:172] (0xc0008ce420) (0xc00083c460) Stream added, broadcasting: 1\nI0421 14:21:01.742127 2996 log.go:172] (0xc0008ce420) Reply frame received for 1\nI0421 14:21:01.742176 2996 log.go:172] (0xc0008ce420) (0xc00059a000) Create stream\nI0421 14:21:01.742188 2996 log.go:172] (0xc0008ce420) (0xc00059a000) Stream added, broadcasting: 3\nI0421 14:21:01.743360 2996 log.go:172] (0xc0008ce420) Reply frame received for 3\nI0421 14:21:01.743414 2996 log.go:172] (0xc0008ce420) (0xc00083c000) Create stream\nI0421 14:21:01.743431 2996 log.go:172] (0xc0008ce420) (0xc00083c000) Stream added, broadcasting: 5\nI0421 14:21:01.744137 2996 log.go:172] (0xc0008ce420) Reply frame received for 5\nI0421 14:21:01.744169 2996 log.go:172] (0xc0008ce420) (0xc00083c0a0) Create stream\nI0421 14:21:01.744176 2996 log.go:172] (0xc0008ce420) (0xc00083c0a0) Stream added, broadcasting: 7\nI0421 14:21:01.744950 2996 log.go:172] (0xc0008ce420) Reply frame received for 7\nI0421 14:21:01.745280 2996 log.go:172] (0xc00059a000) (3) Writing data frame\nI0421 14:21:01.745419 2996 log.go:172] (0xc00059a000) (3) Writing data frame\nI0421 14:21:01.746604 2996 log.go:172] (0xc0008ce420) Data frame received for 5\nI0421 14:21:01.746624 2996 log.go:172] (0xc00083c000) (5) Data frame handling\nI0421 14:21:01.746643 2996 log.go:172] (0xc00083c000) (5) Data frame sent\nI0421 14:21:01.746669 2996 log.go:172] (0xc0008ce420) Data frame received for 5\nI0421 14:21:01.746676 2996 log.go:172] (0xc00083c000) (5) Data frame handling\nI0421 14:21:01.746683 2996 log.go:172] (0xc00083c000) (5) Data frame sent\nI0421 14:21:01.787716 2996 log.go:172] (0xc0008ce420) Data frame received for 5\nI0421 14:21:01.787758 2996 log.go:172] (0xc00083c000) (5) Data frame handling\nI0421 14:21:01.787780 2996 
log.go:172] (0xc0008ce420) Data frame received for 7\nI0421 14:21:01.787796 2996 log.go:172] (0xc00083c0a0) (7) Data frame handling\nI0421 14:21:01.788291 2996 log.go:172] (0xc0008ce420) Data frame received for 1\nI0421 14:21:01.788330 2996 log.go:172] (0xc0008ce420) (0xc00059a000) Stream removed, broadcasting: 3\nI0421 14:21:01.788400 2996 log.go:172] (0xc00083c460) (1) Data frame handling\nI0421 14:21:01.788443 2996 log.go:172] (0xc00083c460) (1) Data frame sent\nI0421 14:21:01.788538 2996 log.go:172] (0xc0008ce420) (0xc00083c460) Stream removed, broadcasting: 1\nI0421 14:21:01.788570 2996 log.go:172] (0xc0008ce420) Go away received\nI0421 14:21:01.788627 2996 log.go:172] (0xc0008ce420) (0xc00083c460) Stream removed, broadcasting: 1\nI0421 14:21:01.788648 2996 log.go:172] (0xc0008ce420) (0xc00059a000) Stream removed, broadcasting: 3\nI0421 14:21:01.788661 2996 log.go:172] (0xc0008ce420) (0xc00083c000) Stream removed, broadcasting: 5\nI0421 14:21:01.788681 2996 log.go:172] (0xc0008ce420) (0xc00083c0a0) Stream removed, broadcasting: 7\n" Apr 21 14:21:01.813: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:21:03.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7045" for this suite. 
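[Editor's note] The stderr above warns that `kubectl run --generator=job/v1` is deprecated. The Job that command creates can be expressed directly as a batch/v1 manifest, roughly as sketched below (a sketch only: the attach/stdin/--rm semantics of the logged command, which stream `abcd1234` to the container and delete the Job afterwards, are not captured by a plain manifest):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure      # from --restart=OnFailure in the logged command
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        stdin: true                 # approximates --stdin; attach still requires kubectl attach
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
```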
Apr 21 14:21:13.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 14:21:13.926: INFO: namespace kubectl-7045 deletion completed in 10.097103821s
• [SLOW TEST:15.456 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 14:21:13.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 21 14:21:13.988: INFO: Creating deployment "nginx-deployment"
Apr 21 14:21:14.003: INFO: Waiting for observed generation 1
Apr 21 14:21:16.063: INFO: Waiting for all required pods to come up
Apr 21 14:21:16.068: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Apr 21 14:21:24.078: INFO: Waiting for deployment "nginx-deployment" to complete
Apr 21 14:21:24.084: INFO: Updating deployment "nginx-deployment" with
a non-existent image
Apr 21 14:21:24.089: INFO: Updating deployment nginx-deployment
Apr 21 14:21:24.089: INFO: Waiting for observed generation 2
Apr 21 14:21:26.139: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Apr 21 14:21:26.141: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Apr 21 14:21:26.143: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Apr 21 14:21:26.149: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Apr 21 14:21:26.149: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Apr 21 14:21:26.151: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Apr 21 14:21:26.154: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Apr 21 14:21:26.154: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Apr 21 14:21:26.159: INFO: Updating deployment nginx-deployment
Apr 21 14:21:26.159: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Apr 21 14:21:26.280: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Apr 21 14:21:26.323: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Apr 21 14:21:28.697: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-5960,SelfLink:/apis/apps/v1/namespaces/deployment-5960/deployments/nginx-deployment,UID:1e4bc3a2-c014-47e3-91a5-131068e383f8,ResourceVersion:6654429,Generation:3,CreationTimestamp:2020-04-21 14:21:13 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-04-21 14:21:26 +0000 UTC 2020-04-21 14:21:26 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-04-21 14:21:26 +0000 UTC 2020-04-21 14:21:14 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Apr 21 14:21:28.819: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-5960,SelfLink:/apis/apps/v1/namespaces/deployment-5960/replicasets/nginx-deployment-55fb7cb77f,UID:058a17de-ddca-4244-994b-6e4e228303af,ResourceVersion:6654423,Generation:3,CreationTimestamp:2020-04-21 14:21:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 1e4bc3a2-c014-47e3-91a5-131068e383f8 0xc00290cfa7 0xc00290cfa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 21 14:21:28.819: INFO: All old ReplicaSets of Deployment "nginx-deployment": Apr 21 14:21:28.819: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-5960,SelfLink:/apis/apps/v1/namespaces/deployment-5960/replicasets/nginx-deployment-7b8c6f4498,UID:1b1c4e90-0a89-4704-997b-6f7d32eab989,ResourceVersion:6654409,Generation:3,CreationTimestamp:2020-04-21 14:21:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 1e4bc3a2-c014-47e3-91a5-131068e383f8 0xc00290d077 0xc00290d078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Apr 21 14:21:28.850: INFO: Pod "nginx-deployment-55fb7cb77f-4qbjj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4qbjj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-55fb7cb77f-4qbjj,UID:2210891b-180b-4e28-a556-cb8269d7e571,ResourceVersion:6654408,Generation:0,CreationTimestamp:2020-04-21 14:21:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 058a17de-ddca-4244-994b-6e4e228303af 0xc00290d9d7 0xc00290d9d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc00290da50} {node.kubernetes.io/unreachable Exists NoExecute 0xc00290da70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.850: INFO: Pod "nginx-deployment-55fb7cb77f-7cwbr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7cwbr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-55fb7cb77f-7cwbr,UID:255a188f-ffb3-46a8-8e77-b89854adb071,ResourceVersion:6654431,Generation:0,CreationTimestamp:2020-04-21 14:21:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 058a17de-ddca-4244-994b-6e4e228303af 0xc00290daf7 0xc00290daf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00290db80} {node.kubernetes.io/unreachable Exists NoExecute 0xc00290dba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-21 14:21:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.850: INFO: Pod "nginx-deployment-55fb7cb77f-7fp6z" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7fp6z,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-55fb7cb77f-7fp6z,UID:3b216135-e7bd-454c-9b9d-ca4c45459996,ResourceVersion:6654353,Generation:0,CreationTimestamp:2020-04-21 14:21:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 058a17de-ddca-4244-994b-6e4e228303af 0xc00290dc70 0xc00290dc71}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc00290dcf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00290dd10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:24 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-21 14:21:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.851: INFO: Pod "nginx-deployment-55fb7cb77f-bxswk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bxswk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-55fb7cb77f-bxswk,UID:ea1ad45f-d142-4f3a-a157-fb6f5264f950,ResourceVersion:6654332,Generation:0,CreationTimestamp:2020-04-21 14:21:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 058a17de-ddca-4244-994b-6e4e228303af 0xc00290dde0 0xc00290dde1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00290de60} {node.kubernetes.io/unreachable Exists NoExecute 0xc00290de80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:24 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-21 14:21:24 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.851: INFO: Pod "nginx-deployment-55fb7cb77f-gfwfg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gfwfg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-55fb7cb77f-gfwfg,UID:e5aa2c96-dfd9-48b9-b48d-bd046e9698fc,ResourceVersion:6654335,Generation:0,CreationTimestamp:2020-04-21 14:21:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 058a17de-ddca-4244-994b-6e4e228303af 0xc00290df50 0xc00290df51}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00290dfd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00290dff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:24 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-21 14:21:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.851: INFO: Pod "nginx-deployment-55fb7cb77f-gq7rv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gq7rv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-55fb7cb77f-gq7rv,UID:e40e7217-2e51-4f44-a788-e95551b46904,ResourceVersion:6654417,Generation:0,CreationTimestamp:2020-04-21 14:21:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 058a17de-ddca-4244-994b-6e4e228303af 0xc0038a00e0 0xc0038a00e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0038a0160} {node.kubernetes.io/unreachable Exists NoExecute 0xc0038a0180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.851: INFO: Pod "nginx-deployment-55fb7cb77f-hgk95" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hgk95,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-55fb7cb77f-hgk95,UID:f076c469-e90a-47ff-b2f5-359026f271ee,ResourceVersion:6654415,Generation:0,CreationTimestamp:2020-04-21 14:21:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 058a17de-ddca-4244-994b-6e4e228303af 0xc0038a0207 0xc0038a0208}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0038a0280} {node.kubernetes.io/unreachable Exists NoExecute 0xc0038a02a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.851: INFO: Pod "nginx-deployment-55fb7cb77f-q29jl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-q29jl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-55fb7cb77f-q29jl,UID:22371823-e64e-4559-80b8-828dcbe43be7,ResourceVersion:6654325,Generation:0,CreationTimestamp:2020-04-21 14:21:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 058a17de-ddca-4244-994b-6e4e228303af 0xc0038a0327 0xc0038a0328}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0038a03a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0038a03c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:24 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-21 14:21:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.851: INFO: Pod "nginx-deployment-55fb7cb77f-qz9gz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qz9gz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-55fb7cb77f-qz9gz,UID:5470d888-593f-4b3b-b41c-7beb326f3ef7,ResourceVersion:6654465,Generation:0,CreationTimestamp:2020-04-21 14:21:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 058a17de-ddca-4244-994b-6e4e228303af 0xc0038a0490 0xc0038a0491}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0038a0510} {node.kubernetes.io/unreachable Exists NoExecute 0xc0038a0530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-21 14:21:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.851: INFO: Pod "nginx-deployment-55fb7cb77f-s8gmv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-s8gmv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-55fb7cb77f-s8gmv,UID:9d283ea9-d6cb-48e5-b0a7-2637ccf40846,ResourceVersion:6654434,Generation:0,CreationTimestamp:2020-04-21 14:21:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 058a17de-ddca-4244-994b-6e4e228303af 0xc0038a0600 0xc0038a0601}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0038a0680} {node.kubernetes.io/unreachable Exists NoExecute 0xc0038a06a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-21 14:21:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.851: INFO: Pod "nginx-deployment-55fb7cb77f-tplsf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tplsf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-55fb7cb77f-tplsf,UID:3b820192-3eeb-443e-af70-def3004caec3,ResourceVersion:6654352,Generation:0,CreationTimestamp:2020-04-21 14:21:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 058a17de-ddca-4244-994b-6e4e228303af 0xc0038a0770 0xc0038a0771}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0038a07f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0038a0810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:24 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-21 14:21:24 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.852: INFO: Pod "nginx-deployment-55fb7cb77f-vc27d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vc27d,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-55fb7cb77f-vc27d,UID:97f698e2-66e8-4a49-b4a9-c016d8d315be,ResourceVersion:6654416,Generation:0,CreationTimestamp:2020-04-21 14:21:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 058a17de-ddca-4244-994b-6e4e228303af 0xc0038a08e0 0xc0038a08e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0038a0960} {node.kubernetes.io/unreachable Exists NoExecute 0xc0038a0980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.852: INFO: Pod "nginx-deployment-55fb7cb77f-vhhxw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vhhxw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-55fb7cb77f-vhhxw,UID:52405f8a-1387-4be8-b670-aab161ee6d20,ResourceVersion:6654418,Generation:0,CreationTimestamp:2020-04-21 14:21:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 058a17de-ddca-4244-994b-6e4e228303af 0xc0038a0a07 0xc0038a0a08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0038a0a80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0038a0aa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.852: INFO: Pod "nginx-deployment-7b8c6f4498-5lhp5" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5lhp5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-7b8c6f4498-5lhp5,UID:1df2987a-29d9-469b-9c9e-acb3ee0b4b5b,ResourceVersion:6654271,Generation:0,CreationTimestamp:2020-04-21 14:21:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1b1c4e90-0a89-4704-997b-6f7d32eab989 0xc0038a0b27 0xc0038a0b28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0038a0ba0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0038a0bc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.129,StartTime:2020-04-21 14:21:14 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-21 14:21:21 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e78af5934b7e8f2cfa860cb9d2bd836d540ad6ea1e60e9d512ba72a46fc7bc55}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.852: INFO: Pod "nginx-deployment-7b8c6f4498-7h4s4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7h4s4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-7b8c6f4498-7h4s4,UID:06eccded-bce8-48a7-98a0-03de4ba899d2,ResourceVersion:6654403,Generation:0,CreationTimestamp:2020-04-21 14:21:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1b1c4e90-0a89-4704-997b-6f7d32eab989 0xc0038a0c97 0xc0038a0c98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0038a0d10} {node.kubernetes.io/unreachable Exists NoExecute 0xc0038a0d30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-21 14:21:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.852: INFO: Pod "nginx-deployment-7b8c6f4498-7nt6q" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7nt6q,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-7b8c6f4498-7nt6q,UID:38d5b58a-da9b-405e-bb31-66321d97f164,ResourceVersion:6654462,Generation:0,CreationTimestamp:2020-04-21 14:21:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1b1c4e90-0a89-4704-997b-6f7d32eab989 0xc0038a0df7 0xc0038a0df8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0038a0e70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0038a0e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-21 14:21:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.852: INFO: Pod "nginx-deployment-7b8c6f4498-9lc2g" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9lc2g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-7b8c6f4498-9lc2g,UID:a0d3c47a-54fc-4c0b-b1c9-7ac34591f259,ResourceVersion:6654262,Generation:0,CreationTimestamp:2020-04-21 14:21:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1b1c4e90-0a89-4704-997b-6f7d32eab989 0xc0038a0f57 0xc0038a0f58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0038a0fd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0038a0ff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.16,StartTime:2020-04-21 14:21:14 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-21 14:21:19 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://57824181dddd26aa057944d821dd9e555166e3881d7d09df76a776416f744041}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.852: INFO: Pod "nginx-deployment-7b8c6f4498-9spvr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9spvr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-7b8c6f4498-9spvr,UID:171d198a-598f-4b4c-a4c2-6e7f1ec80a32,ResourceVersion:6654468,Generation:0,CreationTimestamp:2020-04-21 14:21:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1b1c4e90-0a89-4704-997b-6f7d32eab989 0xc0038a10c7 0xc0038a10c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0038a1140} {node.kubernetes.io/unreachable Exists NoExecute 0xc0038a1160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-21 14:21:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.853: INFO: Pod "nginx-deployment-7b8c6f4498-dfpjc" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dfpjc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-7b8c6f4498-dfpjc,UID:9fb070e4-6a37-4cb5-8fa1-9815a4b49458,ResourceVersion:6654257,Generation:0,CreationTimestamp:2020-04-21 14:21:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1b1c4e90-0a89-4704-997b-6f7d32eab989 0xc0038a1227 0xc0038a1228}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0038a12a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0038a12c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.127,StartTime:2020-04-21 14:21:14 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-21 14:21:20 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3412799dbd43f7e7518fbc3f7eab9614df861325398e16f7cce4bdd323bd4242}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.853: INFO: Pod "nginx-deployment-7b8c6f4498-dsfdc" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dsfdc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-7b8c6f4498-dsfdc,UID:5cdedbd1-e917-48e4-ac7d-d606121d42ec,ResourceVersion:6654452,Generation:0,CreationTimestamp:2020-04-21 14:21:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1b1c4e90-0a89-4704-997b-6f7d32eab989 0xc0038a1397 0xc0038a1398}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0038a1410} {node.kubernetes.io/unreachable Exists NoExecute 0xc0038a1430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-21 14:21:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.853: INFO: Pod "nginx-deployment-7b8c6f4498-g69vx" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-g69vx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-7b8c6f4498-g69vx,UID:abb2d2aa-068f-4390-880c-a01569b73209,ResourceVersion:6654239,Generation:0,CreationTimestamp:2020-04-21 14:21:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1b1c4e90-0a89-4704-997b-6f7d32eab989 0xc0038a14f7 0xc0038a14f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0038a1570} {node.kubernetes.io/unreachable Exists NoExecute 0xc0038a1590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.126,StartTime:2020-04-21 14:21:14 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-21 14:21:17 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://85a83ac5e4f1a9784fde35e0f1bd3e5df2ddf04b0b17573737c1496bc8600475}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.853: INFO: Pod "nginx-deployment-7b8c6f4498-gpwf6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gpwf6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-7b8c6f4498-gpwf6,UID:9a01833d-763a-4050-a799-a4f94f14601a,ResourceVersion:6654450,Generation:0,CreationTimestamp:2020-04-21 14:21:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1b1c4e90-0a89-4704-997b-6f7d32eab989 0xc0038a1667 0xc0038a1668}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0038a16e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0038a1700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-21 14:21:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.853: INFO: Pod "nginx-deployment-7b8c6f4498-hhzht" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hhzht,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-7b8c6f4498-hhzht,UID:9d5836d9-39f9-4691-9b32-65861bfe8d7c,ResourceVersion:6654428,Generation:0,CreationTimestamp:2020-04-21 14:21:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1b1c4e90-0a89-4704-997b-6f7d32eab989 0xc0038a17c7 0xc0038a17c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0038a1840} {node.kubernetes.io/unreachable Exists NoExecute 0xc0038a1860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-21 14:21:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.853: INFO: Pod "nginx-deployment-7b8c6f4498-kpz55" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kpz55,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-7b8c6f4498-kpz55,UID:131f84f6-8101-47e7-9af5-d30a6217eadc,ResourceVersion:6654414,Generation:0,CreationTimestamp:2020-04-21 14:21:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1b1c4e90-0a89-4704-997b-6f7d32eab989 0xc0038a1927 0xc0038a1928}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0038a19a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0038a19c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.853: INFO: Pod "nginx-deployment-7b8c6f4498-l6q8n" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-l6q8n,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-7b8c6f4498-l6q8n,UID:c9f6e134-f5dd-4bd7-8765-89557c02a821,ResourceVersion:6654443,Generation:0,CreationTimestamp:2020-04-21 14:21:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1b1c4e90-0a89-4704-997b-6f7d32eab989 0xc0038a1a47 0xc0038a1a48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0038a1ac0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0038a1ae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-21 14:21:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.853: INFO: Pod "nginx-deployment-7b8c6f4498-lkf6p" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lkf6p,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-7b8c6f4498-lkf6p,UID:1005a9ed-6bc6-4887-9f7e-e29b73528cc6,ResourceVersion:6654459,Generation:0,CreationTimestamp:2020-04-21 14:21:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1b1c4e90-0a89-4704-997b-6f7d32eab989 0xc0038a1ba7 0xc0038a1ba8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0038a1c20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0038a1c40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-21 14:21:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.853: INFO: Pod "nginx-deployment-7b8c6f4498-nl9hg" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nl9hg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-7b8c6f4498-nl9hg,UID:aa8e7653-731a-4a2d-b461-36d4348e563c,ResourceVersion:6654269,Generation:0,CreationTimestamp:2020-04-21 14:21:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1b1c4e90-0a89-4704-997b-6f7d32eab989 0xc0038a1d07 0xc0038a1d08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0038a1d80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0038a1da0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.128,StartTime:2020-04-21 14:21:14 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-21 14:21:21 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://66333208d846b225b383d981854a6abd85b318990f4244f041364dd20e69b21a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.854: INFO: Pod "nginx-deployment-7b8c6f4498-pcdv8" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pcdv8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-7b8c6f4498-pcdv8,UID:b4d79185-d7a1-41ff-8f8c-de6570ff34fa,ResourceVersion:6654299,Generation:0,CreationTimestamp:2020-04-21 14:21:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1b1c4e90-0a89-4704-997b-6f7d32eab989 0xc0038a1e87 0xc0038a1e88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0038a1f00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0038a1f20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.18,StartTime:2020-04-21 14:21:14 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-21 14:21:22 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d39d9f923f970e16358c982be654263bc055e2a99f8e2ff68392c54ce7aa09c7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.854: INFO: Pod "nginx-deployment-7b8c6f4498-qz7db" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qz7db,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-7b8c6f4498-qz7db,UID:03a7f840-c673-4312-9b7f-c2580105d3d7,ResourceVersion:6654278,Generation:0,CreationTimestamp:2020-04-21 14:21:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1b1c4e90-0a89-4704-997b-6f7d32eab989 0xc0038a1ff7 0xc0038a1ff8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f26070} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f26090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.17,StartTime:2020-04-21 14:21:14 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-21 14:21:20 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://61a03f59a322ecb1085371e708f0938f821b1321cbaf3c36c45393094693526e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.854: INFO: Pod "nginx-deployment-7b8c6f4498-svdbt" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-svdbt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-7b8c6f4498-svdbt,UID:fa7272ce-655a-4069-b706-05f1f9c9ea59,ResourceVersion:6654437,Generation:0,CreationTimestamp:2020-04-21 14:21:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1b1c4e90-0a89-4704-997b-6f7d32eab989 0xc002f26167 0xc002f26168}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f261e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f26200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-21 14:21:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.854: INFO: Pod "nginx-deployment-7b8c6f4498-tlf6p" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tlf6p,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-7b8c6f4498-tlf6p,UID:47a42fea-1f21-4fcf-9a83-e7a1728c1187,ResourceVersion:6654293,Generation:0,CreationTimestamp:2020-04-21 14:21:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1b1c4e90-0a89-4704-997b-6f7d32eab989 0xc002f262c7 0xc002f262c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f26340} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f26360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.19,StartTime:2020-04-21 14:21:14 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-21 14:21:23 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://36b4d58be38ac99fcff1e0ee28d38002b76a15128d9e8c50acaf3ead67bf2040}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.854: INFO: Pod "nginx-deployment-7b8c6f4498-xng7f" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xng7f,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-7b8c6f4498-xng7f,UID:5b07bbc7-4472-43cb-a75d-2b7052ce5181,ResourceVersion:6654413,Generation:0,CreationTimestamp:2020-04-21 14:21:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1b1c4e90-0a89-4704-997b-6f7d32eab989 0xc002f26437 0xc002f26438}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f264d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f264f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:21:28.854: INFO: Pod "nginx-deployment-7b8c6f4498-zmskx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zmskx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5960,SelfLink:/api/v1/namespaces/deployment-5960/pods/nginx-deployment-7b8c6f4498-zmskx,UID:b48d94f7-474b-49bc-bc20-1113895a1292,ResourceVersion:6654406,Generation:0,CreationTimestamp:2020-04-21 14:21:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1b1c4e90-0a89-4704-997b-6f7d32eab989 0xc002f26587 0xc002f26588}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2h5jp 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2h5jp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2h5jp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f26610} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f26630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:21:26 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-21 14:21:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:21:28.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5960" for this suite. Apr 21 14:21:45.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:21:45.343: INFO: namespace deployment-5960 deletion completed in 16.367729354s • [SLOW TEST:31.417 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:21:45.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name 
configmap-test-volume-847e192f-02d7-49d0-8b54-a508c383fd17 STEP: Creating a pod to test consume configMaps Apr 21 14:21:45.506: INFO: Waiting up to 5m0s for pod "pod-configmaps-abe16561-2c1f-4c48-b8f5-a06e662f2d2d" in namespace "configmap-3326" to be "success or failure" Apr 21 14:21:45.579: INFO: Pod "pod-configmaps-abe16561-2c1f-4c48-b8f5-a06e662f2d2d": Phase="Pending", Reason="", readiness=false. Elapsed: 72.964767ms Apr 21 14:21:47.586: INFO: Pod "pod-configmaps-abe16561-2c1f-4c48-b8f5-a06e662f2d2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079851419s Apr 21 14:21:49.589: INFO: Pod "pod-configmaps-abe16561-2c1f-4c48-b8f5-a06e662f2d2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083674118s STEP: Saw pod success Apr 21 14:21:49.590: INFO: Pod "pod-configmaps-abe16561-2c1f-4c48-b8f5-a06e662f2d2d" satisfied condition "success or failure" Apr 21 14:21:49.592: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-abe16561-2c1f-4c48-b8f5-a06e662f2d2d container configmap-volume-test: STEP: delete the pod Apr 21 14:21:49.613: INFO: Waiting for pod pod-configmaps-abe16561-2c1f-4c48-b8f5-a06e662f2d2d to disappear Apr 21 14:21:49.666: INFO: Pod pod-configmaps-abe16561-2c1f-4c48-b8f5-a06e662f2d2d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:21:49.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3326" for this suite. 
Apr 21 14:21:55.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:21:55.758: INFO: namespace configmap-3326 deletion completed in 6.085469545s • [SLOW TEST:10.414 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:21:55.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-29c7ef6f-74c3-4ccb-94bd-40f8e0add2a4 STEP: Creating a pod to test consume secrets Apr 21 14:21:55.820: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-760c3fe2-0014-4ac6-bc4b-6084409765b0" in namespace "projected-5786" to be "success or failure" Apr 21 14:21:55.833: INFO: Pod "pod-projected-secrets-760c3fe2-0014-4ac6-bc4b-6084409765b0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.296113ms Apr 21 14:21:57.837: INFO: Pod "pod-projected-secrets-760c3fe2-0014-4ac6-bc4b-6084409765b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017157029s Apr 21 14:21:59.842: INFO: Pod "pod-projected-secrets-760c3fe2-0014-4ac6-bc4b-6084409765b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021671731s STEP: Saw pod success Apr 21 14:21:59.842: INFO: Pod "pod-projected-secrets-760c3fe2-0014-4ac6-bc4b-6084409765b0" satisfied condition "success or failure" Apr 21 14:21:59.845: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-760c3fe2-0014-4ac6-bc4b-6084409765b0 container projected-secret-volume-test: STEP: delete the pod Apr 21 14:21:59.880: INFO: Waiting for pod pod-projected-secrets-760c3fe2-0014-4ac6-bc4b-6084409765b0 to disappear Apr 21 14:21:59.890: INFO: Pod pod-projected-secrets-760c3fe2-0014-4ac6-bc4b-6084409765b0 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:21:59.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5786" for this suite. 
Apr 21 14:22:05.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:22:06.012: INFO: namespace projected-5786 deletion completed in 6.118503177s • [SLOW TEST:10.253 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:22:06.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 21 14:22:06.107: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4824d0dd-7f1c-4540-b320-02dcc517129f" in namespace "projected-4543" to be "success or failure" Apr 21 14:22:06.112: INFO: Pod "downwardapi-volume-4824d0dd-7f1c-4540-b320-02dcc517129f": Phase="Pending", Reason="", 
readiness=false. Elapsed: 5.572382ms Apr 21 14:22:08.116: INFO: Pod "downwardapi-volume-4824d0dd-7f1c-4540-b320-02dcc517129f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008888212s Apr 21 14:22:10.120: INFO: Pod "downwardapi-volume-4824d0dd-7f1c-4540-b320-02dcc517129f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013425818s STEP: Saw pod success Apr 21 14:22:10.120: INFO: Pod "downwardapi-volume-4824d0dd-7f1c-4540-b320-02dcc517129f" satisfied condition "success or failure" Apr 21 14:22:10.124: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4824d0dd-7f1c-4540-b320-02dcc517129f container client-container: STEP: delete the pod Apr 21 14:22:10.155: INFO: Waiting for pod downwardapi-volume-4824d0dd-7f1c-4540-b320-02dcc517129f to disappear Apr 21 14:22:10.160: INFO: Pod downwardapi-volume-4824d0dd-7f1c-4540-b320-02dcc517129f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:22:10.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4543" for this suite. 
Apr 21 14:22:16.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:22:16.263: INFO: namespace projected-4543 deletion completed in 6.100150266s • [SLOW TEST:10.251 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:22:16.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Apr 21 14:22:16.342: INFO: Waiting up to 5m0s for pod "client-containers-76f5c4fc-4dee-48d3-915c-0aaa25ac063f" in namespace "containers-189" to be "success or failure" Apr 21 14:22:16.346: INFO: Pod "client-containers-76f5c4fc-4dee-48d3-915c-0aaa25ac063f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.822716ms Apr 21 14:22:18.932: INFO: Pod "client-containers-76f5c4fc-4dee-48d3-915c-0aaa25ac063f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.589651254s Apr 21 14:22:20.936: INFO: Pod "client-containers-76f5c4fc-4dee-48d3-915c-0aaa25ac063f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.593890288s STEP: Saw pod success Apr 21 14:22:20.936: INFO: Pod "client-containers-76f5c4fc-4dee-48d3-915c-0aaa25ac063f" satisfied condition "success or failure" Apr 21 14:22:20.939: INFO: Trying to get logs from node iruya-worker pod client-containers-76f5c4fc-4dee-48d3-915c-0aaa25ac063f container test-container: STEP: delete the pod Apr 21 14:22:21.216: INFO: Waiting for pod client-containers-76f5c4fc-4dee-48d3-915c-0aaa25ac063f to disappear Apr 21 14:22:21.223: INFO: Pod client-containers-76f5c4fc-4dee-48d3-915c-0aaa25ac063f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:22:21.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-189" for this suite. 
Apr 21 14:22:27.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:22:27.307: INFO: namespace containers-189 deletion completed in 6.081887529s • [SLOW TEST:11.043 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:22:27.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Apr 21 14:22:34.054: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-767 pod-service-account-8afde4b7-2f55-4ce9-a179-bed1a0065456 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 21 14:22:36.184: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-767 pod-service-account-8afde4b7-2f55-4ce9-a179-bed1a0065456 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Apr 21 14:22:36.406: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-767 
pod-service-account-8afde4b7-2f55-4ce9-a179-bed1a0065456 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:22:36.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-767" for this suite. Apr 21 14:22:42.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:22:42.711: INFO: namespace svcaccounts-767 deletion completed in 6.107021311s • [SLOW TEST:15.403 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:22:42.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:22:47.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7710" for this suite. Apr 21 14:23:09.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:23:09.950: INFO: namespace replication-controller-7710 deletion completed in 22.092254161s • [SLOW TEST:27.239 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:23:09.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 21 14:23:10.033: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1d4bfa79-584a-4c7f-a6ca-0207817bdb44" in namespace "downward-api-4488" to be "success or failure" Apr 21 14:23:10.089: INFO: Pod 
"downwardapi-volume-1d4bfa79-584a-4c7f-a6ca-0207817bdb44": Phase="Pending", Reason="", readiness=false. Elapsed: 56.132025ms Apr 21 14:23:12.093: INFO: Pod "downwardapi-volume-1d4bfa79-584a-4c7f-a6ca-0207817bdb44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060112553s Apr 21 14:23:14.098: INFO: Pod "downwardapi-volume-1d4bfa79-584a-4c7f-a6ca-0207817bdb44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064599544s STEP: Saw pod success Apr 21 14:23:14.098: INFO: Pod "downwardapi-volume-1d4bfa79-584a-4c7f-a6ca-0207817bdb44" satisfied condition "success or failure" Apr 21 14:23:14.101: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-1d4bfa79-584a-4c7f-a6ca-0207817bdb44 container client-container: STEP: delete the pod Apr 21 14:23:14.118: INFO: Waiting for pod downwardapi-volume-1d4bfa79-584a-4c7f-a6ca-0207817bdb44 to disappear Apr 21 14:23:14.122: INFO: Pod downwardapi-volume-1d4bfa79-584a-4c7f-a6ca-0207817bdb44 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:23:14.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4488" for this suite. 
Apr 21 14:23:20.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:23:20.246: INFO: namespace downward-api-4488 deletion completed in 6.121239147s • [SLOW TEST:10.296 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:23:20.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 21 14:23:20.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Apr 21 14:23:20.471: INFO: stderr: "" Apr 21 14:23:20.471: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-04-05T10:39:42Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", 
GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:23:20.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1854" for this suite. Apr 21 14:23:26.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:23:26.571: INFO: namespace kubectl-1854 deletion completed in 6.085850452s • [SLOW TEST:6.324 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:23:26.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Apr 21 14:23:26.650: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:23:26.655: INFO: Number of nodes with available pods: 0 Apr 21 14:23:26.655: INFO: Node iruya-worker is running more than one daemon pod Apr 21 14:23:27.660: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:23:27.664: INFO: Number of nodes with available pods: 0 Apr 21 14:23:27.664: INFO: Node iruya-worker is running more than one daemon pod Apr 21 14:23:28.661: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:23:28.664: INFO: Number of nodes with available pods: 0 Apr 21 14:23:28.664: INFO: Node iruya-worker is running more than one daemon pod Apr 21 14:23:29.661: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:23:29.664: INFO: Number of nodes with available pods: 0 Apr 21 14:23:29.664: INFO: Node iruya-worker is running more than one daemon pod Apr 21 14:23:30.660: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:23:30.664: INFO: Number of nodes with available pods: 1 Apr 21 14:23:30.664: INFO: Node iruya-worker is running more than one daemon pod Apr 21 14:23:31.661: INFO: DaemonSet pods can't tolerate node iruya-control-plane 
with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:23:31.665: INFO: Number of nodes with available pods: 2 Apr 21 14:23:31.665: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Apr 21 14:23:31.702: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 21 14:23:31.713: INFO: Number of nodes with available pods: 2 Apr 21 14:23:31.713: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4850, will wait for the garbage collector to delete the pods Apr 21 14:23:32.785: INFO: Deleting DaemonSet.extensions daemon-set took: 6.708292ms Apr 21 14:23:33.086: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.278678ms Apr 21 14:23:42.190: INFO: Number of nodes with available pods: 0 Apr 21 14:23:42.190: INFO: Number of running nodes: 0, number of available pods: 0 Apr 21 14:23:42.192: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4850/daemonsets","resourceVersion":"6655166"},"items":null} Apr 21 14:23:42.195: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4850/pods","resourceVersion":"6655166"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:23:42.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "daemonsets-4850" for this suite. Apr 21 14:23:48.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:23:48.287: INFO: namespace daemonsets-4850 deletion completed in 6.080391567s • [SLOW TEST:21.714 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:23:48.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-5333 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5333 to expose endpoints map[] Apr 21 14:23:48.413: INFO: Get endpoints failed (29.11941ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Apr 21 14:23:49.418: INFO: successfully validated that service endpoint-test2 in namespace services-5333 exposes endpoints map[] (1.034835626s elapsed) STEP: Creating pod pod1 in namespace services-5333 STEP: waiting up to 
3m0s for service endpoint-test2 in namespace services-5333 to expose endpoints map[pod1:[80]] Apr 21 14:23:52.493: INFO: successfully validated that service endpoint-test2 in namespace services-5333 exposes endpoints map[pod1:[80]] (3.068382322s elapsed) STEP: Creating pod pod2 in namespace services-5333 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5333 to expose endpoints map[pod1:[80] pod2:[80]] Apr 21 14:23:55.570: INFO: successfully validated that service endpoint-test2 in namespace services-5333 exposes endpoints map[pod1:[80] pod2:[80]] (3.073259826s elapsed) STEP: Deleting pod pod1 in namespace services-5333 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5333 to expose endpoints map[pod2:[80]] Apr 21 14:23:55.604: INFO: successfully validated that service endpoint-test2 in namespace services-5333 exposes endpoints map[pod2:[80]] (28.521396ms elapsed) STEP: Deleting pod pod2 in namespace services-5333 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5333 to expose endpoints map[] Apr 21 14:23:56.613: INFO: successfully validated that service endpoint-test2 in namespace services-5333 exposes endpoints map[] (1.005749168s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:23:56.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5333" for this suite. 
Apr 21 14:24:18.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:24:18.742: INFO: namespace services-5333 deletion completed in 22.097592519s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:30.455 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:24:18.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-3de45a07-1bc7-4956-851f-e8014ca51f63 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:24:24.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6906" for this suite. 
Apr 21 14:24:46.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:24:47.023: INFO: namespace configmap-6906 deletion completed in 22.104282407s • [SLOW TEST:28.280 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:24:47.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 21 14:24:47.096: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c5e8f477-8f6a-4f95-be2c-6c4c05fe92e2" in namespace "projected-5634" to be "success or failure" Apr 21 14:24:47.113: INFO: Pod "downwardapi-volume-c5e8f477-8f6a-4f95-be2c-6c4c05fe92e2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.646433ms Apr 21 14:24:49.117: INFO: Pod "downwardapi-volume-c5e8f477-8f6a-4f95-be2c-6c4c05fe92e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021374889s Apr 21 14:24:51.121: INFO: Pod "downwardapi-volume-c5e8f477-8f6a-4f95-be2c-6c4c05fe92e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025722291s STEP: Saw pod success Apr 21 14:24:51.122: INFO: Pod "downwardapi-volume-c5e8f477-8f6a-4f95-be2c-6c4c05fe92e2" satisfied condition "success or failure" Apr 21 14:24:51.125: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-c5e8f477-8f6a-4f95-be2c-6c4c05fe92e2 container client-container: STEP: delete the pod Apr 21 14:24:51.854: INFO: Waiting for pod downwardapi-volume-c5e8f477-8f6a-4f95-be2c-6c4c05fe92e2 to disappear Apr 21 14:24:51.915: INFO: Pod downwardapi-volume-c5e8f477-8f6a-4f95-be2c-6c4c05fe92e2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:24:51.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5634" for this suite. 
Apr 21 14:24:57.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:24:58.071: INFO: namespace projected-5634 deletion completed in 6.152760623s • [SLOW TEST:11.048 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:24:58.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 21 14:24:58.187: INFO: Waiting up to 5m0s for pod "pod-5836dd23-e2a2-4ad6-b564-bbfc1bde0b65" in namespace "emptydir-6161" to be "success or failure" Apr 21 14:24:58.230: INFO: Pod "pod-5836dd23-e2a2-4ad6-b564-bbfc1bde0b65": Phase="Pending", Reason="", readiness=false. Elapsed: 43.624079ms Apr 21 14:25:00.234: INFO: Pod "pod-5836dd23-e2a2-4ad6-b564-bbfc1bde0b65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047418968s Apr 21 14:25:02.238: INFO: Pod "pod-5836dd23-e2a2-4ad6-b564-bbfc1bde0b65": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.051172364s STEP: Saw pod success Apr 21 14:25:02.238: INFO: Pod "pod-5836dd23-e2a2-4ad6-b564-bbfc1bde0b65" satisfied condition "success or failure" Apr 21 14:25:02.240: INFO: Trying to get logs from node iruya-worker2 pod pod-5836dd23-e2a2-4ad6-b564-bbfc1bde0b65 container test-container: STEP: delete the pod Apr 21 14:25:02.257: INFO: Waiting for pod pod-5836dd23-e2a2-4ad6-b564-bbfc1bde0b65 to disappear Apr 21 14:25:02.262: INFO: Pod pod-5836dd23-e2a2-4ad6-b564-bbfc1bde0b65 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:25:02.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6161" for this suite. Apr 21 14:25:08.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:25:08.404: INFO: namespace emptydir-6161 deletion completed in 6.139085154s • [SLOW TEST:10.333 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:25:08.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace 
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Apr 21 14:25:08.486: INFO: Waiting up to 5m0s for pod "client-containers-cc976fac-55b3-4709-9da6-2599b63cc5c5" in namespace "containers-3331" to be "success or failure" Apr 21 14:25:08.488: INFO: Pod "client-containers-cc976fac-55b3-4709-9da6-2599b63cc5c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.530821ms Apr 21 14:25:10.695: INFO: Pod "client-containers-cc976fac-55b3-4709-9da6-2599b63cc5c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209887369s Apr 21 14:25:12.700: INFO: Pod "client-containers-cc976fac-55b3-4709-9da6-2599b63cc5c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.214482403s Apr 21 14:25:14.923: INFO: Pod "client-containers-cc976fac-55b3-4709-9da6-2599b63cc5c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.437392054s STEP: Saw pod success Apr 21 14:25:14.923: INFO: Pod "client-containers-cc976fac-55b3-4709-9da6-2599b63cc5c5" satisfied condition "success or failure" Apr 21 14:25:14.926: INFO: Trying to get logs from node iruya-worker2 pod client-containers-cc976fac-55b3-4709-9da6-2599b63cc5c5 container test-container: STEP: delete the pod Apr 21 14:25:15.555: INFO: Waiting for pod client-containers-cc976fac-55b3-4709-9da6-2599b63cc5c5 to disappear Apr 21 14:25:15.578: INFO: Pod client-containers-cc976fac-55b3-4709-9da6-2599b63cc5c5 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:25:15.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3331" for this suite. 
Apr 21 14:25:21.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:25:21.700: INFO: namespace containers-3331 deletion completed in 6.119171185s • [SLOW TEST:13.296 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:25:21.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 21 14:25:21.804: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1093b381-18b4-412e-b7fb-c3c0c1d1a525" in namespace "projected-3767" to be "success or failure" Apr 21 14:25:21.832: INFO: Pod "downwardapi-volume-1093b381-18b4-412e-b7fb-c3c0c1d1a525": Phase="Pending", Reason="", readiness=false. 
Elapsed: 28.355648ms Apr 21 14:25:23.837: INFO: Pod "downwardapi-volume-1093b381-18b4-412e-b7fb-c3c0c1d1a525": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033065349s Apr 21 14:25:25.841: INFO: Pod "downwardapi-volume-1093b381-18b4-412e-b7fb-c3c0c1d1a525": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037534184s STEP: Saw pod success Apr 21 14:25:25.842: INFO: Pod "downwardapi-volume-1093b381-18b4-412e-b7fb-c3c0c1d1a525" satisfied condition "success or failure" Apr 21 14:25:25.845: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-1093b381-18b4-412e-b7fb-c3c0c1d1a525 container client-container: STEP: delete the pod Apr 21 14:25:25.875: INFO: Waiting for pod downwardapi-volume-1093b381-18b4-412e-b7fb-c3c0c1d1a525 to disappear Apr 21 14:25:25.907: INFO: Pod downwardapi-volume-1093b381-18b4-412e-b7fb-c3c0c1d1a525 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:25:25.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3767" for this suite. 
Apr 21 14:25:31.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:25:32.005: INFO: namespace projected-3767 deletion completed in 6.093653835s • [SLOW TEST:10.303 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:25:32.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-08101478-242f-4bdc-b3fb-242b06135068 STEP: Creating a pod to test consume secrets Apr 21 14:25:32.091: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6b3d91b1-f01a-41f1-8033-5891f0b16389" in namespace "projected-3048" to be "success or failure" Apr 21 14:25:32.128: INFO: Pod "pod-projected-secrets-6b3d91b1-f01a-41f1-8033-5891f0b16389": Phase="Pending", Reason="", readiness=false. 
Elapsed: 36.625634ms Apr 21 14:25:34.132: INFO: Pod "pod-projected-secrets-6b3d91b1-f01a-41f1-8033-5891f0b16389": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040561273s Apr 21 14:25:36.135: INFO: Pod "pod-projected-secrets-6b3d91b1-f01a-41f1-8033-5891f0b16389": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043849609s STEP: Saw pod success Apr 21 14:25:36.135: INFO: Pod "pod-projected-secrets-6b3d91b1-f01a-41f1-8033-5891f0b16389" satisfied condition "success or failure" Apr 21 14:25:36.137: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-6b3d91b1-f01a-41f1-8033-5891f0b16389 container secret-volume-test: STEP: delete the pod Apr 21 14:25:36.165: INFO: Waiting for pod pod-projected-secrets-6b3d91b1-f01a-41f1-8033-5891f0b16389 to disappear Apr 21 14:25:36.191: INFO: Pod pod-projected-secrets-6b3d91b1-f01a-41f1-8033-5891f0b16389 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:25:36.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3048" for this suite. 
Apr 21 14:25:42.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:25:42.283: INFO: namespace projected-3048 deletion completed in 6.087098s • [SLOW TEST:10.275 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:25:42.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Apr 21 14:25:42.350: INFO: Waiting up to 5m0s for pod "var-expansion-de7b9d81-ae99-4273-8bec-cbd921cb2d28" in namespace "var-expansion-4169" to be "success or failure" Apr 21 14:25:42.359: INFO: Pod "var-expansion-de7b9d81-ae99-4273-8bec-cbd921cb2d28": Phase="Pending", Reason="", readiness=false. Elapsed: 8.294087ms Apr 21 14:25:44.363: INFO: Pod "var-expansion-de7b9d81-ae99-4273-8bec-cbd921cb2d28": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012196291s Apr 21 14:25:46.366: INFO: Pod "var-expansion-de7b9d81-ae99-4273-8bec-cbd921cb2d28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015721258s STEP: Saw pod success Apr 21 14:25:46.366: INFO: Pod "var-expansion-de7b9d81-ae99-4273-8bec-cbd921cb2d28" satisfied condition "success or failure" Apr 21 14:25:46.369: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-de7b9d81-ae99-4273-8bec-cbd921cb2d28 container dapi-container: STEP: delete the pod Apr 21 14:25:46.401: INFO: Waiting for pod var-expansion-de7b9d81-ae99-4273-8bec-cbd921cb2d28 to disappear Apr 21 14:25:46.405: INFO: Pod var-expansion-de7b9d81-ae99-4273-8bec-cbd921cb2d28 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:25:46.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4169" for this suite. Apr 21 14:25:52.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:25:52.499: INFO: namespace var-expansion-4169 deletion completed in 6.091374018s • [SLOW TEST:10.216 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: 
Creating a kubernetes client Apr 21 14:25:52.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 21 14:25:52.581: INFO: Waiting up to 5m0s for pod "pod-19393963-c5c6-498e-bdb2-43071a7ea055" in namespace "emptydir-332" to be "success or failure" Apr 21 14:25:52.597: INFO: Pod "pod-19393963-c5c6-498e-bdb2-43071a7ea055": Phase="Pending", Reason="", readiness=false. Elapsed: 15.52216ms Apr 21 14:25:54.601: INFO: Pod "pod-19393963-c5c6-498e-bdb2-43071a7ea055": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019736259s Apr 21 14:25:56.605: INFO: Pod "pod-19393963-c5c6-498e-bdb2-43071a7ea055": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024046443s STEP: Saw pod success Apr 21 14:25:56.606: INFO: Pod "pod-19393963-c5c6-498e-bdb2-43071a7ea055" satisfied condition "success or failure" Apr 21 14:25:56.609: INFO: Trying to get logs from node iruya-worker pod pod-19393963-c5c6-498e-bdb2-43071a7ea055 container test-container: STEP: delete the pod Apr 21 14:25:56.636: INFO: Waiting for pod pod-19393963-c5c6-498e-bdb2-43071a7ea055 to disappear Apr 21 14:25:56.678: INFO: Pod pod-19393963-c5c6-498e-bdb2-43071a7ea055 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:25:56.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-332" for this suite. 
Apr 21 14:26:02.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:26:02.768: INFO: namespace emptydir-332 deletion completed in 6.086093499s • [SLOW TEST:10.268 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:26:02.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 21 14:26:02.836: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 4.731363ms) Apr 21 14:26:02.840: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 4.327097ms) Apr 21 14:26:02.844: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.302471ms) Apr 21 14:26:02.847: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.2628ms) Apr 21 14:26:02.851: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.78446ms) Apr 21 14:26:02.855: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.653667ms) Apr 21 14:26:02.858: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.633034ms) Apr 21 14:26:02.862: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.700772ms) Apr 21 14:26:02.866: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.629555ms) Apr 21 14:26:02.888: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 22.489267ms) Apr 21 14:26:02.892: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.369902ms) Apr 21 14:26:02.896: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.940722ms) Apr 21 14:26:02.899: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.542215ms) Apr 21 14:26:02.902: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.219652ms) Apr 21 14:26:02.906: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.582672ms) Apr 21 14:26:02.909: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.151081ms) Apr 21 14:26:02.912: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.996721ms) Apr 21 14:26:02.916: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.358598ms) Apr 21 14:26:02.919: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.953531ms) Apr 21 14:26:02.922: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.522017ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:26:02.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2665" for this suite. Apr 21 14:26:08.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:26:09.016: INFO: namespace proxy-2665 deletion completed in 6.090009546s • [SLOW TEST:6.248 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:26:09.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 21 14:26:09.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6168' Apr 21 14:26:09.279: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 21 14:26:09.279: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Apr 21 14:26:09.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-6168' Apr 21 14:26:09.424: INFO: stderr: "" Apr 21 14:26:09.424: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:26:09.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6168" for this suite. 
Apr 21 14:26:15.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:26:15.516: INFO: namespace kubectl-6168 deletion completed in 6.08926058s • [SLOW TEST:6.500 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:26:15.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 21 14:26:20.633: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:26:21.695: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3770" for this suite. Apr 21 14:26:43.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:26:43.843: INFO: namespace replicaset-3770 deletion completed in 22.143136521s • [SLOW TEST:28.326 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:26:43.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 21 14:26:43.910: INFO: Waiting up to 5m0s for pod "downwardapi-volume-29e27ab6-e0a3-4c38-a532-4bbf4a45f69e" in namespace "projected-3589" to be "success or failure" Apr 21 14:26:43.915: INFO: Pod "downwardapi-volume-29e27ab6-e0a3-4c38-a532-4bbf4a45f69e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.509985ms Apr 21 14:26:45.934: INFO: Pod "downwardapi-volume-29e27ab6-e0a3-4c38-a532-4bbf4a45f69e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023923139s Apr 21 14:26:47.943: INFO: Pod "downwardapi-volume-29e27ab6-e0a3-4c38-a532-4bbf4a45f69e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033537476s STEP: Saw pod success Apr 21 14:26:47.943: INFO: Pod "downwardapi-volume-29e27ab6-e0a3-4c38-a532-4bbf4a45f69e" satisfied condition "success or failure" Apr 21 14:26:47.947: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-29e27ab6-e0a3-4c38-a532-4bbf4a45f69e container client-container: STEP: delete the pod Apr 21 14:26:47.965: INFO: Waiting for pod downwardapi-volume-29e27ab6-e0a3-4c38-a532-4bbf4a45f69e to disappear Apr 21 14:26:47.969: INFO: Pod downwardapi-volume-29e27ab6-e0a3-4c38-a532-4bbf4a45f69e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:26:47.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3589" for this suite. 
Apr 21 14:26:53.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:26:54.071: INFO: namespace projected-3589 deletion completed in 6.098611341s • [SLOW TEST:10.228 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:26:54.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Apr 21 14:26:54.146: INFO: Waiting up to 5m0s for pod "var-expansion-c9fd35b9-14ef-4503-9b1c-418948869742" in namespace "var-expansion-4557" to be "success or failure" Apr 21 14:26:54.150: INFO: Pod "var-expansion-c9fd35b9-14ef-4503-9b1c-418948869742": Phase="Pending", Reason="", readiness=false. Elapsed: 4.390319ms Apr 21 14:26:56.157: INFO: Pod "var-expansion-c9fd35b9-14ef-4503-9b1c-418948869742": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.011397121s Apr 21 14:26:58.161: INFO: Pod "var-expansion-c9fd35b9-14ef-4503-9b1c-418948869742": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015469409s STEP: Saw pod success Apr 21 14:26:58.161: INFO: Pod "var-expansion-c9fd35b9-14ef-4503-9b1c-418948869742" satisfied condition "success or failure" Apr 21 14:26:58.164: INFO: Trying to get logs from node iruya-worker pod var-expansion-c9fd35b9-14ef-4503-9b1c-418948869742 container dapi-container: STEP: delete the pod Apr 21 14:26:58.197: INFO: Waiting for pod var-expansion-c9fd35b9-14ef-4503-9b1c-418948869742 to disappear Apr 21 14:26:58.217: INFO: Pod var-expansion-c9fd35b9-14ef-4503-9b1c-418948869742 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:26:58.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4557" for this suite. Apr 21 14:27:04.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:27:04.319: INFO: namespace var-expansion-4557 deletion completed in 6.098227063s • [SLOW TEST:10.248 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:27:04.320: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-600/configmap-test-b1218c68-2dc3-4900-b41c-83edf918bcb6 STEP: Creating a pod to test consume configMaps Apr 21 14:27:04.392: INFO: Waiting up to 5m0s for pod "pod-configmaps-8fb4987e-a5f6-4954-9d50-a5a793027638" in namespace "configmap-600" to be "success or failure" Apr 21 14:27:04.402: INFO: Pod "pod-configmaps-8fb4987e-a5f6-4954-9d50-a5a793027638": Phase="Pending", Reason="", readiness=false. Elapsed: 9.737586ms Apr 21 14:27:06.410: INFO: Pod "pod-configmaps-8fb4987e-a5f6-4954-9d50-a5a793027638": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017405589s Apr 21 14:27:08.414: INFO: Pod "pod-configmaps-8fb4987e-a5f6-4954-9d50-a5a793027638": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021776644s STEP: Saw pod success Apr 21 14:27:08.414: INFO: Pod "pod-configmaps-8fb4987e-a5f6-4954-9d50-a5a793027638" satisfied condition "success or failure" Apr 21 14:27:08.417: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-8fb4987e-a5f6-4954-9d50-a5a793027638 container env-test: STEP: delete the pod Apr 21 14:27:08.439: INFO: Waiting for pod pod-configmaps-8fb4987e-a5f6-4954-9d50-a5a793027638 to disappear Apr 21 14:27:08.444: INFO: Pod pod-configmaps-8fb4987e-a5f6-4954-9d50-a5a793027638 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:27:08.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-600" for this suite. 
Apr 21 14:27:14.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:27:14.589: INFO: namespace configmap-600 deletion completed in 6.140809235s • [SLOW TEST:10.269 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:27:14.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-e67f1327-4467-4fb9-966d-f356ccf9417e STEP: Creating a pod to test consume configMaps Apr 21 14:27:14.660: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6d85fc41-00a1-4052-9d80-1d742e90b058" in namespace "projected-1768" to be "success or failure" Apr 21 14:27:14.709: INFO: Pod "pod-projected-configmaps-6d85fc41-00a1-4052-9d80-1d742e90b058": Phase="Pending", Reason="", readiness=false. Elapsed: 48.767146ms Apr 21 14:27:16.712: INFO: Pod "pod-projected-configmaps-6d85fc41-00a1-4052-9d80-1d742e90b058": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.051619066s Apr 21 14:27:18.716: INFO: Pod "pod-projected-configmaps-6d85fc41-00a1-4052-9d80-1d742e90b058": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055599025s STEP: Saw pod success Apr 21 14:27:18.716: INFO: Pod "pod-projected-configmaps-6d85fc41-00a1-4052-9d80-1d742e90b058" satisfied condition "success or failure" Apr 21 14:27:18.719: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-6d85fc41-00a1-4052-9d80-1d742e90b058 container projected-configmap-volume-test: STEP: delete the pod Apr 21 14:27:18.731: INFO: Waiting for pod pod-projected-configmaps-6d85fc41-00a1-4052-9d80-1d742e90b058 to disappear Apr 21 14:27:18.750: INFO: Pod pod-projected-configmaps-6d85fc41-00a1-4052-9d80-1d742e90b058 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:27:18.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1768" for this suite. 
Apr 21 14:27:24.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:27:24.840: INFO: namespace projected-1768 deletion completed in 6.087166773s • [SLOW TEST:10.251 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:27:24.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 21 14:27:24.916: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 21 14:27:29.921: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 21 14:27:29.921: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 21 14:27:29.966: INFO: 
Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-408,SelfLink:/apis/apps/v1/namespaces/deployment-408/deployments/test-cleanup-deployment,UID:7ece2fbc-7cc0-4cb0-948f-6eb3d1255bff,ResourceVersion:6656058,Generation:1,CreationTimestamp:2020-04-21 14:27:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Apr 21 14:27:29.972: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-408,SelfLink:/apis/apps/v1/namespaces/deployment-408/replicasets/test-cleanup-deployment-55bbcbc84c,UID:c0d453c6-41e4-4785-8f7c-6dc473530e55,ResourceVersion:6656060,Generation:1,CreationTimestamp:2020-04-21 14:27:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 
7ece2fbc-7cc0-4cb0-948f-6eb3d1255bff 0xc00343d7c7 0xc00343d7c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 21 14:27:29.973: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 21 14:27:29.973: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-408,SelfLink:/apis/apps/v1/namespaces/deployment-408/replicasets/test-cleanup-controller,UID:e83af415-db09-4205-ac1b-7471f566e206,ResourceVersion:6656059,Generation:1,CreationTimestamp:2020-04-21 14:27:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 7ece2fbc-7cc0-4cb0-948f-6eb3d1255bff 0xc00343d6f7 0xc00343d6f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 21 14:27:30.008: INFO: Pod "test-cleanup-controller-rxmjx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-rxmjx,GenerateName:test-cleanup-controller-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/test-cleanup-controller-rxmjx,UID:b3f152ef-820a-424c-be92-bc326496458a,ResourceVersion:6656055,Generation:0,CreationTimestamp:2020-04-21 14:27:24 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller e83af415-db09-4205-ac1b-7471f566e206 0xc003868087 0xc003868088}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hptw2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hptw2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hptw2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003868100} {node.kubernetes.io/unreachable Exists NoExecute 0xc003868120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:27:24 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:27:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:27:28 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:27:24 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.153,StartTime:2020-04-21 14:27:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-21 14:27:27 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1b6a25aa104ba7f7ceb79c94bb94b991208d6b66e9a7b4816d6c2b6418056da2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 21 14:27:30.008: INFO: Pod "test-cleanup-deployment-55bbcbc84c-jbdgz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-jbdgz,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/test-cleanup-deployment-55bbcbc84c-jbdgz,UID:8e1d3048-1a79-4eee-8df4-883f250811d1,ResourceVersion:6656066,Generation:0,CreationTimestamp:2020-04-21 14:27:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c c0d453c6-41e4-4785-8f7c-6dc473530e55 0xc003868207 0xc003868208}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hptw2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hptw2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-hptw2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003868280} {node.kubernetes.io/unreachable Exists NoExecute 0xc0038682a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-21 14:27:29 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:27:30.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-408" for this suite. 
Apr 21 14:27:36.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:27:36.205: INFO: namespace deployment-408 deletion completed in 6.147284763s • [SLOW TEST:11.365 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:27:36.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-bda2909a-deb3-4e18-87c5-45c0c7383175 STEP: Creating a pod to test consume configMaps Apr 21 14:27:36.279: INFO: Waiting up to 5m0s for pod "pod-configmaps-c9550719-d948-4af1-a1e7-8637ab5c20a8" in namespace "configmap-4015" to be "success or failure" Apr 21 14:27:36.315: INFO: Pod "pod-configmaps-c9550719-d948-4af1-a1e7-8637ab5c20a8": Phase="Pending", Reason="", readiness=false. Elapsed: 35.844605ms Apr 21 14:27:38.320: INFO: Pod "pod-configmaps-c9550719-d948-4af1-a1e7-8637ab5c20a8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.041068219s Apr 21 14:27:40.324: INFO: Pod "pod-configmaps-c9550719-d948-4af1-a1e7-8637ab5c20a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045486409s STEP: Saw pod success Apr 21 14:27:40.324: INFO: Pod "pod-configmaps-c9550719-d948-4af1-a1e7-8637ab5c20a8" satisfied condition "success or failure" Apr 21 14:27:40.327: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-c9550719-d948-4af1-a1e7-8637ab5c20a8 container configmap-volume-test: STEP: delete the pod Apr 21 14:27:40.344: INFO: Waiting for pod pod-configmaps-c9550719-d948-4af1-a1e7-8637ab5c20a8 to disappear Apr 21 14:27:40.348: INFO: Pod pod-configmaps-c9550719-d948-4af1-a1e7-8637ab5c20a8 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:27:40.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4015" for this suite. Apr 21 14:27:46.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:27:46.519: INFO: namespace configmap-4015 deletion completed in 6.168710221s • [SLOW TEST:10.314 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 
14:27:46.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 21 14:27:51.096: INFO: Successfully updated pod "labelsupdated272439d-8aef-4c6d-9394-372e2cb24d31" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:27:53.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8504" for this suite. Apr 21 14:28:15.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:28:15.208: INFO: namespace downward-api-8504 deletion completed in 22.089563806s • [SLOW TEST:28.688 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:28:15.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-x2mc4 in namespace proxy-7186 I0421 14:28:15.374174 6 runners.go:180] Created replication controller with name: proxy-service-x2mc4, namespace: proxy-7186, replica count: 1 I0421 14:28:16.424624 6 runners.go:180] proxy-service-x2mc4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0421 14:28:17.424857 6 runners.go:180] proxy-service-x2mc4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0421 14:28:18.425205 6 runners.go:180] proxy-service-x2mc4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0421 14:28:19.425447 6 runners.go:180] proxy-service-x2mc4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0421 14:28:20.425667 6 runners.go:180] proxy-service-x2mc4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0421 14:28:21.425891 6 runners.go:180] proxy-service-x2mc4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0421 14:28:22.426149 6 runners.go:180] proxy-service-x2mc4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0421 14:28:23.426432 6 runners.go:180] proxy-service-x2mc4 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 21 14:28:23.430: INFO: setup took 8.120478458s, 
starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 21 14:28:23.437: INFO: (0) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 7.138685ms) Apr 21 14:28:23.437: INFO: (0) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 7.140052ms) Apr 21 14:28:23.437: INFO: (0) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:1080/proxy/: test<... (200; 7.13974ms) Apr 21 14:28:23.437: INFO: (0) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 7.431812ms) Apr 21 14:28:23.438: INFO: (0) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 7.275051ms) Apr 21 14:28:23.438: INFO: (0) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx/proxy/: test (200; 7.461705ms) Apr 21 14:28:23.439: INFO: (0) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:1080/proxy/: ... (200; 8.427223ms) Apr 21 14:28:23.439: INFO: (0) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname1/proxy/: foo (200; 8.688545ms) Apr 21 14:28:23.439: INFO: (0) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname2/proxy/: bar (200; 8.723165ms) Apr 21 14:28:23.439: INFO: (0) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname2/proxy/: bar (200; 9.015692ms) Apr 21 14:28:23.439: INFO: (0) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname1/proxy/: foo (200; 8.966598ms) Apr 21 14:28:23.446: INFO: (0) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:462/proxy/: tls qux (200; 15.622773ms) Apr 21 14:28:23.446: INFO: (0) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:460/proxy/: tls baz (200; 15.79145ms) Apr 21 14:28:23.447: INFO: (0) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:443/proxy/: test (200; 3.445048ms) Apr 21 14:28:23.452: INFO: (1) 
/api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:443/proxy/: test<... (200; 4.28787ms) Apr 21 14:28:23.453: INFO: (1) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname2/proxy/: bar (200; 4.841839ms) Apr 21 14:28:23.453: INFO: (1) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:462/proxy/: tls qux (200; 5.272352ms) Apr 21 14:28:23.453: INFO: (1) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:1080/proxy/: ... (200; 5.242463ms) Apr 21 14:28:23.453: INFO: (1) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname2/proxy/: bar (200; 5.336336ms) Apr 21 14:28:23.453: INFO: (1) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname1/proxy/: tls baz (200; 5.353439ms) Apr 21 14:28:23.453: INFO: (1) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname1/proxy/: foo (200; 5.323651ms) Apr 21 14:28:23.453: INFO: (1) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname2/proxy/: tls qux (200; 5.382814ms) Apr 21 14:28:23.453: INFO: (1) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 5.340239ms) Apr 21 14:28:23.453: INFO: (1) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:460/proxy/: tls baz (200; 5.340227ms) Apr 21 14:28:23.456: INFO: (2) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 3.06471ms) Apr 21 14:28:23.457: INFO: (2) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:1080/proxy/: ... (200; 3.563678ms) Apr 21 14:28:23.457: INFO: (2) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx/proxy/: test (200; 3.523417ms) Apr 21 14:28:23.457: INFO: (2) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:443/proxy/: test<... 
(200; 5.584525ms) Apr 21 14:28:23.459: INFO: (2) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname1/proxy/: foo (200; 5.588211ms) Apr 21 14:28:23.459: INFO: (2) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname2/proxy/: bar (200; 5.731341ms) Apr 21 14:28:23.459: INFO: (2) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname2/proxy/: tls qux (200; 5.756603ms) Apr 21 14:28:23.459: INFO: (2) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 5.804051ms) Apr 21 14:28:23.462: INFO: (3) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:460/proxy/: tls baz (200; 2.606146ms) Apr 21 14:28:23.462: INFO: (3) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:443/proxy/: test (200; 4.337916ms) Apr 21 14:28:23.464: INFO: (3) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:1080/proxy/: test<... (200; 4.627745ms) Apr 21 14:28:23.464: INFO: (3) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:1080/proxy/: ... 
(200; 4.610944ms) Apr 21 14:28:23.464: INFO: (3) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 4.719845ms) Apr 21 14:28:23.465: INFO: (3) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname2/proxy/: bar (200; 5.761903ms) Apr 21 14:28:23.465: INFO: (3) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname2/proxy/: bar (200; 5.73829ms) Apr 21 14:28:23.465: INFO: (3) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:462/proxy/: tls qux (200; 6.014852ms) Apr 21 14:28:23.465: INFO: (3) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname1/proxy/: foo (200; 5.970426ms) Apr 21 14:28:23.465: INFO: (3) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname1/proxy/: tls baz (200; 6.112856ms) Apr 21 14:28:23.465: INFO: (3) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname1/proxy/: foo (200; 6.066552ms) Apr 21 14:28:23.465: INFO: (3) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname2/proxy/: tls qux (200; 6.15022ms) Apr 21 14:28:23.471: INFO: (4) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname2/proxy/: bar (200; 4.950486ms) Apr 21 14:28:23.471: INFO: (4) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname1/proxy/: foo (200; 4.989021ms) Apr 21 14:28:23.471: INFO: (4) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 5.015582ms) Apr 21 14:28:23.471: INFO: (4) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname1/proxy/: tls baz (200; 5.108462ms) Apr 21 14:28:23.471: INFO: (4) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname2/proxy/: bar (200; 5.357052ms) Apr 21 14:28:23.471: INFO: (4) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname1/proxy/: foo (200; 5.495728ms) Apr 21 14:28:23.471: INFO: (4) 
/api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:1080/proxy/: ... (200; 5.583633ms) Apr 21 14:28:23.471: INFO: (4) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:1080/proxy/: test<... (200; 5.736445ms) Apr 21 14:28:23.471: INFO: (4) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 5.772004ms) Apr 21 14:28:23.471: INFO: (4) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:443/proxy/: test (200; 6.198967ms) Apr 21 14:28:23.473: INFO: (4) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname2/proxy/: tls qux (200; 7.041218ms) Apr 21 14:28:23.473: INFO: (4) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:462/proxy/: tls qux (200; 7.086366ms) Apr 21 14:28:23.473: INFO: (4) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 7.254713ms) Apr 21 14:28:23.473: INFO: (4) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:460/proxy/: tls baz (200; 7.588817ms) Apr 21 14:28:23.477: INFO: (5) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 3.318207ms) Apr 21 14:28:23.477: INFO: (5) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx/proxy/: test (200; 3.841642ms) Apr 21 14:28:23.477: INFO: (5) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:462/proxy/: tls qux (200; 4.162383ms) Apr 21 14:28:23.477: INFO: (5) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:460/proxy/: tls baz (200; 4.206734ms) Apr 21 14:28:23.477: INFO: (5) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:1080/proxy/: ... 
(200; 4.236405ms) Apr 21 14:28:23.477: INFO: (5) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 4.286872ms) Apr 21 14:28:23.477: INFO: (5) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 4.233808ms) Apr 21 14:28:23.478: INFO: (5) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 4.322921ms) Apr 21 14:28:23.478: INFO: (5) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:443/proxy/: test<... (200; 4.262558ms) Apr 21 14:28:23.478: INFO: (5) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname2/proxy/: bar (200; 5.229778ms) Apr 21 14:28:23.479: INFO: (5) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname1/proxy/: tls baz (200; 5.250562ms) Apr 21 14:28:23.479: INFO: (5) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname1/proxy/: foo (200; 5.27362ms) Apr 21 14:28:23.479: INFO: (5) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname2/proxy/: bar (200; 5.387729ms) Apr 21 14:28:23.479: INFO: (5) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname1/proxy/: foo (200; 5.407027ms) Apr 21 14:28:23.479: INFO: (5) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname2/proxy/: tls qux (200; 5.500759ms) Apr 21 14:28:23.484: INFO: (6) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname2/proxy/: bar (200; 5.09322ms) Apr 21 14:28:23.484: INFO: (6) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:460/proxy/: tls baz (200; 5.549222ms) Apr 21 14:28:23.485: INFO: (6) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 5.74969ms) Apr 21 14:28:23.485: INFO: (6) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 5.758986ms) Apr 21 14:28:23.485: INFO: (6) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:1080/proxy/: test<... 
(200; 5.773848ms) Apr 21 14:28:23.485: INFO: (6) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 5.88463ms) Apr 21 14:28:23.485: INFO: (6) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 6.112333ms) Apr 21 14:28:23.485: INFO: (6) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:462/proxy/: tls qux (200; 6.103317ms) Apr 21 14:28:23.485: INFO: (6) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx/proxy/: test (200; 6.205263ms) Apr 21 14:28:23.485: INFO: (6) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:443/proxy/: ... (200; 6.374256ms) Apr 21 14:28:23.487: INFO: (6) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname1/proxy/: foo (200; 8.519719ms) Apr 21 14:28:23.487: INFO: (6) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname1/proxy/: foo (200; 8.492988ms) Apr 21 14:28:23.487: INFO: (6) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname1/proxy/: tls baz (200; 8.708207ms) Apr 21 14:28:23.487: INFO: (6) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname2/proxy/: tls qux (200; 8.694336ms) Apr 21 14:28:23.488: INFO: (6) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname2/proxy/: bar (200; 8.916159ms) Apr 21 14:28:23.490: INFO: (7) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:460/proxy/: tls baz (200; 2.330308ms) Apr 21 14:28:23.492: INFO: (7) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 4.08642ms) Apr 21 14:28:23.492: INFO: (7) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname2/proxy/: bar (200; 4.078083ms) Apr 21 14:28:23.492: INFO: (7) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname1/proxy/: tls baz (200; 4.085096ms) Apr 21 14:28:23.492: INFO: (7) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname1/proxy/: foo 
(200; 4.06701ms) Apr 21 14:28:23.492: INFO: (7) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 4.053242ms) Apr 21 14:28:23.492: INFO: (7) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:1080/proxy/: ... (200; 4.189043ms) Apr 21 14:28:23.492: INFO: (7) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:1080/proxy/: test<... (200; 4.591175ms) Apr 21 14:28:23.493: INFO: (7) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname1/proxy/: foo (200; 4.699696ms) Apr 21 14:28:23.493: INFO: (7) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname2/proxy/: bar (200; 4.802089ms) Apr 21 14:28:23.493: INFO: (7) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 4.889752ms) Apr 21 14:28:23.493: INFO: (7) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:443/proxy/: test (200; 4.872982ms) Apr 21 14:28:23.493: INFO: (7) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname2/proxy/: tls qux (200; 4.975555ms) Apr 21 14:28:23.493: INFO: (7) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:462/proxy/: tls qux (200; 4.984973ms) Apr 21 14:28:23.493: INFO: (7) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 5.17062ms) Apr 21 14:28:23.495: INFO: (8) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 2.010406ms) Apr 21 14:28:23.498: INFO: (8) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx/proxy/: test (200; 4.854702ms) Apr 21 14:28:23.498: INFO: (8) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 4.872344ms) Apr 21 14:28:23.498: INFO: (8) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:443/proxy/: ... 
(200; 5.261941ms) Apr 21 14:28:23.499: INFO: (8) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:462/proxy/: tls qux (200; 5.542922ms) Apr 21 14:28:23.499: INFO: (8) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 5.608248ms) Apr 21 14:28:23.499: INFO: (8) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:1080/proxy/: test<... (200; 5.629207ms) Apr 21 14:28:23.499: INFO: (8) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname1/proxy/: foo (200; 5.642467ms) Apr 21 14:28:23.499: INFO: (8) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname1/proxy/: tls baz (200; 5.762176ms) Apr 21 14:28:23.499: INFO: (8) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:460/proxy/: tls baz (200; 5.832156ms) Apr 21 14:28:23.499: INFO: (8) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname2/proxy/: bar (200; 5.855323ms) Apr 21 14:28:23.499: INFO: (8) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname1/proxy/: foo (200; 5.800929ms) Apr 21 14:28:23.500: INFO: (8) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname2/proxy/: tls qux (200; 6.592499ms) Apr 21 14:28:23.504: INFO: (9) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:460/proxy/: tls baz (200; 4.16158ms) Apr 21 14:28:23.504: INFO: (9) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:462/proxy/: tls qux (200; 4.16202ms) Apr 21 14:28:23.504: INFO: (9) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 4.601922ms) Apr 21 14:28:23.505: INFO: (9) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:1080/proxy/: test<... 
(200; 5.471853ms) Apr 21 14:28:23.506: INFO: (9) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 5.857212ms) Apr 21 14:28:23.506: INFO: (9) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname1/proxy/: tls baz (200; 5.960997ms) Apr 21 14:28:23.506: INFO: (9) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx/proxy/: test (200; 5.899448ms) Apr 21 14:28:23.506: INFO: (9) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname1/proxy/: foo (200; 5.963687ms) Apr 21 14:28:23.506: INFO: (9) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 5.980463ms) Apr 21 14:28:23.506: INFO: (9) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname1/proxy/: foo (200; 6.190919ms) Apr 21 14:28:23.506: INFO: (9) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname2/proxy/: bar (200; 6.175051ms) Apr 21 14:28:23.506: INFO: (9) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname2/proxy/: bar (200; 6.192145ms) Apr 21 14:28:23.506: INFO: (9) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 6.293737ms) Apr 21 14:28:23.506: INFO: (9) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:443/proxy/: ... (200; 6.373358ms) Apr 21 14:28:23.509: INFO: (10) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:1080/proxy/: test<... (200; 2.961358ms) Apr 21 14:28:23.509: INFO: (10) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:443/proxy/: ... 
(200; 3.162733ms) Apr 21 14:28:23.510: INFO: (10) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:462/proxy/: tls qux (200; 3.529164ms) Apr 21 14:28:23.510: INFO: (10) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 3.583435ms) Apr 21 14:28:23.510: INFO: (10) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 3.734301ms) Apr 21 14:28:23.510: INFO: (10) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname2/proxy/: bar (200; 3.894035ms) Apr 21 14:28:23.511: INFO: (10) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname2/proxy/: bar (200; 4.277534ms) Apr 21 14:28:23.511: INFO: (10) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname1/proxy/: foo (200; 4.273322ms) Apr 21 14:28:23.511: INFO: (10) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname1/proxy/: tls baz (200; 4.281119ms) Apr 21 14:28:23.511: INFO: (10) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 4.855662ms) Apr 21 14:28:23.511: INFO: (10) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx/proxy/: test (200; 5.090447ms) Apr 21 14:28:23.512: INFO: (10) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 5.247439ms) Apr 21 14:28:23.512: INFO: (10) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname2/proxy/: tls qux (200; 5.32165ms) Apr 21 14:28:23.512: INFO: (10) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:460/proxy/: tls baz (200; 5.297326ms) Apr 21 14:28:23.512: INFO: (10) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname1/proxy/: foo (200; 5.348648ms) Apr 21 14:28:23.515: INFO: (11) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:462/proxy/: tls qux (200; 2.886696ms) Apr 21 14:28:23.515: INFO: (11) 
/api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 2.846504ms) Apr 21 14:28:23.515: INFO: (11) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:443/proxy/: test (200; 2.847485ms) Apr 21 14:28:23.516: INFO: (11) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 4.118854ms) Apr 21 14:28:23.516: INFO: (11) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:1080/proxy/: ... (200; 4.151991ms) Apr 21 14:28:23.516: INFO: (11) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 4.116212ms) Apr 21 14:28:23.516: INFO: (11) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:1080/proxy/: test<... (200; 4.311011ms) Apr 21 14:28:23.516: INFO: (11) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:460/proxy/: tls baz (200; 4.373983ms) Apr 21 14:28:23.516: INFO: (11) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 4.421003ms) Apr 21 14:28:23.518: INFO: (11) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname2/proxy/: tls qux (200; 6.334695ms) Apr 21 14:28:23.518: INFO: (11) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname1/proxy/: foo (200; 6.295138ms) Apr 21 14:28:23.518: INFO: (11) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname2/proxy/: bar (200; 6.463163ms) Apr 21 14:28:23.518: INFO: (11) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname2/proxy/: bar (200; 6.489908ms) Apr 21 14:28:23.518: INFO: (11) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname1/proxy/: tls baz (200; 6.507619ms) Apr 21 14:28:23.518: INFO: (11) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname1/proxy/: foo (200; 6.581708ms) Apr 21 14:28:23.523: INFO: (12) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 4.639253ms) Apr 21 14:28:23.523: 
INFO: (12) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:1080/proxy/: test<... (200; 4.654146ms) Apr 21 14:28:23.523: INFO: (12) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 4.793995ms) Apr 21 14:28:23.523: INFO: (12) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 4.799338ms) Apr 21 14:28:23.523: INFO: (12) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 4.949218ms) Apr 21 14:28:23.523: INFO: (12) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:460/proxy/: tls baz (200; 4.972464ms) Apr 21 14:28:23.523: INFO: (12) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:462/proxy/: tls qux (200; 5.092252ms) Apr 21 14:28:23.523: INFO: (12) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:443/proxy/: ... (200; 5.008767ms) Apr 21 14:28:23.523: INFO: (12) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx/proxy/: test (200; 5.056644ms) Apr 21 14:28:23.523: INFO: (12) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname1/proxy/: foo (200; 5.121805ms) Apr 21 14:28:23.532: INFO: (12) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname1/proxy/: foo (200; 13.722829ms) Apr 21 14:28:23.532: INFO: (12) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname2/proxy/: bar (200; 13.847602ms) Apr 21 14:28:23.532: INFO: (12) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname2/proxy/: bar (200; 13.929944ms) Apr 21 14:28:23.532: INFO: (12) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname2/proxy/: tls qux (200; 13.942201ms) Apr 21 14:28:23.532: INFO: (12) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname1/proxy/: tls baz (200; 13.849435ms) Apr 21 14:28:23.536: INFO: (13) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:443/proxy/: ... 
(200; 4.28699ms) Apr 21 14:28:23.537: INFO: (13) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname2/proxy/: tls qux (200; 4.637211ms) Apr 21 14:28:23.537: INFO: (13) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:460/proxy/: tls baz (200; 4.469746ms) Apr 21 14:28:23.537: INFO: (13) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:462/proxy/: tls qux (200; 4.550138ms) Apr 21 14:28:23.537: INFO: (13) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx/proxy/: test (200; 4.542695ms) Apr 21 14:28:23.537: INFO: (13) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:1080/proxy/: test<... (200; 4.54434ms) Apr 21 14:28:23.537: INFO: (13) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 4.52415ms) Apr 21 14:28:23.537: INFO: (13) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 4.69697ms) Apr 21 14:28:23.537: INFO: (13) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname2/proxy/: bar (200; 4.841263ms) Apr 21 14:28:23.537: INFO: (13) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname2/proxy/: bar (200; 4.912673ms) Apr 21 14:28:23.537: INFO: (13) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname1/proxy/: foo (200; 4.986704ms) Apr 21 14:28:23.538: INFO: (13) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname1/proxy/: foo (200; 5.455821ms) Apr 21 14:28:23.538: INFO: (13) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname1/proxy/: tls baz (200; 5.551718ms) Apr 21 14:28:23.542: INFO: (14) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx/proxy/: test (200; 3.704582ms) Apr 21 14:28:23.542: INFO: (14) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:1080/proxy/: test<... 
(200; 3.808705ms) Apr 21 14:28:23.542: INFO: (14) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 3.79111ms) Apr 21 14:28:23.542: INFO: (14) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname2/proxy/: bar (200; 3.811536ms) Apr 21 14:28:23.542: INFO: (14) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:462/proxy/: tls qux (200; 3.779588ms) Apr 21 14:28:23.542: INFO: (14) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 3.850005ms) Apr 21 14:28:23.542: INFO: (14) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 4.099879ms) Apr 21 14:28:23.542: INFO: (14) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:1080/proxy/: ... (200; 4.244202ms) Apr 21 14:28:23.542: INFO: (14) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 4.3321ms) Apr 21 14:28:23.542: INFO: (14) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname1/proxy/: foo (200; 4.32481ms) Apr 21 14:28:23.543: INFO: (14) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:443/proxy/: ... 
(200; 3.515171ms) Apr 21 14:28:23.547: INFO: (15) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx/proxy/: test (200; 3.842013ms) Apr 21 14:28:23.547: INFO: (15) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:462/proxy/: tls qux (200; 3.747181ms) Apr 21 14:28:23.547: INFO: (15) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 3.763497ms) Apr 21 14:28:23.547: INFO: (15) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:460/proxy/: tls baz (200; 3.945263ms) Apr 21 14:28:23.547: INFO: (15) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 3.887509ms) Apr 21 14:28:23.547: INFO: (15) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 3.885519ms) Apr 21 14:28:23.547: INFO: (15) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:1080/proxy/: test<... (200; 3.947433ms) Apr 21 14:28:23.547: INFO: (15) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname2/proxy/: bar (200; 4.303439ms) Apr 21 14:28:23.547: INFO: (15) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname1/proxy/: foo (200; 4.370531ms) Apr 21 14:28:23.547: INFO: (15) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname2/proxy/: tls qux (200; 4.415751ms) Apr 21 14:28:23.547: INFO: (15) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname1/proxy/: tls baz (200; 4.367093ms) Apr 21 14:28:23.547: INFO: (15) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname2/proxy/: bar (200; 4.589612ms) Apr 21 14:28:23.548: INFO: (15) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname1/proxy/: foo (200; 4.914098ms) Apr 21 14:28:23.551: INFO: (16) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 2.784864ms) Apr 21 14:28:23.551: INFO: (16) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:1080/proxy/: 
test<... (200; 3.186896ms) Apr 21 14:28:23.551: INFO: (16) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:443/proxy/: test (200; 3.646207ms) Apr 21 14:28:23.552: INFO: (16) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 3.65209ms) Apr 21 14:28:23.552: INFO: (16) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 3.650031ms) Apr 21 14:28:23.552: INFO: (16) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:1080/proxy/: ... (200; 3.633441ms) Apr 21 14:28:23.552: INFO: (16) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname1/proxy/: tls baz (200; 3.751024ms) Apr 21 14:28:23.552: INFO: (16) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:462/proxy/: tls qux (200; 3.943951ms) Apr 21 14:28:23.552: INFO: (16) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname2/proxy/: bar (200; 4.265008ms) Apr 21 14:28:23.552: INFO: (16) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname1/proxy/: foo (200; 4.28613ms) Apr 21 14:28:23.552: INFO: (16) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname2/proxy/: tls qux (200; 4.429515ms) Apr 21 14:28:23.552: INFO: (16) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname2/proxy/: bar (200; 4.405185ms) Apr 21 14:28:23.552: INFO: (16) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname1/proxy/: foo (200; 4.383931ms) Apr 21 14:28:23.555: INFO: (17) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:462/proxy/: tls qux (200; 2.398879ms) Apr 21 14:28:23.555: INFO: (17) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:443/proxy/: ... 
(200; 4.404261ms) Apr 21 14:28:23.557: INFO: (17) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 4.579346ms) Apr 21 14:28:23.557: INFO: (17) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:460/proxy/: tls baz (200; 4.513933ms) Apr 21 14:28:23.557: INFO: (17) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 4.595115ms) Apr 21 14:28:23.557: INFO: (17) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname1/proxy/: foo (200; 4.674584ms) Apr 21 14:28:23.557: INFO: (17) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:1080/proxy/: test<... (200; 4.600889ms) Apr 21 14:28:23.557: INFO: (17) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx/proxy/: test (200; 4.710641ms) Apr 21 14:28:23.557: INFO: (17) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 4.693034ms) Apr 21 14:28:23.557: INFO: (17) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname1/proxy/: tls baz (200; 4.867426ms) Apr 21 14:28:23.560: INFO: (18) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 2.870572ms) Apr 21 14:28:23.560: INFO: (18) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx/proxy/: test (200; 3.095814ms) Apr 21 14:28:23.560: INFO: (18) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:1080/proxy/: test<... (200; 3.13968ms) Apr 21 14:28:23.560: INFO: (18) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:443/proxy/: ... 
(200; 3.205645ms) Apr 21 14:28:23.561: INFO: (18) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 3.260114ms) Apr 21 14:28:23.562: INFO: (18) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname2/proxy/: bar (200; 4.33598ms) Apr 21 14:28:23.562: INFO: (18) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname1/proxy/: foo (200; 4.473772ms) Apr 21 14:28:23.562: INFO: (18) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname1/proxy/: foo (200; 4.402286ms) Apr 21 14:28:23.562: INFO: (18) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname2/proxy/: tls qux (200; 4.658515ms) Apr 21 14:28:23.562: INFO: (18) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname1/proxy/: tls baz (200; 4.670576ms) Apr 21 14:28:23.562: INFO: (18) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname2/proxy/: bar (200; 4.536635ms) Apr 21 14:28:23.566: INFO: (19) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname1/proxy/: foo (200; 4.20245ms) Apr 21 14:28:23.566: INFO: (19) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname1/proxy/: tls baz (200; 4.378953ms) Apr 21 14:28:23.567: INFO: (19) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:443/proxy/: ... (200; 4.650077ms) Apr 21 14:28:23.567: INFO: (19) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 4.67621ms) Apr 21 14:28:23.567: INFO: (19) /api/v1/namespaces/proxy-7186/services/https:proxy-service-x2mc4:tlsportname2/proxy/: tls qux (200; 4.618846ms) Apr 21 14:28:23.567: INFO: (19) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:462/proxy/: tls qux (200; 4.724428ms) Apr 21 14:28:23.567: INFO: (19) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:1080/proxy/: test<... 
(200; 4.930863ms) Apr 21 14:28:23.567: INFO: (19) /api/v1/namespaces/proxy-7186/services/http:proxy-service-x2mc4:portname2/proxy/: bar (200; 4.821669ms) Apr 21 14:28:23.567: INFO: (19) /api/v1/namespaces/proxy-7186/pods/http:proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 4.876456ms) Apr 21 14:28:23.567: INFO: (19) /api/v1/namespaces/proxy-7186/pods/https:proxy-service-x2mc4-gdlwx:460/proxy/: tls baz (200; 4.942446ms) Apr 21 14:28:23.567: INFO: (19) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:160/proxy/: foo (200; 4.888386ms) Apr 21 14:28:23.567: INFO: (19) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx/proxy/: test (200; 4.925785ms) Apr 21 14:28:23.567: INFO: (19) /api/v1/namespaces/proxy-7186/pods/proxy-service-x2mc4-gdlwx:162/proxy/: bar (200; 4.998211ms) Apr 21 14:28:23.567: INFO: (19) /api/v1/namespaces/proxy-7186/services/proxy-service-x2mc4:portname1/proxy/: foo (200; 5.034912ms) STEP: deleting ReplicationController proxy-service-x2mc4 in namespace proxy-7186, will wait for the garbage collector to delete the pods Apr 21 14:28:23.626: INFO: Deleting ReplicationController proxy-service-x2mc4 took: 6.948854ms Apr 21 14:28:23.926: INFO: Terminating ReplicationController proxy-service-x2mc4 pods took: 300.302355ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 21 14:28:26.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7186" for this suite. 
Apr 21 14:28:32.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 21 14:28:32.624: INFO: namespace proxy-7186 deletion completed in 6.094065877s • [SLOW TEST:17.415 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 21 14:28:32.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-02c61e32-d7ab-4c38-8a74-e5fcccb6b2f1 STEP: Creating secret with name s-test-opt-upd-fa631a1a-d117-437a-919f-4324ace6dc25 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-02c61e32-d7ab-4c38-8a74-e5fcccb6b2f1 STEP: Updating secret s-test-opt-upd-fa631a1a-d117-437a-919f-4324ace6dc25 STEP: Creating secret with name s-test-opt-create-0eb20096-39bd-4087-90cf-dcad9465fde6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 14:30:07.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-186" for this suite.
Apr 21 14:30:29.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 14:30:29.332: INFO: namespace projected-186 deletion completed in 22.098998827s
• [SLOW TEST:116.707 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 14:30:29.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-2288f7eb-02b4-4f58-9b0d-d9784d2f3c45
STEP: Creating secret with name s-test-opt-upd-2c7ffefe-c10d-48db-bb56-b09d0ec9dd68
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-2288f7eb-02b4-4f58-9b0d-d9784d2f3c45
STEP: Updating secret s-test-opt-upd-2c7ffefe-c10d-48db-bb56-b09d0ec9dd68
STEP: Creating secret with name s-test-opt-create-c95c9a56-006a-4f5c-87f4-3d8593501dc4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 14:31:55.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6416" for this suite.
Apr 21 14:32:17.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 14:32:17.961: INFO: namespace secrets-6416 deletion completed in 22.102394297s
• [SLOW TEST:108.629 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 14:32:17.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 21 14:32:18.044: INFO: Waiting up to 5m0s for pod "downward-api-3ab8f9ca-2483-4d9e-8a50-9899ab915aa6" in namespace "downward-api-3883" to be "success or failure"
Apr 21 14:32:18.079: INFO: Pod "downward-api-3ab8f9ca-2483-4d9e-8a50-9899ab915aa6": Phase="Pending", Reason="", readiness=false. Elapsed: 35.183567ms
Apr 21 14:32:20.083: INFO: Pod "downward-api-3ab8f9ca-2483-4d9e-8a50-9899ab915aa6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038894241s
Apr 21 14:32:22.088: INFO: Pod "downward-api-3ab8f9ca-2483-4d9e-8a50-9899ab915aa6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043357121s
STEP: Saw pod success
Apr 21 14:32:22.088: INFO: Pod "downward-api-3ab8f9ca-2483-4d9e-8a50-9899ab915aa6" satisfied condition "success or failure"
Apr 21 14:32:22.091: INFO: Trying to get logs from node iruya-worker2 pod downward-api-3ab8f9ca-2483-4d9e-8a50-9899ab915aa6 container dapi-container:
STEP: delete the pod
Apr 21 14:32:22.145: INFO: Waiting for pod downward-api-3ab8f9ca-2483-4d9e-8a50-9899ab915aa6 to disappear
Apr 21 14:32:22.148: INFO: Pod downward-api-3ab8f9ca-2483-4d9e-8a50-9899ab915aa6 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 14:32:22.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3883" for this suite.
Apr 21 14:32:28.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 14:32:28.266: INFO: namespace downward-api-3883 deletion completed in 6.108384736s
• [SLOW TEST:10.304 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 14:32:28.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0421 14:32:58.872560 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 21 14:32:58.872: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 14:32:58.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8232" for this suite.
Apr 21 14:33:04.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 14:33:04.955: INFO: namespace gc-8232 deletion completed in 6.079593829s
• [SLOW TEST:36.687 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 14:33:04.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-050a8af5-8338-4900-b2ca-9491ff93a313
Apr 21 14:33:05.167: INFO: Pod name my-hostname-basic-050a8af5-8338-4900-b2ca-9491ff93a313: Found 0 pods out of 1
Apr 21 14:33:10.171: INFO: Pod name my-hostname-basic-050a8af5-8338-4900-b2ca-9491ff93a313: Found 1 pods out of 1
Apr 21 14:33:10.171: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-050a8af5-8338-4900-b2ca-9491ff93a313" are running
Apr 21 14:33:10.174: INFO: Pod "my-hostname-basic-050a8af5-8338-4900-b2ca-9491ff93a313-8qghr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-21 14:33:05 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-21 14:33:07 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-21 14:33:07 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-21 14:33:05 +0000 UTC Reason: Message:}])
Apr 21 14:33:10.174: INFO: Trying to dial the pod
Apr 21 14:33:15.186: INFO: Controller my-hostname-basic-050a8af5-8338-4900-b2ca-9491ff93a313: Got expected result from replica 1 [my-hostname-basic-050a8af5-8338-4900-b2ca-9491ff93a313-8qghr]: "my-hostname-basic-050a8af5-8338-4900-b2ca-9491ff93a313-8qghr", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 14:33:15.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4340" for this suite.
Apr 21 14:33:21.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 14:33:21.282: INFO: namespace replication-controller-4340 deletion completed in 6.092403494s
• [SLOW TEST:16.327 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 14:33:21.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-4b59dada-3c3e-44cd-9a14-9aa8cc451d86
STEP: Creating a pod to test consume configMaps
Apr 21 14:33:21.385: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a81c3ef7-3246-4aeb-ae3b-d108ceed8857" in namespace "projected-3433" to be "success or failure"
Apr 21 14:33:21.390: INFO: Pod "pod-projected-configmaps-a81c3ef7-3246-4aeb-ae3b-d108ceed8857": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332816ms
Apr 21 14:33:23.394: INFO: Pod "pod-projected-configmaps-a81c3ef7-3246-4aeb-ae3b-d108ceed8857": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008278668s
Apr 21 14:33:25.398: INFO: Pod "pod-projected-configmaps-a81c3ef7-3246-4aeb-ae3b-d108ceed8857": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012998608s
STEP: Saw pod success
Apr 21 14:33:25.398: INFO: Pod "pod-projected-configmaps-a81c3ef7-3246-4aeb-ae3b-d108ceed8857" satisfied condition "success or failure"
Apr 21 14:33:25.401: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-a81c3ef7-3246-4aeb-ae3b-d108ceed8857 container projected-configmap-volume-test:
STEP: delete the pod
Apr 21 14:33:25.422: INFO: Waiting for pod pod-projected-configmaps-a81c3ef7-3246-4aeb-ae3b-d108ceed8857 to disappear
Apr 21 14:33:25.426: INFO: Pod pod-projected-configmaps-a81c3ef7-3246-4aeb-ae3b-d108ceed8857 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 14:33:25.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3433" for this suite.
Apr 21 14:33:31.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 14:33:31.529: INFO: namespace projected-3433 deletion completed in 6.099877198s
• [SLOW TEST:10.247 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 14:33:31.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 21 14:33:31.585: INFO: Waiting up to 5m0s for pod "pod-f17cdac9-d3fa-4ec6-b062-a0acc3a49fde" in namespace "emptydir-9573" to be "success or failure"
Apr 21 14:33:31.588: INFO: Pod "pod-f17cdac9-d3fa-4ec6-b062-a0acc3a49fde": Phase="Pending", Reason="", readiness=false. Elapsed: 3.490522ms
Apr 21 14:33:33.592: INFO: Pod "pod-f17cdac9-d3fa-4ec6-b062-a0acc3a49fde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007210794s
Apr 21 14:33:35.597: INFO: Pod "pod-f17cdac9-d3fa-4ec6-b062-a0acc3a49fde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011903464s
STEP: Saw pod success
Apr 21 14:33:35.597: INFO: Pod "pod-f17cdac9-d3fa-4ec6-b062-a0acc3a49fde" satisfied condition "success or failure"
Apr 21 14:33:35.600: INFO: Trying to get logs from node iruya-worker pod pod-f17cdac9-d3fa-4ec6-b062-a0acc3a49fde container test-container:
STEP: delete the pod
Apr 21 14:33:35.620: INFO: Waiting for pod pod-f17cdac9-d3fa-4ec6-b062-a0acc3a49fde to disappear
Apr 21 14:33:35.624: INFO: Pod pod-f17cdac9-d3fa-4ec6-b062-a0acc3a49fde no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 14:33:35.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9573" for this suite.
Apr 21 14:33:41.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 14:33:42.463: INFO: namespace emptydir-9573 deletion completed in 6.836021126s
• [SLOW TEST:10.934 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 14:33:42.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Apr 21 14:33:42.530: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 14:33:48.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9916" for this suite.
Apr 21 14:34:12.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 14:34:13.043: INFO: namespace init-container-9916 deletion completed in 24.092951287s
• [SLOW TEST:30.579 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 14:34:13.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 21 14:34:17.640: INFO: Successfully updated pod "annotationupdatee0e48faf-5b2c-4929-bb3d-6451d528376f"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 14:34:19.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7523" for this suite.
Apr 21 14:34:41.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 14:34:41.785: INFO: namespace downward-api-7523 deletion completed in 22.097483922s
• [SLOW TEST:28.742 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 14:34:41.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 21 14:34:50.895: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 21 14:34:50.961: INFO: Pod pod-with-prestop-http-hook still exists
Apr 21 14:34:52.962: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 21 14:34:52.965: INFO: Pod pod-with-prestop-http-hook still exists
Apr 21 14:34:54.962: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 21 14:34:54.966: INFO: Pod pod-with-prestop-http-hook still exists
Apr 21 14:34:56.962: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 21 14:34:56.966: INFO: Pod pod-with-prestop-http-hook still exists
Apr 21 14:34:58.962: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 21 14:34:58.967: INFO: Pod pod-with-prestop-http-hook still exists
Apr 21 14:35:00.962: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 21 14:35:00.979: INFO: Pod pod-with-prestop-http-hook still exists
Apr 21 14:35:02.962: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 21 14:35:02.966: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 14:35:02.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1924" for this suite.
Apr 21 14:35:24.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 14:35:25.060: INFO: namespace container-lifecycle-hook-1924 deletion completed in 22.082953372s
• [SLOW TEST:43.273 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 14:35:25.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Apr 21 14:35:25.216: INFO: Waiting up to 5m0s for pod "client-containers-8f5dd33a-7964-48d8-a2fa-b4feca288483" in namespace "containers-6840" to be "success or failure"
Apr 21 14:35:25.225: INFO: Pod "client-containers-8f5dd33a-7964-48d8-a2fa-b4feca288483": Phase="Pending", Reason="", readiness=false. Elapsed: 9.330363ms
Apr 21 14:35:27.229: INFO: Pod "client-containers-8f5dd33a-7964-48d8-a2fa-b4feca288483": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012894423s
Apr 21 14:35:29.233: INFO: Pod "client-containers-8f5dd33a-7964-48d8-a2fa-b4feca288483": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016620652s
STEP: Saw pod success
Apr 21 14:35:29.233: INFO: Pod "client-containers-8f5dd33a-7964-48d8-a2fa-b4feca288483" satisfied condition "success or failure"
Apr 21 14:35:29.235: INFO: Trying to get logs from node iruya-worker2 pod client-containers-8f5dd33a-7964-48d8-a2fa-b4feca288483 container test-container:
STEP: delete the pod
Apr 21 14:35:29.248: INFO: Waiting for pod client-containers-8f5dd33a-7964-48d8-a2fa-b4feca288483 to disappear
Apr 21 14:35:29.253: INFO: Pod client-containers-8f5dd33a-7964-48d8-a2fa-b4feca288483 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 14:35:29.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6840" for this suite.
Apr 21 14:35:35.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 14:35:35.356: INFO: namespace containers-6840 deletion completed in 6.100039698s
• [SLOW TEST:10.295 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 14:35:35.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-2212
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2212 to expose endpoints map[]
Apr 21 14:35:35.474: INFO: Get endpoints failed (2.844782ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Apr 21 14:35:36.478: INFO: successfully validated that service multi-endpoint-test in namespace services-2212 exposes endpoints map[] (1.007168001s elapsed)
STEP: Creating pod pod1 in namespace services-2212
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2212 to expose endpoints map[pod1:[100]]
Apr 21 14:35:39.513: INFO: successfully validated that service multi-endpoint-test in namespace services-2212 exposes endpoints map[pod1:[100]] (3.027071245s elapsed)
STEP: Creating pod pod2 in namespace services-2212
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2212 to expose endpoints map[pod1:[100] pod2:[101]]
Apr 21 14:35:42.590: INFO: successfully validated that service multi-endpoint-test in namespace services-2212 exposes endpoints map[pod1:[100] pod2:[101]] (3.073379178s elapsed)
STEP: Deleting pod pod1 in namespace services-2212
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2212 to expose endpoints map[pod2:[101]]
Apr 21 14:35:43.748: INFO: successfully validated that service multi-endpoint-test in namespace services-2212 exposes endpoints map[pod2:[101]] (1.149895967s elapsed)
STEP: Deleting pod pod2 in namespace services-2212
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2212 to expose endpoints map[]
Apr 21 14:35:44.871: INFO: successfully validated that service multi-endpoint-test in namespace services-2212 exposes endpoints map[] (1.118250577s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 14:35:44.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2212" for this suite.
Apr 21 14:36:07.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 14:36:07.090: INFO: namespace services-2212 deletion completed in 22.150581624s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:31.734 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 21 14:36:07.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 21 14:36:07.169: INFO: Waiting up to 5m0s for pod "downward-api-09ba746d-4e5a-41e1-ba85-0e3b74bd60e3" in namespace "downward-api-1674" to be "success or failure"
Apr 21 14:36:07.178: INFO: Pod "downward-api-09ba746d-4e5a-41e1-ba85-0e3b74bd60e3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.200954ms
Apr 21 14:36:09.182: INFO: Pod "downward-api-09ba746d-4e5a-41e1-ba85-0e3b74bd60e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012532168s
Apr 21 14:36:11.186: INFO: Pod "downward-api-09ba746d-4e5a-41e1-ba85-0e3b74bd60e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016933818s
STEP: Saw pod success
Apr 21 14:36:11.186: INFO: Pod "downward-api-09ba746d-4e5a-41e1-ba85-0e3b74bd60e3" satisfied condition "success or failure"
Apr 21 14:36:11.189: INFO: Trying to get logs from node iruya-worker pod downward-api-09ba746d-4e5a-41e1-ba85-0e3b74bd60e3 container dapi-container:
STEP: delete the pod
Apr 21 14:36:11.208: INFO: Waiting for pod downward-api-09ba746d-4e5a-41e1-ba85-0e3b74bd60e3 to disappear
Apr 21 14:36:11.212: INFO: Pod downward-api-09ba746d-4e5a-41e1-ba85-0e3b74bd60e3 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 21 14:36:11.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1674" for this suite.
Apr 21 14:36:17.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 21 14:36:17.299: INFO: namespace downward-api-1674 deletion completed in 6.08483588s
• [SLOW TEST:10.209 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Apr 21 14:36:17.300: INFO: Running AfterSuite actions on all nodes
Apr 21 14:36:17.300: INFO: Running AfterSuite actions on node 1
Apr 21 14:36:17.300: INFO: Skipping dumping logs from cluster
Ran 215 of 4412 Specs in 6020.207 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS